I think this is wrong. Google is a competitor both in devices and in the OS for mobile devices. Apple charge a premium that they justify with superior features, ease of use, effortless integration with other Apple products and so on. I wonder how well they will be able to produce differentiating iOS AI features whilst they use Gemini. I suspect they will have more or less parity with Android devices. If more and more interactions with the device occur through this AI interface, I wonder what that does to the perception of Apple products. I suppose they already have the worst AI voice assistant and it hasn't damaged them all that much.
Google is not really a competitor to Apple in devices. I mean, they sell devices, but at a way lower volume. The Pixel phone is essentially a tech demo that exists to push their Android partners into making more competitive devices themselves.
The corporate strategies are not directly comparable. The entire Android project is essentially a loss leader to feed data back into Google’s centralized platform, which makes money on ads and services. Apple, by contrast, makes money directly from device sales, supported by decentralized services.
Apple never produced a differentiated experience in search or social, two of the largest tech industries by revenue. Yet Apple grew dramatically during that time. Siri might never be any better than Google’s own assistant, and it might never matter.
Your framing fits well for the Nexus era and even the earliest Pixel iterations, where Google’s hardware largely functioned as a reference implementation and ecosystem lever, nudging OEMs into making better devices.
However, the current Pixel strategy appears materially (no pun intended) different. Rather than serving as an “early adopter” pathfinder for the broader ecosystem, Pixel increasingly positions itself as the canonical expression of Android—the device on which the “true” Android experience is defined and delivered. Far from nudging OEMs, it's Google desperately reclaiming strategic control over their own platform.
By tightening the integration between hardware, software, and first-party silicon, Google appears to be pursuing the same structural advantages that underpin Apple’s hardware–software symbiosis. The last few generations of Pixel are, effectively, Google becoming more like Apple.
I think you're assuming that no durable or at-scale changes in compute form factor will occur, so that Apple's success comes down solely to differentiated iPhone software features. That seems unlikely to me. I don't see phones going away in the next decade like some have predicted, but I do think new compute form factors are going to start proliferating once a certain technological "take off" point is reached.
The broader point I'm making is that Apple likely couldn't do all the other things they're excelling at right now and compete head-on with Google / OpenAI / Anthropic on frontier AI. Strategically, I think they have more wiggle room on the latter for now than many give them credit for so long as they continue innovating in their core space, and I think those core innovations are yielding synergies with AI that they would've lost out on if they'd pivoted years ago to just training frontier LLMs. There's a very real risk that if they'd poured resources into LLMs too early, they would've ended up liquidating their reserves in a race-to-the-bottom on AI against competitors who specialize in it, while losing their advantages in fundamental devices and compute form factors over time.
Have you bothered to look at all? Read the output of the model when asked why it behaves the way it does. Look at the plethora of images it generates that are not just historically inaccurate but absurdly so. It tells you "here's a diverse X" when you ask for X. Yet asking for pictures of Koreans generates only Asian people, while prompts for Scots or French people in historical periods generate mostly non-white people. You're being purposefully obtuse: Google has had racism complaints about previous models and talks often about AI safety and avoiding 'bias'. You're trying to argue that it's more likely the training data ended up, purely by chance, with an inherent bias against generating white people in images?
I use it for in-component logic in Vue. I used to have a collection of boolean or enum variables that represented the state of the component and changed in response to actions, e.g. const isLoadingUser = ref(true). This works, but it often allows unforeseen states and just isn't very clear. In contrast, with xstate you define all states and legal transitions, wire everything up to send events, and can write guards that prevent transitions. I moved all my complicated multi-stage forms to xstate, and since doing so they've become substantially more robust. I highly recommend trying it out.
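To make that concrete, here's a minimal sketch of the kind of machine I mean, using XState v5's createMachine/createActor API. The machine name, the states, the events, and the guard are all invented for illustration; a real multi-stage form would have more of everything:

    import { createMachine, createActor } from 'xstate';

    // Hypothetical checkout form: every state and every legal transition
    // is declared up front, so "impossible" combinations can't occur.
    const formMachine = createMachine({
      id: 'checkoutForm',
      initial: 'details',
      types: {} as {
        events:
          | { type: 'NEXT'; valid: boolean }
          | { type: 'BACK' }
          | { type: 'SUBMIT' }
          | { type: 'SUCCESS' }
          | { type: 'FAILURE' };
      },
      states: {
        details: {
          on: {
            // Guarded transition: ignored unless the step validates.
            NEXT: { target: 'payment', guard: ({ event }) => event.valid },
          },
        },
        payment: {
          on: {
            BACK: { target: 'details' },
            SUBMIT: { target: 'submitting' },
          },
        },
        submitting: {
          // No BACK or NEXT here, so states like "editing while a request
          // is in flight" simply cannot be reached.
          on: { SUCCESS: 'done', FAILURE: 'payment' },
        },
        done: { type: 'final' },
      },
    });

    const actor = createActor(formMachine);
    actor.start();
    actor.send({ type: 'NEXT', valid: false }); // rejected by the guard, still in 'details'
    actor.send({ type: 'NEXT', valid: true });  // now in 'payment'
    console.log(actor.getSnapshot().value);     // 'payment'

In a real Vue component you'd typically drive this with useMachine from @xstate/vue rather than creating the actor by hand, but the machine definition itself is the same.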
> There was a lot of evidence supporting the wet market theory at the time and supporting that it wasn't made in a lab
You do realise that when people say it was a lab leak, they don't necessarily mean it was made in a lab. They are merely saying that the origin of the pandemic was a virus escaping the nearby Wuhan Institute of Virology via one of the workers/researchers there.
You’re over-generalizing about “people”. The majority, or at least the loudest, of the lab leak proponents also say it was a Chinese-designed bioweapon intentionally released in China to attack the US. They have been moving the goalposts, but your more reasonable take is not at all universal.
Yeah, who cares about making needless changes to art. A lot of European art is white-centric; maybe we should paint other races into it to make it more inclusive?
There is a proposed method to sequester carbon and reduce ocean acidification by running this process, extracting the hydrogen and chlorine (or hydrochloric acid) for industrial purposes, and releasing the sodium hydroxide to absorb dissolved CO2 (carbonic acid) as sodium carbonate.
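Assuming the process in question is ordinary chlor-alkali electrolysis of brine, the rough chemistry would be something like:

    2 NaCl + 2 H2O --electrolysis--> Cl2 + H2 + 2 NaOH   (chlorine, hydrogen, and lye)
    H2 + Cl2 -> 2 HCl                                     (if hydrochloric acid is the desired product)
    2 NaOH + CO2 -> Na2CO3 + H2O                          (released lye absorbs dissolved CO2 as sodium carbonate)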
It was mind-blowing for me at the time! Subsequent approaches to accelerated silicate weathering like Project Vesta dropped the chemical component and just used mechanical crushing of rock to speed up weathering. The all-mechanical approach is less complicated and less energy-intensive.
If you're extracting chlorine just to react it with rock, it probably makes more sense to react the rock directly. But if you can use the chlorine industrially, it makes more sense not to involve the mining and transport of rock. There's currently a significant chlorine and HCl industry that isn't linked to any carbon sequestration process, and that's what we could supplant.
One could also release the chlorine into the atmosphere to destroy atmospheric methane. Elemental chlorine in sunlight is rapidly (within minutes) broken down into chlorine atoms. These atoms, being free radicals, efficiently extract hydrogen from methane molecules, starting a chain of reactions that converts the remaining fragment to CO2 and water.
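A simplified sketch of that chain (the later oxidation steps are condensed):

    Cl2 + sunlight -> 2 Cl·          (photolysis, within minutes)
    Cl· + CH4 -> HCl + CH3·          (hydrogen abstraction)
    CH3· + O2 -> ... -> CO2 + H2O    (further oxidation via CH3O2·, formaldehyde, CO)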
You'd need a hell of a lot of chlorine to compensate for current methane injection, though.
No, chlorine radicals will also deplete the ozone layer.
Besides, I don't think releasing highly reactive gases into the atmosphere at any concentration is a good idea; there could be other effects we haven't studied well enough.
Methane is present throughout the troposphere -- it has an atmospheric lifetime of something over a decade and becomes well mixed. You'd want to release the chlorine in a sufficiently dilute and dispersed form that it didn't overwhelm the methane in the air into which it was released.
This needs a massive energy input; where would that come from? It also involves massive amounts of toxic chemicals. Use biology to solve climate change instead: grow biomass (bio-CCS) in the form of kelp, diatoms, and other fast-growing life (maybe GMO) and sink it to the bottom of the ocean.
> But if they manage to get H2 from it then the sodium, oxygen, and chlorine have to bind into something else than usual I guess.
It's the same electrolysis process used today in swimming pools to generate chlorine. The hydrogen evaporates, the chlorine ends up in the water, and the chemical reaction ends up producing the same salt it started with (roughly sketched below). You don't have to add salt[0] and the chlorine pretty quickly evaporates and breaks down in sunlight if you don't put cyanuric acid in the water to bind it.
Mostly it could be ignored, though if the chlorine level got high enough it would kill the organic things.
[0] except for losses due to reasons other than chlorine generation
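A rough sketch of the cycle in a saltwater chlorinator (net reactions only, ignoring the sanitizing reactions that also return hypochlorite to chloride):

    NaCl + H2O --electrolysis--> NaOCl + H2   (hypochlorite stays in the water, hydrogen gas bubbles off)
    2 NaOCl --sunlight--> 2 NaCl + O2         (photodecomposition back to the original salt)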
Musk is clearly an incredibly brittle narcissist even by billionaire standards, and he is probably the person least suited to owning a platform like Twitter because he seems to misunderstand the trade-offs inherent in the social media business. I find the cult of sycophants that idolise Musk as the real-life Tony Stark nauseating. The hero worship from people claiming he's saving free speech by making the platform more restrictive than it's ever been has been hilarious to watch.
However, there was no viable EV market prior to Musk. The US wasn't launching rockets into space before SpaceX. Those are two things that many people thought weren't possible. Pretending that life is just a series of coin flips we can't influence, and that all the people who've achieved more than us are merely more fortunate rather than more deserving, is just a coping mechanism. Almost nobody would have made the investments he made given the same opportunities.
I want free speech, hence I think Elon is a tremendous hypocrite for enacting this policy. Once you start deciding whether or not things are 'safe' to say, you will end up in the exact situation Jack et al. were in when they were censoring, just with different biases. He doesn't seem to understand that and is doomed to repeat their mistakes.
There is some irony in now seeing those who didn't believe the arbitrary banning of accounts was an issue under previous management decrying this move by Elon.
> There is some irony in now seeing those who didn't believe the arbitrary banning of accounts was an issue under previous management decrying this move by Elon.
No, the irony is not that the site under both owners is trying to remove bad/harmful content (just defining it differently).
The irony is that Musk thought he wasn’t going to have to do it at all: “absolute free speech”, “public square”, “comedy is legal”, etc.
One of the banned journalists went on Mastodon and said (paraphrasing): “It’s his site and he can ban whoever he wants”
And to be fair, under both owners, accounts were banned for violating ToS policies. The policies are just different, but they’re still the rules you agree to when you use the site.
I just don’t think anyone thought “free speech” meant no parodying, no republishing public FAA info, etc.
Many journalists are singularly obsessed with the eradication of 'harmful' or 'unsafe' accounts from Twitter. They are particularly concerned about doxxing when it happens to journalists or to political figures they're sympathetic to. Technically all home addresses are public information, just as FAA data is. Yet people get rather nervous when their home address ends up on the internet, and rightly so.
Their entire argument is about preventing exactly the sort of thing Musk alleges happened to a car carrying his child: real-world harm from online activity. So why exactly are they upset about this change in policy, which, while clearly motivated by self-interest rather than any principle, technically aligns with some of their goals? It's because they want to be able to doxx people they think deserve it. Because when they doxx it's journalism, but when their enemies doxx it's stochastic terrorism.