
Sometimes doing nothing is the winning move.




Look at Magic Cue in this year's Android update

> Magic Cue - Magic Cue proactively surfaces relevant info and suggests actions, similar to how Apple's personalized Siri features were supposed to work. It can display flight information when you call an airline, or cue up a photo if a friend asks for an image.

https://www.macrumors.com/2025/08/20/google-pixel-10-ai-feat...

Google shipped it, despite it not working.

> I spent a month with the Pixel 10's most hyped AI feature, and it hasn't gone well

https://www.androidauthority.com/google-pixel-10-magic-cue-o...

Likewise Daily Hub didn't work but was shipped anyway.

> In our testing, Daily Hub rarely showed anything beyond the weather, suggested videos, and AI search prompts. When it did integrate calendar data, it seemed unable to differentiate between the user’s own calendar and data from shared calendars. This largely useless report was pushed to the At a Glance widget multiple times per day, making it more of a nuisance than helpful.

https://arstechnica.com/google/2025/09/google-pulls-daily-hu...

Apple announced that the Siri update didn't work well enough to ship, and didn't ship it.


...as I wrote, they don't do "nothing".

They roll out AI-capable hardware to consumers now, to be put to use once their service is ready, with users paying for that rollout in the meantime.

Meanwhile they have started to deploy a marketplace ecosystem for AI tasks on iOS, where Apple has a right of first refusal, allowing the user to select a (revenue-share-vetted) 3rd-party provider to complete the task.

So until Apple is ready, the user can select OpenAI (or soon other providers) to fulfill an AI task, and Apple will collect metrics on the demand for each type of task.

This will help them prioritize the development of their own models, and eventually use their own marketplace rules to redirect that business away from third parties and toward themselves.

My guess is that they will offer a mixed on-device/cloud AI service that uses the end user's hardware where possible, offloading compute from their clouds onto the end user's hardware and energy bill, with a "cheap" subscription price undercutting others on that AI marketplace.
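
To make that hand-off idea concrete, here is a minimal Swift sketch of what such an on-device-first router could look like. The FoundationModels calls (SystemLanguageModel, LanguageModelSession) are, as far as I know, Apple's real on-device API from iOS 26; everything else (the ThirdPartyAIProvider protocol, the metrics line) is invented purely to illustrate the speculation above and is not anything Apple actually exposes.

    import FoundationModels

    // Hypothetical: a third-party provider the user picked on the "AI marketplace".
    // No such public API exists today; this only illustrates the idea.
    protocol ThirdPartyAIProvider {
        var name: String { get }
        func fulfill(_ prompt: String) async throws -> String
    }

    struct AITaskRouter {
        let fallback: ThirdPartyAIProvider   // e.g. OpenAI today, others later

        func handle(_ prompt: String) async throws -> String {
            let model = SystemLanguageModel.default

            // First refusal: try the on-device model whenever it is available
            // (a real product would presumably also gate on task complexity),
            // running on the user's hardware and energy bill.
            if case .available = model.availability {
                let session = LanguageModelSession(
                    instructions: "Answer briefly, using on-device knowledge only."
                )
                return try await session.respond(to: prompt).content
            }

            // Otherwise hand the task off to the user-selected provider and
            // (hypothetically) note which kinds of tasks had to be outsourced.
            print("metrics: outsourced task to \(fallback.name)")  // stand-in for telemetry
            return try await fallback.fulfill(prompt)
        }
    }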


It isn't clear to me that Apple will ever pursue their own chatbot like Gemini, ChatGPT, etc. There's lots of potential for on-device AI functions without any of them ever becoming a general-purpose agent that tries to do everything. AI and LLMs are not synonymous.

From a UX perspective, they already have Siri for that.

You are just making things up in this grand AI strategy you have imagined for Apple. I cannot "fulfill an AI-task" with my phone because the overpaid idiots building it in Cupertino bought into the trainwreck that is Siri years ago. So now I cannot "select my favorite AI provider" from the "marketplace ecosystem for AI tasks" to "fulfill an AI-task", nor will a meddling middle manager in the Loop collect metrics on the demand for "my AI tasks".

And now they are converting Siri into an orchestrator to "broker" between the user and the AI providers for a revenue share, because they are not ready to compete in that space themselves...

see here: https://news.ycombinator.com/item?id=46210481


Assuming that Apple takes a 30% revenue share from other AI service providers on their AI marketplace, once they are ready they can easily offer lower pricing than anyone else and still retain a higher profit margin.
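
To put illustrative numbers on that (the 30% cut and the price points are assumptions for the sake of argument, not known terms):

    Competitor subscription:  $20/mo  - 30% marketplace cut  ->  competitor nets $14/mo
    Apple's own offering:     $15/mo  - no cut to anyone     ->  Apple nets      $15/mo

In that scenario Apple undercuts the competitor's retail price by 25% and still keeps more per user, on top of the $6/mo it collects from everyone who stays with the competitor.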

But for this to make economic sense, the "AI bubble" may need to burst first, forcing the competitors to actually provide their services at a profit.

Until then, it might be more profitable to just forward AI tasks to OpenAI and others and let them burn more money.


> once they are ready they can easily offer lower pricing than anyone else

Do you have any evidence whatsoever that could back up this claim? It feels like you're just saying this because you want it to be true, not because you have any concrete proof that Apple can sell competitive inference.


> Do you have any evidence whatsoever that could back up this claim? It feels like you're just saying this because you want it to be true, not because you have any concrete proof that Apple can sell competitive inference.

Sorry, I didn't mean to claim that Apple's A/M-series chips will be competitive on inference performance compared to other solutions. There isn't sufficient data for that at the moment. But that is not the competition I expect to happen.

I expect them to stifle competition and set themselves up as the primary player for AI services in the Apple ecosystem, simply because they are making "Apple Intelligence" an ecosystem orchestration layer (and thus themselves the gatekeeper).

1. They made a deal with OpenAI to close Apple's competitive gap on consumer AI, allowing users to upgrade to paid ChatGPT subscriptions from within the iOS menu. OpenAI has to pay at least (!) the usual revenue share for this, but considering that Apple integrated them directly into iOS, I'm sure OpenAI has to pay MORE than that. (This is also supported by the fact that OpenAI doesn't allow users to upgrade to the $200 Pro tier using this path, only the $20 Plus tier.) [1]

2. Apple's integration is set up to collect data from this AI digital market they created: their legal text for the initial release with OpenAI already states that all requests sent to ChatGPT are first evaluated by "Apple Intelligence & Siri" and that "your request is analyzed to determine whether ChatGPT might have useful results" [2]. This architecture requires (!) them to collect and analyze data about the types of requests, and it also gives them a right of first refusal on every task.

3. Developers are "encouraged" to integrate Apple Intelligence right into their apps [3]. This will have AI tasks evaluated by Apple first.

4. Apple has confirmed that they are interested in enabling other AI providers via the same path [4].

--> Apple will be the gatekeeper that decides whether to fulfill a task themselves or offer the user the option to hand it off to a 3rd-party service provider.

--> Apple will be in control of the "Neural Engine" on the device, and I expect them to use it to run inference models they create based on the statistics from step #2 above.

--> I expect that AI orchestration, including training those models and distributing/maintaining them on the devices, will be a significant part of Apple's AI strategy (a rough sketch of that loop follows below). This could cover a lot of text and image processing and already significantly reduce their datacenter costs for cloud-based AI services. For the remaining, more compute-intensive AI services, they will be able to closely monitor (via step #2 above) when it becomes most economical to in-source a service instead of "just" collecting a revenue share for it (via step #1 above).
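
Here is a rough Swift sketch of what that "collect demand metrics, then in-source the profitable categories" loop could look like. Everything in it, the task categories, the counters, the economics, is invented for illustration; it is not a real Apple API and all figures are made up.

    // Hypothetical task categories the gatekeeper could classify requests into.
    enum AITaskCategory: String, CaseIterable {
        case summarizeText, editImage, writeCode, generalChat
    }

    // Hypothetical demand tracker: counts how often each category had to be
    // handed off to a third-party provider (step #2 above).
    struct DemandTracker {
        private(set) var outsourcedCount: [AITaskCategory: Int] = [:]

        mutating func recordHandOff(_ category: AITaskCategory) {
            outsourcedCount[category, default: 0] += 1
        }

        // Toy in-sourcing rule: a category is worth serving with an own model once
        // the extra margin (own service vs. just taking a revenue share, step #1)
        // across its observed volume outweighs the fixed cost of building and
        // hosting that model. All figures are placeholders.
        func inSourcingCandidates(extraMarginPerTask: Double,
                                  fixedCostOfOwnModel: Double) -> [AITaskCategory] {
            outsourcedCount
                .filter { Double($0.value) * extraMarginPerTask > fixedCostOfOwnModel }
                .map { $0.key }
        }
    }

    // Made-up usage: high-volume categories surface as in-sourcing candidates first.
    var tracker = DemandTracker()
    (0..<10_000).forEach { _ in tracker.recordHandOff(.summarizeText) }
    (0..<50).forEach { _ in tracker.recordHandOff(.editImage) }
    // 10_000 * 0.002 = 20 > 10, while 50 * 0.002 = 0.1 < 10
    print(tracker.inSourcingCandidates(extraMarginPerTask: 0.002, fixedCostOfOwnModel: 10))
    // -> only summarizeText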

[1] https://help.openai.com/en/articles/7905739-chatgpt-ios-app-...

[2] https://www.apple.com/legal/privacy/data/en/chatgpt-extensio...

[3] https://developer.apple.com/apple-intelligence/

[4] https://9to5mac.com/2024/06/10/craig-federighi-says-apple-ho...


"A strange game. The only winning move is not to play."


