
True - but what model would they want to use on-device?

All the feed-ranking stuff is done server-side, I assume. And apart from that, the app is just for posting and reading text and pictures. I don't really see any use for ML there.



> True - but what model would they want to use on-device?

Video/image filters. Very popular on Snapchat, Instagram, TikTok, etc. If you want to compete with those services, you need them.


All sorts of crap like facial recognition or object identification.

Every bit of ML you can offload on your clients is one you don't have to pay for.


If one could profitably offload serverside CPU to the client then every app would be mining bitcoins on-device.

But it turns out that user complaints of a laggy app, hot phone and bad battery life (and therefore lost users and lost ad revenue) far outweigh any CPU time savings on the server.


I have a suspicion that it could be something that does voice recognition via gyroscope data. I believe so for a few reasons:

1. The gyroscope produces a large volume of data every millisecond, so it doesn't make sense to send all of it to the server.

2. You only need to recognise certain keywords/movements of the user for somewhat accurate ad targeting and behavioural analysis.

3. You don't need to prompt the user to allow access to it.
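Point 1 can be sanity-checked with rough arithmetic. The sample rate and encoding below are illustrative assumptions, not measured figures from any real app, but they show why streaming raw sensor data to a server is unattractive compared with processing it on-device:

```python
# Back-of-envelope: raw gyroscope traffic if streamed to a server.
# 200 Hz and 3 x float32 per sample are assumptions for illustration.
SAMPLE_RATE_HZ = 200      # plausible high-rate mobile gyroscope
AXES = 3                  # x, y, z angular velocity
BYTES_PER_SAMPLE = 4      # float32 per axis

bytes_per_second = SAMPLE_RATE_HZ * AXES * BYTES_PER_SAMPLE
bytes_per_day = bytes_per_second * 60 * 60 * 24

print(f"{bytes_per_second} B/s, ~{bytes_per_day / 1e6:.0f} MB/day")
```

Roughly 200 MB/day of raw samples per user under these assumptions — far cheaper to run a small classifier on-device and ship only the extracted events.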



