Hacker News

I can vouch for this. I haven’t really needed the background blur feature personally, but I’ve tried it, and colleagues, friends, and pretty much everyone I’ve talked to who has used it loathes Google Meet’s background blur and prefers Zoom’s by far.

In my experience, it doesn’t completely cover the background most of the time, and if you move at all, as you point out, it can’t keep up.

Kind of funny to see Google engineering blogging about it when it feels so half baked.

This makes me sad, because in every other area I think Meet excels well beyond the competition.

EDIT: removed my general sentiment on Google



"Half baked" misrepresents the difficulty of this task. Yes, Zoom does it better, but it's _still_ an excellent and interesting engineering accomplishment.

I've always wondered what proportion of modern real-time video effects rely on ML vs. classical image processing; this not only answers that question, but also provides details down to the level of model architecture and the final latency and IoU benchmarks.
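For what it's worth, the two usually work together: the ML model only produces a per-pixel person mask, and the blur itself is classical compositing. A rough sketch in plain NumPy (a box blur standing in for whatever blur kernel a real pipeline uses, and the mask assumed to come from a segmentation model):

```python
import numpy as np

def box_blur(img, k=15):
    """Separable box blur; a cheap stand-in for a real Gaussian/bokeh blur.
    k must be odd. img is an (H, W, C) float array."""
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    kernel = np.ones(k) / k
    # blur along width, then height (separable filtering)
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, out)
    return out

def blur_background(frame, mask, k=15):
    """Composite: keep the person, blur everything else.
    mask is (H, W) float in [0, 1], where 1 = person (from the ML model)."""
    blurred = box_blur(frame.astype(float), k)
    m = mask[..., None]  # broadcast the mask over the color channels
    return m * frame + (1.0 - m) * blurred
```

A soft (non-binary) mask is what makes the edges look acceptable; the "can't keep up" artifacts people complain about come from the mask lagging the frame, not from the compositing step.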

Of course I'd be more interested to read how Zoom manages to do even better, but I'm not holding my breath for them to publish those details.


> I've always wondered what proportion of modern real-time video effects rely on ML vs. classical image processing;

ML is a classical tool for image processing; what do you mean here?


"Turns out our training set was composed entirely of backdrops from Google HQ. Sorry, everyone else!"


Everyone is wfh @ Google



