I can vouch for this. I haven’t really needed the background blur feature personally, but I’ve tried it, and pretty much everyone I’ve talked to who has used it (myself, colleagues, and friends) loathes Google Meet’s background blur and prefers Zoom’s by far.
In my experience, it doesn’t completely cover the background most of the time, and if you move at all, as you point out, it can’t keep up.
Kind of funny to see Google engineering blogging about it when it feels extremely half baked.
This makes me sad, because in all other areas, I think Meet excels well beyond the competition.
"Half baked" misrepresents the difficulty of this task. Yes, Zoom does it better, but it's _still_ an excellent and interesting engineering accomplishment.
I've always wondered what proportion of modern real-time video effects rely on ML vs. classical image processing; this not only answers that question, but provides details down to the level of model architecture and the final latency and IOU benchmarks.
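For anyone unfamiliar with the IOU benchmark mentioned above: intersection-over-union measures how well a predicted segmentation mask (here, the person/background split) overlaps the ground-truth mask. A minimal sketch of the metric, assuming masks are given as NumPy boolean arrays (the function name `mask_iou` is my own, not from the blog post):

```python
import numpy as np

def mask_iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection-over-union between two segmentation masks.

    Both inputs are arrays of the same shape; nonzero entries
    mark pixels classified as foreground (the person).
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    intersection = np.logical_and(pred, truth).sum()
    return float(intersection / union)

# Example: masks agree on 1 of 2 foreground pixels -> IoU = 0.5
print(mask_iou(np.array([[1, 1], [0, 0]]),
               np.array([[1, 0], [0, 0]])))  # -> 0.5
```

An IoU of 1.0 means the predicted mask matches the ground truth exactly; values near 0.9+ are typical targets for real-time person segmentation.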
Of course I'd be more interested to read how Zoom manages to do even better, but I'm not holding my breath for them to publish those details.
EDIT: removed my general sentiment on Google