Following up on this point, we’ve updated our terms of service (in section 10.4) to further confirm that we will not use audio, video, or chat customer content to train our artificial intelligence models without your consent.
This addresses concerns about Zoom Video Communications, Inc. itself using e.g. recordings for purposes of training their own AI models. It does not address the potentially much greater risks arising from the company potentially selling access to the collection of zoom recordings to other companies for purposes of training AI models of such other companies. Here’s a somewhat-in-depth analysis: https://zoomai.info/
Thanks for following up, Michael, it is much appreciated. It does leave me (and judging by adjacent comments, also others) with questions, including:
* That wording seems very specific - is there a reason you did not just say "we will not use Customer Input or Customer Content to train our AI" given you have defined those terms? Are you leaving scope for something else (such as uploaded files or presentation content) to still be used?
* Can you also clarify exactly which (and whose) "consent" is applicable here? In meetings between multiple equal parties there may not be any one party with standing to consent for everyone involved. Your blog post seems to assume there can be, but the ToS don't appear to define "consent".
This is Michael Adams, Zoom’s CISO. I want to reiterate our thanks for your feedback and emphasize our continuing commitment to protecting our customers’ data as we make exciting improvements to Zoom products.
At Zoom, we do not use customer audio, video, and chat to train our generative AI models without customer consent.
>At Zoom, we do not use customer audio, video, and chat to train our generative AI models without customer consent.
What is the mechanism you use for "customer consent"?
IIUC, you have a pop-up at call initiation for which you either provide "consent," or drop the call, with no option to deny consent and continue.
If that's correct, then your definition of "customer consent" doesn't comport with the broadly understood idea of consent. Rather, it's closer to "if you enter my store, you consent to spending at least USD$20. If you don't actually spend that much on our offerings, we will charge the balance as a fee," than to true consent.
That's not to say that Zoom shouldn't have the right to require such consent to use their service, but based on my understanding, the mechanism for obtaining such consent is coercive and exploitative.
Most importantly - we’ve updated our terms of service (in section 10.4) to further confirm that we will not use audio, video, or chat customer content to train our artificial intelligence models without your consent.
I think a large number of people here will still find these terms lacking. Specifically, we will reject the idea that one account admin can grant consent on behalf of all users.
I would much prefer to see users empowered to not just be "notified" of an account admin's decision but to have per-user consent. Regardless of what my employer (or a collaborator's employer) might think, I do not consent to feeding my voice, video, or screenshare content into AI training.
And I think there should be a middle ground between "tolerate the admin's decision" and "don't participate". I understand certain scenarios require this, such as recording of all contributions to an important meeting. But, I don't think you should apply this heavy-handed approach to other derivative use. A user should be able to communicate with peers without being coerced into consent for all other usage an account admin might be interested in...
> At Zoom, we do not use customer audio, video, and chat to train our generative AI models without customer consent.
I think it is misleading to use the word "customer" here without drawing attention to the fact that it is not the meeting participant.
Is that intentional? Different people have different ideas of what "customer" means in the context of a discussion about an online meetings product, when they are not paying attention. As a CISO who is surely aware of social engineering's role in data security, I'm sure you know this.
And in a context where people individually agree to the terms of service by clicking "I agree", often on their own computer, people mostly think of the customer as themselves.
So who is meant by "customers" in your comment, and in your COO's comment, as the ones who consent (or not) to AI training on their data? All the people attending the meeting, or only the entity who owns the Zoom account?
Does each person attending a Zoom meeting have to give consent for their likeness and personal data to be used?
The people who actually care about their personal data being used in this way are the people attending the meeting. Everyone knows this intuitively, and that's why wordings around whose consent is involved need to be clear and unambiguous, and sound like weasel-wording if they are not, or if they imply it is delegated to one party in the meeting that the others may not entirely trust with that kind of data.
Scratch that. I read your link. It says quite clearly that "Zoom account owners and administrators control whether to enable these AI features for their accounts."
In other words, people attending the meeting are not in control of whether their personal likeness and data is used to train AI models.
Indeed that's the case. There is a reply to your COO's comment on here, saying that when a person joins a meeting they are given no option to consent or decline, except to leave the meeting.
In real life, that means things like a person attending a job interview online, who faces the surprise realisation at the moment it is about to start that they have no realistic choice but to continue with an AI being trained on how they talk and smile at their job interview. Or they start a job, and discover that's what's happening with their team standups. Not everyone will mind, but it sure feels like Zoom may be facilitating something more invasive than before, where you could normally assume a meeting was private to the participants and ephemeral, if it wasn't explicitly recorded.
I expect you're aware that people would not take that interpretation from the comment you posted here on a casual reading.
Please don't do that. If you must explain under what conditions Zoom uses user data, please don't use the kind of flexible wording, like "customer", that will cause many people to think each person installing Zoom and clicking "I agree" is in control. Please be more consistent in public statements by saying something more like:
The Zoom account owner and administrators control when the meeting participants' personal likeness and data is used for AI training.
Your COO's comment on here says Zoom "currently do not use audio, video or chat content to train AI models [and we would not do so without customer consent]". But your statement implies that Zoom does currently use audio, video and chat to train AI models under some circumstances.
That has led one commenter to say that your COO is straight up lying, but I would be inclined to say the wording urgently needs clarification, in plain English using words that people are not likely to misinterpret, or feel like they are being tricked by.
All this is not a good look if you care to maintain a trustworthy reputation that people using Zoom can have confidence in. As someone correctly pointed out, Zoom already has a tainted reputation around user data. I think your product is great and I use it often in large meetings. I prefer it over all alternatives I've tried because it works well. I also think Zoom has managed to mostly recover from previous reputation hits. But other products are catching up, and I'll end up advocating against Zoom myself if I think the C-suite is a tag team of weasel-worders who'd like to keep the door of ambiguity ajar on something as central to meetings as this. People's ability to feel safe when talking to each other is sacrosanct.
You can see that now clearly stated in our blog: https://blog.zoom.us/zooms-term-service-ai/