Apparently this prohibition only applies to "situations related to the workplace and education", and, even in that context, "That prohibition should not cover AI systems placed on the market strictly for medical or safety reasons".
So it seems to be possible to use this in a personal context.
https://artificialintelligenceact.eu/recital/44/

> Therefore, the placing on the market, the putting into service, or the use of AI systems intended to be used to detect the emotional state of individuals in situations related to the workplace and education should be prohibited. That prohibition should not cover AI systems placed on the market strictly for medical or safety reasons, such as systems intended for therapeutical use.
This is true, though it may not make sense commercially for them to offer an API that can't be used for workplace (business) applications or education.
I see what you mean, but I think that "workplace" specifically refers to the context of the workplace, so that an employer cannot use AI to monitor their employees, even if the employees have been pressured to agree to such monitoring. I think this is unrelated to "commercially offering services which can detect emotions".
But then I don't get the spirit of that limitation, as it should be just as applicable to TVs listening in on your conversations and trying to infer your emotions. Then again, I guess that for these cases there are other rules in place which prohibit doing this without the explicit consent of the user.
In a nutshell, this uncertainty is why firms are going to slow-roll the EU rollout of AI and, for designated gatekeepers, of other features as well. Until there is a body of litigated cases to use as a reference, companies would be placing themselves on the hook for tremendous fines, not to mention the distraction for their executives.
Which, not making any value judgement here, is the point of these laws: to slow down innovation so that society, government, and regulation can digest new technologies. This is the intended effect, and the laws are working.
Companies like OpenAI definitely have the resources to have some lawyers analyze the situation, and at this point it should be clear to them whether they can or can't do this. It's far more likely that they're holding back because of limitations in hardware resources.
I use those words because I've never read any of the points in the EU AIA.
They definitely do have the resources, but laws and regulations are frequently ambiguous. This is one reason the outcome of litigation is often unpredictable.
I would wager this -- OpenAI's lawyers have looked at the situation. They have not been able to credibly say "yes, this is okay", and so management has made the decision to wait. Obviously, they would prefer to compete in Europe if it were a no-brainer decision.
It may well be that the path to getting to "yes, definitely" includes some amount of discussion with the relevant EU authorities and/or product modification. These things will take time.