All that law says is that the applicant 'shall have the right to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken.'
And even then, only if a job application rejection 'produces legal effects or similarly significantly affects that person in a way that they consider to have an adverse impact on their health, safety or fundamental rights'.
So as long as the company is recording the decisions taken and the reasons for those decisions, and providing those to candidates on request, they're in the clear.
If they're using an LLM to make those decisions, then they're fundamentally unable to provide the real reasons for those decisions, because of how LLMs work: any rationale the model outputs is a plausible-sounding generation, not a faithful trace of how it actually reached the decision.
Not to mention you can't trust that the AI is actually filtering out applications properly. I've run into that myself when I was responsible for hiring at my last role. The AI solution my boss insisted we use was awful. It highly rated completely unqualified applicants and ignored the few good ones.
FYI, the very recently released Marathon with the BattlEye rootkit works fine on a maximally trimmed-down Windows 10 LTSC, which is what I'm running on my PC (personal console).
Windows 10 LTSC is not available outside of volume licensing.
Pirating an OS they refuse to sell to you in order to get a better experience is your choice, but it's unrealistic to suggest it as a solution for the average person.
You're probably breaking EU law by building this nightmare.
https://artificialintelligenceact.eu/article/86/