> if they don't want to sell you something because you are anonymous, they have the right to reject your browser requests. That's an important cornerstone of capitalism.
Maybe in a perfect world where healthy competition is a thing. But (potentially due to under-regulation elsewhere) that's not what we have in practice - for a lot of services, you only really have a choice between a handful of providers, and you're out of luck if they all decide to stalk and spam you.
Competition is not currently an effective solution to data protection, so something else was needed. The GDPR's approach is to outlaw personal data and spam as a payment method - you can't make non-functionally-required data processing mandatory for using a given service or product. I think it's a good approach - less spam, less tracking, and fewer incentives to hoard personal data are always a good thing.
> They can train that model and throw away the training data, and then hand that model off to any regulator that is going to have no idea what to do with a bunch of floats in matrices, and claim that there is no user data stored in there, and nobody could prove otherwise.
At least in theory, the regulator should be able to see through that scheme. But even if we assume they actually did train an ML model and got away with it, the GDPR mandates that users be able to decide how their personal data is processed, so they can simply not opt into targeted advertising, and their personal data then must not be processed by that model. The model can be there, it'll just sit unused.
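To make that concrete, here's a minimal sketch (with entirely hypothetical names - `User`, `targeting_model`, the consent flag - not anyone's real ad stack) of what consent-gated processing looks like: the trained model can exist, but the personal data of a user who never opted in never flows through it.

```python
# Minimal sketch of consent-gated processing under an opt-in regime.
# All names (User, targeting_model, the consent flag) are hypothetical;
# this only illustrates the "model exists but sits unused" point above.
from dataclasses import dataclass


@dataclass
class User:
    user_id: str
    opted_into_targeting: bool  # explicit, freely given opt-in
    interests: list[str]        # personal data derived from behaviour


def pick_ads(user: User, targeting_model, generic_ads: list[str]) -> list[str]:
    if not user.opted_into_targeting:
        # No opt-in: the user's personal data must not flow into the model,
        # so fall back to ads that need no personal data at all.
        return generic_ads
    # Opt-in given: the model may process this user's personal data.
    return targeting_model.predict(user.interests)


# A user who never opted in gets the generic ads, regardless of whether
# a trained targeting model is sitting on the shelf.
print(pick_ads(User("u1", False, ["hiking", "coffee"]), None, ["ad_a", "ad_b"]))
```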
Article 21 of the GDPR allows an individual to object to the processing of personal information for marketing or non-service-related purposes.[24] This means the data controller must grant an individual the right to stop or prevent the controller from processing their personal data.
There are some instances where this objection does not apply. For example, if:
1. Legal or official authority is being carried out
2. "Legitimate interest" applies, i.e. the organisation needs to process the data in order to provide the data subject with a service they signed up for
3. A task is being carried out in the public interest
2 is the key here. Build an entire ML model that generates a website layout based on the request, claim it's core business logic, and, oh, by the way, it just happens to load advertisements from companies based on this contextual data - but that contextual data has nothing to do with the user. Look, we don't store any advertising cookies or session data, we don't request any either, and here is our model. Investigate it as much as you want; we don't have the training data anymore because we delete it.
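For what it's worth, the "contextual only" part of that pitch is easy to sketch: ad selection keyed entirely on the requested page, with no cookies, no identifiers, and nothing persisted per visitor. The keyword table and function name below are made up purely for illustration, assuming the operator's claim is taken at face value.

```python
# Hypothetical sketch of purely contextual ad selection: the inputs are
# the request and the page being served, never the person asking.
CONTEXTUAL_ADS = {
    "hiking": ["boots_ad", "tent_ad"],
    "cooking": ["knife_ad", "pan_ad"],
}

DEFAULT_ADS = ["generic_ad"]


def ads_for_request(path: str, page_text: str) -> list[str]:
    """Choose ads from the page content, not from who is requesting it."""
    text = (path + " " + page_text).lower()
    for topic, ads in CONTEXTUAL_ADS.items():
        if topic in text:
            return ads
    return DEFAULT_ADS


# The same request always yields the same ads, whoever sends it, because
# the function never sees an IP address, cookie, or account.
print(ads_for_request("/articles/hiking-gear", "Best trails this summer"))
```

The open question in the comment above is, of course, whether the model behind such a scheme was trained on personal data in the first place - the sketch only shows the serving side that gets presented to the regulator.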