I wouldn't be surprised if we saw a headline in a few years when we find out other actors (e.g. China, Russia) have been buying this data en-masse too.
A lot of them don't know they're doing it. The tracking is embedded in dependencies of dependencies: SDKs added for legitimate purposes. Along the way the data is passed from platform to platform: analytics, ad targeting, and eventually data brokers. Data brokers then sell it to other data brokers or to governments.
If you're lucky, it's pseudonymous. Of course it actually isn't: aggregated location data is inherently not anonymous.
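A toy sketch of why (all names, IDs, and grid cells below are made up): take each "anonymous" device's most frequent nighttime location, and a public address registry is enough to put a name on it.

```python
from collections import Counter

# Pseudonymous location pings: (device_id, hour_of_day, coarse grid cell).
# No names attached anywhere in this dataset.
pings = [
    ("device-a41f", 2, "grid-771"), ("device-a41f", 3, "grid-771"),
    ("device-a41f", 14, "grid-209"), ("device-a41f", 23, "grid-771"),
    ("device-9c02", 1, "grid-350"), ("device-9c02", 4, "grid-350"),
    ("device-9c02", 13, "grid-118"),
]

# Side data the attacker already has: who lives where (toy registry).
registry = {"grid-771": "Alice", "grid-350": "Bob"}

def infer_home(device_pings):
    """Most frequent location during nighttime hours (22:00-06:00)."""
    night = [loc for hour, loc in device_pings if hour >= 22 or hour <= 6]
    return Counter(night).most_common(1)[0][0]

by_device = {}
for dev, hour, loc in pings:
    by_device.setdefault(dev, []).append((hour, loc))

# The "anonymous" IDs map straight back to people.
identified = {dev: registry.get(infer_home(p)) for dev, p in by_device.items()}
print(identified)
```

Home plus workplace is usually already a unique pair; with months of pings per device, the real attack needs nothing cleverer than this.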
Yes. The French newspaper Le Monde recently ran a piece on how easy it was to trace the every move, and find the home addresses, of sensitive people (elite special forces, nuclear submarine engineers, the president's bodyguards, etc.) by exploiting a data broker's free sample.
They were stunned to see Le Monde's own app appear as a source inside that Excel file, because of the SDKs embedded in it.
Because capitalism would happily burn the world to ash if the capitalists thought it would make them richer. It makes them think they are winning at life.
Did you request an SSL certificate? Those are publicly logged (Certificate Transparency), and actors use those logs to scan any newly registered hostname for known vulnerabilities and other misconfigurations. I suspect you're just looking at standard internet noise :)
Do you run a dedicated "AI SRE" instance for each customer or how do you ensure there is no potential for cross-contamination or data leakage across customers?
Basically, how do you make sure your "AI SRE" doesn't deviate from its task and cause mayhem in the VM, or worse: exfiltrate secrets or do other nasty things? :)
We run a dedicated AI SRE per customer instance, with creds scoped to just that instance. OpenClaw by nature carries security risks, so we want to limit those as much as possible. We only provision integrations the user has explicitly configured.
I think the underlying point is valid. Agents are a potential tool to add to your arsenal in addition to "throw shit at the wall and see what sticks" tools like WebInspect, Appscan, Qualys, and Acunetix.
Feel free to correct me, but the ML classifier appears rather bare: fewer than 20 hardcoded payloads, with randomized URL encoding as the only augmentation. How does that generalize to novel evasion techniques? Genuinely curious what your eval numbers look like against real traffic.
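To make the concern concrete (the payload and augmentation function here are illustrative, not taken from your code): randomized percent-encoding collapses back to the original string after a single decode pass, so if traffic is normalized before featurization, those variants add almost no diversity, while a trivially different evasion like double encoding falls outside the augmented set entirely.

```python
import random
from urllib.parse import quote, unquote

def random_percent_encode(payload, p=0.5, seed=0):
    """Hypothetical cheap augmentation: percent-encode each char with prob p."""
    rng = random.Random(seed)
    return "".join(quote(ch, safe="") if rng.random() < p else ch
                   for ch in payload)

payload = "' OR 1=1 --"
variants = [random_percent_encode(payload, seed=s) for s in range(5)]

# Every randomized variant normalizes back to the same canonical string...
assert all(unquote(v) == payload for v in variants)

# ...but a double-encoded evasion survives one decode pass untouched:
double = quote(quote(payload, safe=""), safe="")
assert unquote(double) != payload
assert unquote(unquote(double)) == payload
```

In other words, under a normalizing pipeline those ~20 payloads are effectively ~20 training points, which is why eval numbers against real traffic (rather than held-out augmentations) would be the telling metric.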