The idea is that you wouldn't mix `innerHTML` and `setHTML`; you would eliminate all usage of `innerHTML` and use the new `setHTMLUnsafe` if you needed the old functionality.
Right. Like how any potential reader is familiar with the risks of SQL injection, which is why nothing has ever been hacked that way.
Or how any potential driver is familiar with seat belts which is why everybody wears them and nobody’s been thrown from a car since they were invented.
So we shouldn’t mark anything as unsafe then? And give no indication whatsoever?
The issue isn’t that the word “safe” doesn’t appear in the safe variants; it’s that “unsafe” makes your intentions clear: “I know this is unsafe, but it’s fine because of X and Y”.
If you want to adopt this in your project, you can add a linter rule that explicitly bans `innerHTML` (and then go fix the issues it finds). Obviously Mozilla cannot magically fix the code of every website on the web, but the tools exist for _your_ website.
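For instance, ESLint's core `no-restricted-properties` rule can ban the property outright; a minimal flat-config sketch (the message text is just a suggestion):

```javascript
// eslint.config.js — ban all innerHTML access project-wide.
// no-restricted-properties is a core ESLint rule; restricting the
// property alone catches both reads and assignments.
export default [
  {
    rules: {
      'no-restricted-properties': ['error', {
        property: 'innerHTML',
        message: 'Use setHTML(), or setHTMLUnsafe() with a justifying comment.',
      }],
    },
  },
];
```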
I kinda like the way JS evolved into a modern language, where essentially ~everyone uses a linter that e.g. prevents the use of `var`. Sure, it's technically still in the language, but it's almost never used anymore.
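The classic difference such a lint rule guards against fits in a few lines:

```javascript
// `var` is function-scoped: every closure captures the same i,
// which has already reached 3 by the time the callbacks run.
const withVar = [];
for (var i = 0; i < 3; i++) withVar.push(() => i);

// `let` is block-scoped: each iteration gets a fresh binding of j.
const withLet = [];
for (let j = 0; j < 3; j++) withLet.push(() => j);

console.log(withVar.map(f => f())); // [3, 3, 3]
console.log(withLet.map(f => f())); // [0, 1, 2]
```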
(Assuming transpilers have stopped outputting it, which I'm not confident about.)
Depending on the transpiler and mode of operation, `var` is sometimes emitted.
For example, esbuild will emit `var` when targeting ESM, for performance and minification reasons. Because ESM has its own inherent scope barrier, this is fine; but esbuild won't apply the same optimizations when targeting (e.g.) an IIFE, because it's not fine in that context.
Yeah, using a kilowatt GPU for string replacement is going to be the killer feature. I probably shouldn't even be joking; people are using it like this already.
This one is literally matching `innerHTML = X` and rewriting it to `setHTML(X)`. Not some complex data-format transformation.
But I can see what you mean: even then, it would still be better for it to print the code that does what you want (a few Wh) than to do the actual transformation itself (prone to mistakes and injection attacks, and costing however many tokens your input data is).
My experience is that they somehow print quite modern code, despite things like ES6 being too new to be standard knowledge even for me, and I'm not even middle-aged yet.
Maybe the last 10 years saw so much more modern code than the last cumulative 40+ years of coding and so modern code is statistically more likely to be output? Or maybe they assign higher weights to more recent commits/sources during training? Not sure, but it seems to be good at picking this up. And you can always feed the info into its context window until then.
This is not my experience. Claude has been happily generating code over the past week that is full of implicit `any` and calls APIs that have been deprecated for at least 2 years.
>> Maybe the last 10 years saw so much more modern code than the last cumulative 40+ years of coding and so modern code is statistically more likely to be output?
The rate of change has made defining "modern" even more difficult and the timeframe brief; plus, all that new code is based on old code, so it's more like a leaning tower than some sort of solid foundation.
Exactly. I learned to code JS before 2015 (it was my first language, picked up during what is probably called middle school in English). I haven't had to relearn it from scratch, so I need to go out of my way to find out whether there is a better way to do the thing I can already do fine. It's not automatic knowledge, yet the LLM seems to have no trouble with it, so I'm pointing out that they don't seem to have problems upgrading. The grandparent comment suggested it would need to be trained anew to use this new method instead. Given how much old (pre-ES6) JS there is, it apparently picks this up quite easily, so any update that includes some amount of this new code will probably do just fine.
Theoretically, if you could train your own and remove all references to the deprecated code from the training data, it wouldn't be able to emit deprecated code. Realistically, that ability is out of reach at the hobbyist level, so it will have to remain theoretical for at least a few more iterations of Moore's law.
You can use a custom domain that you own with Gmail. But of course domains aren't that great either: they are only somewhat decentralized, and it's still pretty easy to lose your domain.
When I've looked into these cases, it often seems there are additional issues at play, like harassment/stalking of exes. So the prosecutor is thinking they can get an easy plea deal on the "real" case by piling on additional charges.
What I don't understand about this approach is: if it's truly completely privacy-preserving, what stops me from making a service where anyone can use my ID to verify? If the site owner really learns nothing about me except my age, then they can't tell that it's the same ID being used for every account. And if the government truly knows nothing about the sites I verify on, they can't tell that I'm misusing the ID either. So someone must know more than you are letting on.
One possible solution idea I just had is having the option of registered providers (such as Discord). They would have a public key, and the user would have a private key associated with their eID. These could be combined in such a way as to create a unique identifier, which would be stored by the provider (and ofc the scheme would be such that the provider can verify that the combined identifier was created from a valid private key and their public key).
This would ensure that only one account can be created with the private key, while exposing no information about the private key (i.e. the user) to the provider. I am fairly certain that should work with our cryptographic tools. It would ofc place the trust on the user not to share their eID private key, but that is needed anyway. Either you manage it or it gets managed (and you lose some degree of privacy).
The hole is closed with per-site pseudonyms: your wallet generates a unique cryptographic key pair for each site, so same person + same site = same pseudonym, while same person + different sites = different, unlinkable pseudonyms.
"The actual correct way" is an overstatement that misses jfaganel99's point. There are always tradeoffs. EUDI is no exception. It sacrifices full anonymity to prevent credential sharing so the site can't learn your identity, but it can recognize you across visits and build a behavioral profile under your pseudonym.
I fail to see how that solves the problem: that's exactly what I'm saying my service would provide. Unless the eID has some kind of client-side rate limiting built in, I can generate as many of them as I want. And assuming they are completely privacy-preserving, no one can tell they were all generated by the same ID.
> Since Proof of Age Attestations are designed for single use, the system must support the issuance of attestations in batches. It is recommended that each batch consist of thirty (30) attestations.
It sounds like the application would request a batch of time-limited proofs from a government server. Proofs get burned after a single use. Whether or not you've used any, the app just requests another batch at a set interval (e.g. 30 once a month). So you're rate-limited on the backend.
Edit: it seems issuing proofs is not limited to the government; e.g. banks you're a client of can also supply you with proofs (if they want to partake for some reason). I guess that would multiply the number of proofs available to you.
Ok, I have been convinced this is a technically feasible solution that could preserve privacy while reasonably limiting misuse. That said, I'm worried that the document you linked does not require relying parties to implement the zero-knowledge-proof approach. It only requires that they implement the attestation bearer-token approach, which is much weaker and allows the government to unmask an account simply by asking the relying party which attestation token was submitted to verify the account.
> Relying Party SHALL implement the protocols specified in Annex A for Proof of Age attestation presentation.
> A Relying Party SHOULD implement the Zero-Knowledge Proof verification mechanism specified in Annex A
The solution to this seems to be to issue multiple "IDs". Essentially, the government mints you a batch of, say, 30 "IDs", and you can use each of those once per service to verify an account (30 verified accounts per service). That allows for the use case of needing to verify multiple accounts without allowing you to verify unlimited accounts (and therefore running into the large-scale misuse issue I pointed out).
If you need to verify even more accounts, the government can have some annoying process for you to request another batch of IDs.
Characters are copyrightable; it's a similar situation to song compositions vs. song masters. There is the copyright of the original picture/song master, but separately there is the copyright of the character/song composition. Making your own work derived from the same character or song composition is still a derivative work, even if it doesn't directly copy the song master/original picture.
I think it just means each Starlink satellite has multiple star trackers, probably pointed in different directions so that if one is blinded by the Sun the others can still see the stars.
The AI Act qualifies as such a legal obligation.