Hacker News | voxic11's comments

GDPR permits retention where necessary for compliance with a legal obligation (Article 6(1)(c)).

The AI Act qualifies as such a legal obligation.


No, but if you defame them and that causes others to boycott, then you could be legally responsible.

The idea is you wouldn't mix innerHTML and setHTML, you would eliminate all usage of innerHTML and use the new setHTMLUnsafe if you needed the old functionality.

I looked up setHTMLUnsafe on MDN, and it looks like it's been in every notable browser since last year.

Good idea to ship that one first, when it's easier to implement and is going to be the unsafe fallback going forward.


> I looked up setHTMLUnsafe on MDN, and it looks like it's been in every notable browser since last year.

Oddly though, the Sanitizer API that it's built on doesn't appear to be in Safari. https://developer.mozilla.org/en-US/docs/Web/API/Sanitizer


If I need the old functionality why not stick to innerHTML?

because the "unsafe" suffix conveys information to the reader, whereas `innerHTML` does not?

Any potential reader should be familiar with innerHTML.

Right. Like how any potential reader is familiar with the risks of sql injection which is why nothing has ever been hacked that way.

Or how any potential driver is familiar with seat belts which is why everybody wears them and nobody’s been thrown from a car since they were invented.


yes, and bugs shouldn't exist because everyone should be familiar with everything.

But if some things are marked unsafe and others are not, anything that isn't marked unsafe gives a false sense of security.

So we shouldn’t mark anything as unsafe then? And give no indication whatsoever?

The issue isn’t that the word “safe” doesn’t appear in safe variants, it’s that “unsafe” makes your intentions clear: “I know this is unsafe, but it’s fine because of X and Y”.


Maybe we should add the word "safe" and consider everything else unsafe.

Like in life, things should default to being safe. Unsafe, unexpected behaviours should be the exception and thus require an exceptional name.

Legacy and backwards compatibility hampers this, but going forward…


Because then your linter won't be able to tell you when you're done migrating the calls that can be migrated.

Because sooner or later it'll be removed.

No, because the web has to remain backwards compatible with older sites. This has always been the case.

And break millions of sites?

You can't rename an existing method. It would break compatibility with existing websites.

> you would eliminate all usage of innerHTML

The mythical refactor where all deprecated code is replaced with modern code. I'm not sure it has ever happened.

I don't have an alternative of course, adding new methods while keeping the old ones is the only way to edit an append-only standard like the web.


If you want to adopt this in your project, you can add a linter that explicitly bans innerHTML (and then go fix the issues it finds). Obviously Mozilla cannot magically fix the code of every website on the web but the tools exist for _your_ website.
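As a sketch of what that linter ban could look like, assuming ESLint with a flat config file (`no-restricted-properties` is a core ESLint rule; the file name and message are illustrative):

```javascript
// eslint.config.js — ban innerHTML so the linter tracks the migration
export default [
  {
    rules: {
      "no-restricted-properties": ["error", {
        property: "innerHTML",
        message: "Use setHTML(), or setHTMLUnsafe() where raw parsing is required.",
      }],
    },
  },
];
```

Every remaining `el.innerHTML` access then shows up as a lint error, so "done migrating" becomes a checkable state rather than a guess.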

I kinda like the way JS evolved into a modern language, where essentially ~everyone uses a linter that e.g. prevents the use of `var`. Sure, it's technically still in the language, but it's almost never used anymore.

(Assuming transpilers have stopped outputting it, which I'm not confident about.)


Depending on the transpiler and mode of operation, `var` is sometimes emitted.

For example, esbuild will emit var when targeting ESM, for performance and minification reasons. Because ESM has its own inherent scope barrier, this is fine, but it won't apply the same optimizations when targeting (e.g.) IIFE, because it's not fine in that context.

https://github.com/evanw/esbuild/issues/1301


for some values of "everyone" and "never".


Ah yeah, I remember that. General point still stands: in terms of the lived experience of developers, `var` is essentially deprecated.

I touch JS that uses var heavily on a daily basis and I would be incredibly surprised to find out that I am alone in that.

That is indeed why I added qualifiers to "everyone" and "never".

It for sure happens for drop-in replacements.

Nobody's talking about old code here.

Having an alternative to innerHTML means you can ban it from new code through linting.


Finally, a good use case for AI.

Yeah, using a kilowatt GPU for string replacement is going to be the killer feature. I probably shouldn't even be joking, people are using it like this already

This ship has sailed unfortunately; just yesterday I saw coworkers redact a screenshot using ChatGPT.

When the condition for what you want to replace is hard to properly specify, AI shines at such find-and-replaces.

This one is literally matching "innerHTML = X" and setting "setHTML(X)" instead. Not some complex data format transformation.

But I can see what you mean. Even then, it would still be better for it to print the code that does what you want (uses a few Wh) than to do the actual transformation itself (prone to mistakes and injection attacks, and uses however many tokens your input data is).
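For the purely mechanical cases, the "print the code" version is a one-liner. This is a naive sketch (a real migration should use an AST tool, since a regex misses multi-line or computed assignments, and setHTML() sanitizes where innerHTML did not, so each call site still needs review):

```javascript
// Naive codemod: rewrite simple single-line innerHTML assignments to setHTML().
function migrate(source) {
  return source.replace(
    /(\w+)\.innerHTML\s*=\s*([^;]+);/g,
    "$1.setHTML($2);"
  );
}

console.log(migrate("el.innerHTML = userInput;"));
// el.setHTML(userInput);
```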


That can break the site if you do the find and replace blindly. The goal here is to do the refactor without breaking the site.

> When the condition for when you want to replace is hard to properly specify, AI shines for such find and replaces.

And, in your opinion, this is one of those cases?


It is because the new API purposefully blocks things the old API did not.

Wouldn't AI be trained on data using innerHTML?

My experience is that they somehow print quite modern code, despite things like ES6 being too new to be standard knowledge even for me, and I'm not even middle-aged yet.

Maybe the last 10 years saw so much more modern code than the previous 40+ years of coding combined that modern code is statistically more likely to be output? Or maybe they assign higher weights to more recent commits/sources during training? Not sure, but it seems to be good at picking this up. And you can always feed the info into its context window until then.


This is not my experience. Claude has been happily generating code over the past week that is full of implicit any and using code that's been deprecated for at least 2 years.

>> Maybe the last 10 years saw so much more modern code than the last cumulative 40+ years of coding and so modern code is statistically more likely to be output?

The rate of change has made defining "modern" even more difficult and the timeframe brief, plus all that new code is based on old code, so it's more like a leaning tower than some sort of solid foundation.


ES6 is 11 years old. It's not that new.

Hence the example of how long it takes non-LLMs to pick that up, whereas LLMs seem to get it despite there being loads of old code out there

See also my reply to the sibling comment with the same remark https://news.ycombinator.com/item?id=47151211

My mistake for saying 10 instead of 11 years btw, but I don't think it changes the point


> "ES6 being too new to be standard knowledge"

Huh? It's been a decade.


Exactly. I learned to code JS before 2015 (it was my first language, picked up during what is probably called middle school in English). I haven't had to learn it again from scratch, so I need to go out of my way to find out whether there is a better way to do a thing I can already do fine. It's not automatic knowledge, yet the LLM seems to have no trouble with it, so I'm pointing out that they don't seem to have problems upgrading. The grandparent comment suggested it would need to be trained anew to use this new method instead. Given how much old (non-ES6) JS there is, apparently it picks this up quite easily, so any update that includes some amount of the new code will probably do it just fine.

Which is why it can easily understand how innerHTML is being used so that it can replace it with the right thing.

Honest question: Is there a way to get an LLM to stop emitting deprecated code?

Theoretically, if you could train your own and remove all references to the deprecated code in the training data, it wouldn't be able to emit deprecated code. Realistically, that ability is out of reach at the hobbyist level, so it will have to remain theoretical for at least a few more iterations of Moore's law.

You can use a custom domain that you own with gmail. But of course domains aren't that great either as they are only somewhat decentralized and it's still pretty easy to lose your domain.


When I've looked into these cases, it often seems that there are additional issues at play, like harassment/stalking of exes. So the prosecutor is thinking they can get an easy plea deal on the "real" case by piling on additional charges.


What I don't understand about this approach is: if it's truly completely privacy preserving, what stops me from making a service where anyone can use my ID to verify? If the site owner really learns nothing about me except my age, then they can't tell that it's the same ID being used for every account. And if the government truly knows nothing about the sites I verify on, they can't tell that I'm misusing the ID either. So someone must know more than you are letting on.


One possible solution idea I just had is having the option of registered providers (such as Discord). They would have a public key, and the user would have a private key associated with their eID. These could be mingled in such a way as to create a unique identifier, which would be stored by the provider (and ofc the scheme would be such that the provider can verify that the mingled identifier was created from a valid private key and their public key).

This would in total make sure that only one account can be created with the private key, while exposing no information about the private key, aka the user, to the provider. I am fairly certain that should work with our cryptographic tools. It would ofc put the trust on the user not to share their eID private key, but that is needed anyway. Either you manage it or it gets managed (and you lose some degree of privacy).


The hole is closed with per-site pseudonyms. Your wallet generates a unique cryptographic key pair for each site so same person + same site = same pseudonym, same person + different sites = different, unlinkable pseudonyms.

"The actual correct way" is an overstatement that misses jfaganel99's point. There are always tradeoffs. EUDI is no exception. It sacrifices full anonymity to prevent credential sharing so the site can't learn your identity, but it can recognize you across visits and build a behavioral profile under your pseudonym.


Ok but we were talking about users on discord who have to verify their age. I was under the impression that

> it can recognize you across visits and build a behavioral profile under your pseudonym

is the default Discord experience for users with an account, long before age verification entered the chat.


presumably you'd just use unique one time codes derived from the eID


I fail to see how that solves the problem. That's what I'm saying my service would provide. Unless the eID has some kind of client-side rate limiting built in, I can generate as many of them as I want. And assuming they are completely privacy preserving, no one can tell they were all generated by the same ID.


https://github.com/eu-digital-identity-wallet/av-doc-technic...

> Since Proof of Age Attestations are designed for single use, the system must support the issuance of attestations in batches. It is recommended that each batch consist of thirty (30) attestations.

It sounds like the application would request a batch of time-limited proofs from a government server. Proofs get burned after a single use. Whether or not you've used any, the app just requests another batch at a set interval (e.g. 30 once a month). So you're rate limited on the backend.

Edit: it seems issuing proofs is not limited to the government; e.g. banks you're a client of can also supply you with proofs (if they want to partake for some reason). I guess that would multiply the number of proofs available to you.
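A toy model of that batch flow (names, the batch size of 30, and the plain strings are illustrative; the real system issues signed attestations, not opaque strings):

```javascript
// Wallet-side model of single-use attestation batches: the issuer refills
// the batch at a set interval, and each proof is burned after one use.
class AttestationWallet {
  constructor() { this.batch = []; }
  refill(n = 30) {              // e.g. one batch per month from the issuer
    for (let i = 0; i < n; i++) this.batch.push(`proof-${this.batch.length}`);
  }
  present() {                   // hand one proof to a relying party, then burn it
    if (this.batch.length === 0) throw new Error("batch exhausted; await refill");
    return this.batch.pop();
  }
}

const wallet = new AttestationWallet();
wallet.refill();
wallet.present();
console.log(wallet.batch.length); // 29 remain until the next refill
```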


Ok, I have been convinced this is a technically feasible solution that could preserve privacy while reasonably limiting misuse. That said, I'm worried that the document you linked does not require that relying parties implement the zero-knowledge proof approach. It only requires that they implement the attestation bearer-token approach, which is much weaker and allows the government to unmask an account by simply asking the relying party which attestation token was submitted to verify the account.

> Relying Party SHALL implement the protocols specified in Annex A for Proof of Age attestation presentation.

> A Relying Party SHOULD implement the Zero-Knowledge Proof verification mechanism specified in Annex A


You could do some scheme that hashes a site-specific identifier with an identifier on the smart element of the ID.

If that ever repeats, the same ID was used twice. At the same time, the site ID would act as a salt to prevent simple matching between services.


People do, in fact, have multiple profiles. For very valid reasons.


The solution to this seems to be to issue multiple "IDs". So essentially the government mints you a batch of like 30 "IDs" and you can use each of those once per service to verify an account (30 verified accounts per service). That allows for the use case of needing to verify multiple accounts without allowing you to verify unlimited accounts (and therefore run into the large-scale misuse issue I pointed out).

If you need to verify even more accounts the government can have some annoying process for you to request another batch of IDs.


This is a solved problem in the authentication space. Short lived tokens backed by short lived keys.

A token is generated that has a timestamp and is signed by a private key with payload.

The public key is available through a public api. You throw out any token older than 30 seconds.

Unlimited IDs.

That's basically what you want.


That either allows a fingerprint of the signing key to be used for the same purpose,

or opens the system up to the originally posted attack of providing ~an open relay.


Characters are copyrightable; it's a similar situation to song compositions vs song masters. There is the copyright of the original picture/song master, but separately there is the copyright of the song composition/character. Making your own work derived from the same character or song composition is still a derivative work, even if it doesn't directly copy the song master/original picture.


No, not in general anyways. There are specific regulations on smelting pennies and nickels (and it's specifically nickels, not half dimes).

So you can smelt any silver coins minted by the US except for the WW2 silver nickel.


I think it just means each starlink satellite has multiple star trackers. Probably pointed in different directions so that if one is blinded by the sun the others can still see the stars.


