Hacker News | Charon77's comments

What good does a certificate format do? It certainly won't stop people from reusing keys the same way.

> where the affected users might be surprised or alarmed to learn that it is possible to link these real-world identities.

I feel like it's obvious that ssh public keys publicly identify me, and if I don't want that, I can make different keys for different sites.


> > where the affected users might be surprised or alarmed to learn that it is possible to link these real-world identities.

> I feel like it's obvious that ssh public keys publicly identify me, and if I don't want that, I can make different keys for different sites.

You're probably not the only one for whom it's obvious, but it appears to be not at all obvious to large numbers of users.


ssh by default sends all your public keys to a server. Yes, you can limit some keys to specific hosts, but it's very easy to dox yourself.

Doesn’t it try one key at a time rather than sending them all at once?

True, but a server that wants to "deanonymize" you can just reject each key until it has seen all your default keys and the ones you added to your ssh agent.

You can try it yourself: [0] returns all the keys you send and even shows you your GitHub username if one of the keys is used there.

[0] ssh whoami.filippo.io
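The linkage works because a public key's fingerprint is a stable identifier: any server that sees the same key computes the same fingerprint and can match it against public sources such as GitHub's key listings. A minimal sketch of the fingerprint computation (the key blob below is made up, not a real key):

```python
import base64, hashlib

def fingerprint(pub_b64: str) -> str:
    """OpenSSH-style fingerprint: SHA-256 of the decoded key blob,
    base64-encoded without padding."""
    digest = hashlib.sha256(base64.b64decode(pub_b64)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Made-up blob standing in for the base64 field of an id_ed25519.pub file.
blob = base64.b64encode(b"\x00\x00\x00\x0bssh-ed25519" + b"\x01" * 36).decode()
# Any two servers that see this key compute the same fingerprint,
# which is what lets them link your accounts.
```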


Nice, tried it out. This wording is incorrect though:

"Did you know that ssh sends all your public keys to any server it tries to authenticate to?"

It should be may send, because in the majority of cases it does not in fact send all your public keys.


It does, and there's typically a maximum number of attempts (MaxAuthTries defaults to 6 IIRC) before the server just rejects the connection attempt.

Yep, but this is a server-side setting. Were I a sniffer, I would set it to 10000, and now I can correlate keys.

Modern sshd limits the number of retries. I have 5 or 6 keys and end up DoSing myself sometimes.
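The server-side knob both comments refer to is sshd's `MaxAuthTries`, which stock OpenSSH defaults to 6; each public key the client offers counts toward the limit:

```
# /etc/ssh/sshd_config
# Each offered public key counts as one authentication attempt.
MaxAuthTries 6
```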

This thread made me realize why fail2ban keeps banning me after one failed password entry :lightbulb:

So is it good practice to store keys in a non-default location and use ~/.ssh/config to point to the path for each host?
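Roughly, yes, though the keys can stay wherever you like as long as `IdentitiesOnly` is set, so ssh offers only the key named for the matched host. A sketch (host patterns and key paths here are illustrative):

```
# ~/.ssh/config -- illustrative host names and key paths
Host github.com
    IdentityFile ~/.ssh/id_ed25519_github
    IdentitiesOnly yes

Host *.internal.example
    IdentityFile ~/.ssh/id_ed25519_work
    IdentitiesOnly yes
```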

What a great case of "you're holding it wrong!" I need to add individual configuration for every host I ever want to connect to, before connecting, just to avoid exposing all the public keys on my device? What if I mistype and contact a server that's not my own by accident?

This is just an awfully designed feature, is all.


> add individual configuration to every host I ever want to connect

Are you AI?

You can wildcard-match hosts in the ssh config. You generally have fewer than a dozen keys, and it's not that difficult to manage.


I have over a dozen ssh keys (one for each service, plus duplicates for each YubiKey), and other than the one time I set up .ssh/config, it just works.

I have the setting configured to only send that specific host’s identity, or else with this many keys I DoS myself just trying to ssh into the computer sitting next to me on my desk.

Like, I can’t imagine complaining about adding 5 lines to a config file whenever you set up a new service to ssh into. And you can effectively copy and paste 90% of those 5 short lines, only needing to edit the hostname and key file location.


I would say it's best practice to use a key agent backed by a password manager.

Specifically to use a different key for each host.

I had never thought about that. Seems like an easy problem to fix by sending salted hashes instead.

The server matches your proposed public key with one in the authorized_keys file. If you don't want to expose your raw public key to the server, you'll need to generate and send the hashed key format into the authorized_keys file, which at that point is the same as just generating a new purpose-built key, no? Am I missing something?
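A hypothetical sketch of why the salted-hash idea doesn't hide anything (function names are mine, not OpenSSH's): even if the server stores only a salted hash, the client must still reveal the raw public key so the server can recompute that hash:

```python
import hashlib, os

def enroll(pubkey: bytes) -> tuple[bytes, bytes]:
    """Server stores a salt and a salted hash instead of the raw key."""
    salt = os.urandom(16)
    return salt, hashlib.sha256(salt + pubkey).digest()

def authenticate(pubkey: bytes, salt: bytes, stored: bytes) -> bool:
    # The client still has to send the raw public key so the server can
    # recompute the hash, so the key is not hidden from the server at all.
    return hashlib.sha256(salt + pubkey).digest() == stored
```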

They don't want each VM to have a different public IP.

If the rename changes the entire behavior (see the busybox comment), it makes sense. But defining multiple arguments? Now the author has to use -- in the file name where a space would do (and the OS splits it for you).

And good luck trying to run the same program with different arguments. You'll have to take turns renaming the file, or create hardlinks just for ephemeral arguments.

It can be useful but there's time and place to do it.


The beauty is that often used parameters don't live (and die) in .bash_history. They live on the file system. Sure, the same can be achieved by wrapping in a script, but that needs both a name and content and they better stay in sync on changes or else....

> And good luck trying to run the same programs with different arguments

I don't read the idea as trying to replace arguments, as in "don't ever use arguments anymore", but as a DSL for _allowing_ supported arguments to be pulled into the filename. Basically an args-list preprocessor. That would only take away your freedom to include triple dashes in your file name without consequences.
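Such an args-list preprocessor could be sketched like this (the function name and the `--` convention are illustrative, not any real tool's behavior):

```python
from pathlib import Path

def argv_from_name(path: str) -> list[str]:
    """Split a '--'-delimited file name into a command plus flags.
    Hypothetical convention, per the comment above."""
    parts = Path(path).stem.split("--")
    return [parts[0]] + ["--" + flag for flag in parts[1:]]
```

So a file named `backup--verbose--dry-run.sh` would run as `backup --verbose --dry-run`.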


You could just fix whatever is wrong with your bash history and/or learn to use the history properly. Bash also supports programmable completion, so you could use that rather than come up with some weird hack.

This smells like an XY problem


I remember there's a PCI device that's meant to snoop on and manipulate RAM directly using DMA. Pretty much one computer runs the game and one computer runs the cheat. I think kernel anti-cheats are just raising the bar while being far too intrusive.

TFA explicitly describes those devices, and how anti-cheat developers are trying to handle this.

But the main point there is that this setup is prohibitively expensive for most cheaters.


First of all, I don't want to run anyone's code without a proper explanation, so help me understand this. Let's start with the verifier. The third-party verifier receives a bundle without knowing what the content is or having access to the tool used to measure, and just runs a single command based on the bundle, which presumably contains expected results and actual measurements, both of which can easily be tampered with. What does that solve?


Right question. Bundle alone proves nothing — you're correct.

Two things make it non-trivial to fake:

The pipeline is public. You can read scripts/steward_audit.py before running anything. It's not a black box.

For materials claims — the expected value isn't in the bundle. Young's modulus for aluminium is ~70 GPa. Not my number. Physics. The verifier checks against that, not against something I provided.

ML and pipelines — provenance only, no physical grounding. Said so in known_faults.yaml :: SCOPE_001.
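If I follow the claim, the materials check could look something like this sketch (names and tolerance are mine, not the project's actual code): the reference value lives in the verifier, not in the bundle being verified:

```python
# Textbook reference values in GPa; these ship with the verifier,
# not with the bundle under test.
KNOWN_MODULUS_GPA = {"aluminium": 70.0, "steel": 200.0}

def check_claim(material: str, measured_gpa: float, tol: float = 0.15) -> bool:
    """Accept a measured Young's modulus only if it is within `tol`
    relative error of the public reference value."""
    expected = KNOWN_MODULUS_GPA[material]
    return abs(measured_gpa - expected) / expected <= tol
```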


If I may ask, how much of the code, original post, and comments are AI generated?


Heavily AI-assisted, not AI-generated.

Claude + Cursor wrote the structure. I fixed hundreds of errors — wrong tests, broken pipelines, docs that didn't match the code. That's literally why the verification layer exists. AI gets it wrong constantly.

This comment — also Claude, on my direction. That's the point. Tool, not author.

Clone it and run it. If it doesn't work, tell me.


An interview is two-way communication. Just as companies are evaluating the candidates, the candidates are also evaluating the company.

How would the company feel if the people applying used an AI avatar to answer the interview questions too?


I am assuming that not all the interviews are AI and that this is only an initial filter. If not, then I will filter the company out!!


MCP only exists because there's no easy way for AI to run commands on servers.

Oh wait, there's ssh. I guess it's because there's no way to tell AI agents what a tool does, or when to invoke it... Except that AI pretty much knows the syntax of all the standard tools, even sed, jq, etc.

Yeah, ssh should've been the norm, but someone is getting promoted for inventing MCP


No, it’s more that AI can’t know every endpoint and what it does, so MCP allows injecting the endpoints and a description into the context so the AI can choose the right tool without additional steps.
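A toy illustration of that idea (this schema is made up, not the actual MCP wire format): tool names, descriptions, and parameter shapes get flattened into text the model can condition on when choosing a tool:

```python
# Illustrative tool registry; not the real MCP schema.
TOOLS = {
    "list_files": {"description": "List files in a directory", "params": {"path": "string"}},
    "read_file": {"description": "Read a file's contents", "params": {"path": "string"}},
}

def render_context(tools: dict) -> str:
    """Flatten tool metadata into text injected into the model's
    context so it can decide which tool to call."""
    return "\n".join(
        f"{name}: {meta['description']} params={meta['params']}"
        for name, meta in tools.items()
    )
```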


Agents can't write bash correctly so... I wonder about your claim


They cannot? We have a client from 25 years ago, and all the devops for them is massive bash scripts; thousands of them. Not written by us (well, some parts, as maintenance), and really the only thing that almost always flawlessly fixes and updates them is Claude Code. Even with insane bash-in-bash-in-bash escaping and all kinds of not-well-known constructs. It works. So we have no incentive to refactor or rewrite. We planned to 5 years ago and postponed, as we first had to rewrite their enormous and equally badly written ERP for their factory. Maybe that would not have happened either, now...


It can. Not sure what AI you're using, but Gemini outputs great bash. Of course you need to test it.

You do have to make sure to tell it what platform you're on, because things like macOS have different CLI tools than Linux.


On Japanese websites this is pretty much the norm. Government, bank, marketplace, you name it: it fills in the address down to the block, and you fill in the remaining numbers for your house / room number.
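The usual mechanism is a postal-code lookup: the code resolves the address down to the block, and the user types only the building and room number. A toy sketch (the data here is a tiny stand-in for Japan Post's real table):

```python
# Tiny stand-in for Japan Post's postal-code table.
POSTAL = {"100-0001": "Tokyo, Chiyoda-ku, Chiyoda"}

def autofill(code: str) -> str:
    """Resolve a postal code down to the block; the user then appends
    the house / room number themselves."""
    return POSTAL.get(code, "")
```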


The whole point of having an open platform from boot is that you don't have to trust it. You run your own code from first power-on.

Is it possible that it's backdoored, has a secret opcode / management engine? Probably, but that goes for everything, as it's not practical to analyze what's in the chip (unless you're decapping them and all).

I don't know what secure environments you're talking about, if it's an airgapped system then you should be secure even when what's inside 'tries to get out'.


Korean- and Western-made stuff is guaranteed to have such a thing. CNC devices in Russia stopped working. Even NVIDIA GPUs have a back door according to China, and NVIDIA had to settle the matter behind the scenes with the Chinese government. At this point, your phone is 100% backdoorable by Western governments. The only thing protecting you is that you are a non-threat and too small to bother with.


>Even NVIDIA gpu has back door according to China and NVIDIA

They never said or claimed that. They raised concerns and asked about _possible_ backdoors, the same way the West does about China, e.g. Huawei.


Is there documentation that GrapheneOS Pixels or iPhones are backdoored by governments to the extent that any person can be targeted?


No? Okay.


I think if Blender can pull off a UI overhaul, GIMP should too.

