
MCP, as a concept, is a great idea.

The problem isn’t having a standard way for agents to branch out. The problem is that AI is the new Javascript web framework: there’s nothing wrong with frameworks, but when everyone and their son are writing a new framework and half those frameworks barely work, you end up with a buggy, fragmented ecosystem.

I get why this happens. Startups want VC money, established companies then want to appear relevant, and then software engineers and students feel pressured to prove they’re hireable. And you end up with one giant pissing contest where half the players likely see the ridiculousness of the situation but have little choice other than to join the party.


I think the saying “the road to hell is paved with good intentions” is more apt.

I think what’s happening isn’t some evil plot to quell opposing voices, but more likely the UK government thinking they’re actually passing laws to reduce rioting and online abuse. And the censorship effects are a side effect of these laws.

Some might consider this opinion naive but take this counterpoint: laws require a majority to pass. So if these censorship laws were written to squash opposing voices, then we’d be dealing with a literal conspiracy involving hundreds of people. I don’t believe all politicians are only in it for themselves (though I do believe many are), so you’d expect at least 1 MP to speak out if such a conspiracy existed.


This. Governments are signatory to a huge number of agreements, and are members of various NGOs. Things start out as being representative of some will of the people, but over time it becomes a millstone around the government's neck if the arrangement becomes politically difficult at home. And of course, those arrangements often morph to be to the benefit of those in charge.

What happens is that you get arrangements like the EU demanding migration quotas that the populations of various individual countries despise, or an automobile market that gets progressively more expensive as environmental legislation puts ever more pressure on manufacturers. And of course, if you're saving the world, who needs cars anyway? We should all be living Hong Kong style to save the environment, so we need more urban density.


Every time I hear that PSA I’m reminded of the acid track “where is your child”

https://youtu.be/sDyxyRcZWBA?si=sqDnodWQ-jWKCdCH

(I know the song came long after the PSA)


I think the problem lies with the fact that you cannot write kernel code without relying on unsafe blocks of code.

So arguably both camps are correct: those who advocate Rust rewrites, and those who are against them.


I don't think this code needed to be unsafe. This code doesn't involve any I/O or kernel pointer/CPU register management; it's just modifying a list.

I'm sure the people who wrote this code had their reasons to write the code like this (probably simplicity or performance), but this type of kernel code could be done safely.


As pointed out by yuriks [0], it seems the patch authors are interested in looking into a safer solution [1]:

> The previous patches in this series illustrate why the List::remove method is really dangerous. I think the real takeaway here is to replace the linked lists with a different data structure without this unsafe footgun, but for now we fix the bugs and add a warning to the docs.

[0]: https://news.ycombinator.com/item?id=46307357

[1]: https://news.ycombinator.com/item?id=46307357


You can’t write Rust code without relying on unsafe code. Much of the standard library contains unsafe code, parts of which have been formally verified.

I would presume the ratio of safe to unsafe code improves over time: as the full ”kernel standard library” gets built out, all other parts can replace their hand-rolled implementations with the standard ones, leading to less unsafe code being written.


I don’t understand the comparison. Purrtran isn’t an esoteric language.

LOLCODE isn’t much of one either? It’s fundamentally a BASIC, more or less.

…but with intentionally weird semantics picked for its humour rather than legibility.

It might not be a challenging language, but it is designed more for art than utility.

This firmly makes it an esoteric language.

Whereas Purrtran has conventional semantics. The cuteness of Purrtran is in the documentation rather than the language design. The esoteric part is really more in the storytelling than the language semantics.


I personally preferred the sequel to the original.

I loved the original, but its pacing wasn’t all that great. I also felt II had better cohesion.


I think the OP is aware of that and I agree with them that it’s bad practice despite how common it is.

For example with AWS, you can use the AWS CLI to sign in, and that goes through the HTTPS auth flow to provide you with temporary access keys. Which means:

1. You don’t have any access keys in plain text

2. Even if your env vars are also stolen, those AWS keys expire within a few hours anyway.

If the cloud service you’re using doesn’t support OIDC or any other ephemeral access keys, then you should store them encrypted. There’s numerous ways you can do this, from password managers to just using PGP/GPG directly. Just make sure you aren’t pasting them into your shell otherwise you’ll then have those keys in plain text in your .history file.
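A minimal sketch of the “encrypted at rest, decrypted straight into the environment” pattern. This uses openssl with a passphrase as a stand-in for whatever GPG setup or password manager you actually use; every path, the passphrase, and the key value here are made up for the example (the key id is AWS’s published documentation example):

```shell
# Stand-in for gpg/a password manager: openssl with a passphrase.
# All paths, the passphrase, and the key value are illustrative.
umask 077
workdir="$(mktemp -d)"
printf 'AKIAIOSFODNN7EXAMPLE' > "$workdir/key.txt"

# Encrypt at rest, then remove the plain-text copy
openssl enc -aes-256-cbc -pbkdf2 -pass pass:demo-passphrase \
  -in "$workdir/key.txt" -out "$workdir/key.enc"
rm "$workdir/key.txt"

# Decrypt straight into an env var: the plain-text key never lands on
# disk again, and nothing secret ends up in your .history file
CLOUD_ACCESS_KEY="$(openssl enc -d -aes-256-cbc -pbkdf2 \
  -pass pass:demo-passphrase -in "$workdir/key.enc")"
export CLOUD_ACCESS_KEY

rm -rf "$workdir"
```

For the OIDC path, the equivalent is just `aws sso login --profile <profile>`, which handles the browser auth flow and caching of short-lived credentials for you.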

I will agree that it does take effort to get your cloud credentials set up in a convenient way (easy to access, but without those access keys in plain text). But if you’re doing cloud stuff professionally, like the devs in the article, then you really should learn how to use these tools.


> If the cloud service you’re using doesn’t support OIDC or any other ephemeral access keys, then you should store them encrypted. There’s numerous ways you can do this, from password managers to just using PGP/GPG directly. Just make sure you aren’t pasting them into your shell otherwise you’ll then have those keys in plain text in your .history file.

This doesn't really help though, for a supply chain attack, because you're still going to need to decrypt those keys for your code to read at some point, and the attacker has visibility on that, right?

Like the shell isn't the only thing the attacker has access to, they also have access to variables set in your code.


I agree it doesn’t keep you completely safe. However scanning the file system for plain text secrets is significantly easier than the alternatives.

For example, for env vars to be read, you’d need the compromised code to be part of the same project. But if you scan the file system, you can pick up secrets for any project written in any language, even ones that differ from the code base that pulled in the compromised module.

This example applies directly to the article; it wasn’t their core code base that ran the compromised code but instead an experimental repository.

Furthermore, we can see from these supply chain attacks that they do scan the file system. So we do know that encrypting secrets adds a layer of protection against the attacks happening in the wild.
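To illustrate why plain-text secrets are such low-hanging fruit: a file-system scan for AWS-style key ids is essentially a one-liner, and works regardless of which project or language wrote the file. This sketch uses a throwaway temp directory and AWS’s documented example key id; nothing here comes from the actual attack:

```shell
# Simulate the kind of scan a stealer runs: look for AWS-style access
# key ids (AKIA + 16 uppercase alphanumerics, per AWS's documented
# format) anywhere in a directory tree. The demo files are made up.
demo="$(mktemp -d)"
printf 'aws_access_key_id = AKIAIOSFODNN7EXAMPLE\n' > "$demo/credentials"
printf 'nothing secret here\n' > "$demo/notes.txt"

hits="$(grep -rlE 'AKIA[0-9A-Z]{16}' "$demo")"
echo "$hits"   # only the credentials file matches

rm -rf "$demo"
```

An encrypted blob simply doesn’t match this kind of pattern scan, which is the layer of protection being described above.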

In an ideal world, we’d use OIDC everywhere and not need hardcoded access keys. But in instances where we can’t, encrypting them is better than not.


It's certainly a smaller surface that could help. For instance, a compromised dev dependency that isn't used in the production build would not be able to get to secrets for prod environments at that point. If your local tooling for interacting with prod stuff (for debugging, etc) is set up in a more secure way that doesn't mean long-lived high-value secrets staying on the filesystem, then other compromised things have less access to them. Add good, phishing-resistant 2FA on top, and even with a keylogger to grab your web login creds for that AWS browser-based auth flow, an attacker couldn't re-use it remotely.

(And that sort of ephemeral-login-for-aws-tooling-from-local-env is a standard part of compliance processes that I've gone through.)


> 1. You don’t have any access keys in plain text

That's not correct. The (ephemeral) keys are still available. Just do `aws configure export-credentials --profile <YOUR_OIDC_PROFILE>`

Sure, they'll likely expire in 1-24 hours, but that can be more than enough for the attacker.

You also can try to limit the impact of the credentials by adding IP restrictions to the assumed role, but then the attacker can just proxy their requests through your machine.


> That's not correct. The (ephemeral) keys are still available. Just do `aws configure export-credentials --profile <YOUR_OIDC_PROFILE>`

That’s not on the file system though. Which is the point I’m directly addressing.

I did also say there are other ways to pull those keys and that this isn’t a complete solution. But it’s still vastly better than having those keys in clear text on the file system.

Arguing that there are other ways to circumvent security policies is a lousy excuse to remove security policies that directly protect you against known attacks seen in the wild.

> Sure, they'll likely expire in 1-24 hours, but that can be more than enough for the attacker.

It depends on the attacker, but yes, in some situations that might be more than long enough. Which is why I would strongly recommend people don’t set their OIDC creds to 24 hours. 8 hours is usually long enough; shorter should be required if you’re working on sensitive/high-profile systems. And in the case of this specific attack, 8 hours would have been sufficient given the attacker probed AWS while the German team were asleep.

But again, I do agree it’s not a complete solution. However it’s still better than hardcoded access keys in plain text saved in the file system.

> You also can try to limit the impact of the credentials by adding IP restrictions to the assumed role, but then the attacker can just proxy their requests through your machine.

In practice, this (attackers proxying through the victim’s machine) doesn’t happen in the wild. But you’re right that it might be another countermeasure they employ one day.

Security is definitely a game of ”cat and mouse”. But I wouldn’t suggest people use hardcoded access keys just because there are counter attacks to the OIDC approach. That would be like “throwing the baby out with the bath water.”


> That’s not on the file system though.

They are. In `~/.aws/cli/cache` and `~/.aws/sso/cache`. AWS doesn't do anything particularly secure with its keys. And none of the AWS client libraries are designed for the separation of the key material and the application code.

I also don't think it's even possible to use the commonly available TPMs or Apple's Secure Enclave for hardware-assisted signatures.

> 8 hours is usually long enough. And in the case of this specific attack, 8 hours would have been sufficient given the attacker probed AWS while the German team were asleep.

They could have just waited a bit. 8 hours does not materially change anything, the credential is still long-lived enough.

I love SSO and OIDC but the AWS tooling for them is... not great. In particular, they have poor support for observability. A user can legitimately have multiple parallel sessions, and it's more difficult to parse the CloudTrail. And revocation is done by essentially pushing the policy to prohibit all the keys that are older than some timestamp. Static credentials are easier to manage.

> In practice this never happens (attacks proxying) in the wild. But you’re right that might be another countermeasure they employ one day.

If I remember correctly, LastPass (or was it Okta?) was hacked by an attacker spying on the RAM of the process that had credentials.

And if you look at the timeline, the attack took only minutes to do. It clearly was automated.

I tried to wargame some scenarios for hardware-based security, but I don't think it's feasible at all. If you (as a developer) have access to some AWS system, then the attacker running code on your behalf can also trivially get it.


You can use keyring/keychain with credential_process, although it’s only a minor shift in security, from “being able to read from the fs” to “being able to execute a binary”.
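For reference, `credential_process` points an AWS profile at an arbitrary command that prints credentials as JSON on stdout, so the actual secret can live in a keychain rather than in `~/.aws/credentials`. A minimal sketch of such a helper, with a made-up profile name and a hardcoded stand-in where the keychain lookup would go:

```shell
# ~/.aws/config would reference this script, e.g.:
#   [profile keychain-demo]
#   credential_process = /usr/local/bin/aws-cred-helper
# (profile name and helper path are illustrative)

emit_credentials() {
  # A real helper would fetch this from the OS keychain (e.g. via
  # `security find-generic-password` on macOS); hardcoded as a demo
  secret="example-secret-material"
  # AWS requires a JSON object with at least Version, AccessKeyId and
  # SecretAccessKey; SessionToken/Expiration are optional
  printf '{"Version": 1, "AccessKeyId": "AKIAIOSFODNN7EXAMPLE", "SecretAccessKey": "%s"}\n' "$secret"
}

emit_credentials
```

As noted, this only moves the bar from “read a file” to “run a binary that the attacker can also run”, but it does keep the secret out of plain-text files that a naive scan would find.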

> They are. In `~/.aws/cli/cache` and `~/.aws/sso/cache`. AWS doesn't do anything particularly secure with its keys.

Thanks for the correction. That’s disappointing to read. I’d have hoped they’d have done something more secure than that.

> And none of the AWS client libraries are designed for the separation of the key material and the application code.

The client libraries can read from env vars too. Which isn’t perfect either, but on some OSs, can be more secure than reading from the FS.

> If I remember correctly, LastPass (or was it Okta?) was hacked by an attacker spying on the RAM of the process that had credentials.

That was a targeted attack.

But again, I’m not suggesting OIDC solves everything. But it’s still more secure than not using it.

> And if you look at the timeline, the attack took only minutes to do. It clearly was automated.

Automated doesn’t mean it happens the moment the host is compromised. If you look at the timeline, you see that the attack happened overnight, hours after the system was compromised.

> They could have just waited a bit. 8 hours does not materially change anything, the credential is still long-lived enough.

Except when you look at the timeline of this specific attack, they probed AWS more than 8 hours after the start of the working day.

A shorter TTL reduces the window of attack. That is a material change for the better. Yes, I agree that on its own it’s not a complete solution. But saying “it has no material benefit so why bother” is clearly ridiculous. By the same logic, you could argue “why bother rotating keys at all, we might as well keep the same credentials for years”…

Security isn’t a Boolean state. It’s incremental improvements that leave the system, as a whole, harder to attack.

Yes there will always be ways to circumvent security policies. But the harder you make it, the more you reduce your risk. And having ephemeral access tokens reduces your risk because an attacker then has a shorter window for attack.

> I tried to wargame some scenarios for hardware-based security, but I don't think it's feasible at all. If you (as a developer) have access to some AWS system, then the attacker running code on your behalf can also trivially get it.

The “trivial” part depends entirely on how you access AWS and what security policies are in place.

It can range anywhere from “forced to proxy from the hosts machine from inside their code base while they are actively working” to “has indefinite access from any location at any time of day”.

A sufficiently advanced attack can gain access but that doesn’t mean we shouldn’t be hardening against less sophisticated attacks.

To use an analogy, a burglar can break a window to gain access to your house, but that doesn’t mean there isn’t any benefit in locking your windows and doors.


Agreed.

> A sufficiently advanced attack can gain access but that doesn’t mean we shouldn’t be hardening against less sophisticated attacks.

I'm a bit worried that with the advent of AI, there won't be any real difference between these two. And AI can do recon, choose the tools, and perform the attack all within a couple of minutes. It doesn't have to be perfect, after all.

I've been thinking about it, and I'm just going to give up on trying to secure the dev environments. I think it's a done deal that developers' machines are going to be compromised at some point.

For production access, I'm going to gate it behind hardware-backed 2FA with a separate git repository and build infrastructure for deployments. Read-write access will be available only via RDP/VNC through a cloud host with mandatory 2FA.

And this still won't protect against more sophisticated attackers that can just insert a sneaky code snippet that introduces a deliberate vulnerability.


They are on the filesystem though.

Login then check your .aws/login/cache folder.


Oh that’s disappointing. Thanks for the correction.

The problem is that roads are continually being built and maps aren’t always as quick to be updated.

There are also private paths that aren’t public roads but are still intended for vehicles.


That's no excuse for disbelieving GPS for extended periods of time.

Google Maps gets it right: it tries to keep you on the road, but only for a few tens of seconds. After that, if you are in the middle of uncharted territory, it’ll show the marker there.

(This is probably because Google Maps can be used for walking/biking too)


Well, when I’m driving in Kyiv, and there is an air raid alert, usually my car navigation starts to derp, and after a few minutes it thinks that it’s suddenly in Lima, Peru.

Not that I mind too much, I know how to get around without navigation.


It does teleport to Peru, but it also fast-forwards time to about a year into the future, which caused my car to think it’s overdue for that oil change. It even synced that back to headquarters and I got an email asking me to take it in for service. (And, arriving there on the wrong side of the Dnieper, I just decided to wait it out.)

Wish we could put it into a manual mode where you just reset its position once and then it updates based on wheel encoders and snapping to roads.


GPS jamming for incoming drones?


Well, you can also drive off-road.

> That's no excuse for disbelieving GPS for extended periods of time.

You've got it backwards. I was explaining why GPS should take priority over mapping data. Not the other way around.


They’re saying that not prioritising (disbelieving) GPS data is bad, hence there’s no excuse for it

So that’s the same thing you’re saying


Yes, I know. And saying that as a rebuttal to a comment where I’d said the same. Hence my reply ;)

The technology should be resilient against GPS spoofing. If it “knows” it never left the mountain road, it’s not crazy to design it to reject an anomalous GPS signal, which might be wrong or tampered with.

I think the likelihood of that happening is significantly less than the likelihood that a car took a new road or other path not shown in the car’s mapping data.

>(This is probably because Google Maps can be used for walking/biking too)

Please don't do that. The map is simply not good enough and does not have enough context (road quality, terrain, trail difficulty) for anything but very casual activity. Even then, I highly recommend using a proper map, electronic or paper.


yeah I'm not gonna open some paid trail map or buy a paper map so I can walk across my local city park and give my friends a pin to find me...

I've found Organic Maps to be better than any paid app for hiking (and I've tried a bunch) for what it's worth

I find Gaia Maps even better for the boonies.

It has a lot more map data accessible and you can even overlay National Park Service maps, land ownership, accurate cell service grids, mountain biking trails, weather conditions and things like that.

Disclaimer: Just because you see a route on a map, digital or paper, does not mean it is passable today. Or it may be passable but at an extremely arduous pace.


For anything other than driving Organic Maps (iOS) or OSMand (Android) are the very best.

We used the walking directions for dual sport motorcycles once. It was pretty nice. We did have a few places where it became sketchy. Those and maybe more places would be sketchy for walking too. Not that Google Maps could do much about it. Terrain is a living thing. These were mostly huge cracks in the earth due to rain water.

Trail? Terrain? I use it for walking for 10-20mins around a (mostly flat) city and I expect that’s what 90% of people use it for, the comment didn’t mention hiking

It depends what you are doing, but for hill walking in Italy I found the footpathapp.com app good. There are no decent paper maps in the area I go to, and Google Maps is also rubbish for local paths, but the app kind of draws in paths based on satellite imagery, I think, and you can draw on it to mark the ones you've been on.

That’s clearly bullshit because if the user sets a system wide theme and your appLICATION follows that theme, then your appLICATION is not going to be any harder to use than the system itself nor any other appLICATION using native widgets.

What is actually happening is designers are forcing non-native controls, in part because web technologies have infested every corner of software development these days. Unsurprisingly, those non-native widgets break in a plethora of ways when the system diverges even marginally from the OS defaults.

And instead of those designers admitting that they fucked up, they instead double down on their contempt for their users.

Also, can we please not call desktop applications “apps” in response to an article about an OS that predates smartphones by several decades.


Odds are, you’re not going to get any contributions even if you do want them. So they could just upload regardless.

And if the README explicitly says the project isn’t open to contributors nor feature requests, then you’re even less likely to see that (and have a very valid reason to politely close any issues in the unlikely scenario that someone might create one).

The vast majority of stuff on GitHub goes unnoticed by the vast majority of people. And only a very small minority of people ever interact with the few projects they do pull from GH.


> Odds are, you’re not going to get any contributions even if you do want them. So they could just upload regardless.

This is not my personal experience nor the experience of a number of folks that I know personally. I think it's pretty hard to generalize about this.

> The vast majority of stuff on GitHub goes unnoticed by the vast majority of people. And only a very small minority of people ever interact with the few projects they do pull from GH.

So what? It's probably not going to impact you, so it's okay and we just have to deal with it? I reject that logic entirely.


> This is not my personal experience nor the experience of a number of folks that I know personally. I think it's pretty hard to generalize about this.

I think it’s pretty easy to generalise because public repositories are public, so the data is available.

The vast majority of repositories on GH have between 0 and 10 stars and no issues raised by other people.

Even people (like myself) who have repos with thousands of stars and other GH members “following” them will have other repos in GH with zero interaction.

> So what? It's probably not going to impact you, so it's okay and we just have to deal with it? I reject that logic entirely.

That’s a really uncharitable interpretation of my comment.

A more charitable way of reading it would be:

“Worrying about a minor problem that is easily remediated and likely wouldn’t happen anyway isn’t a strong reason to miss out.”

If we were talking about something high stakes, where one’s career, family or life would be affected; then I’d understand. But the worst outcome here is an assumption gets proven true and they delete the repo.

Please don’t take this as a persuasive argument that someone should do something they don’t want to do. If people don’t want to share their code then that’s their choice.

Instead, this is responding to the comment that your friend DID want to share but was scared of a theoretical, low-risk and unlikely scenario. That nervousness isn’t irrational, but it’s also not a good reason by itself to miss out on doing something you said they wanted to do.

If, however, that was really just an excuse and they actually had no real desire to share their code, then they should just be honest and say that. There’s no obligation for people to open source their pet projects, so they don’t need to justify it with arguments about GH’s lack of controls. They can just say “I don’t want my code public”, and that’s a good enough reason in itself.

