Hacker News | kstrauser's comments

The following is based on my interpretation of information that's been made public:

A Vercel user had their Google Workspace compromised.

The attacker used the compromised workspace to connect to Vercel, via Vercel's Google sign-on option.

The attacker, properly logged into the Vercel console as an employee of that company, looked at the company's projects' settings and peeked at the environment variables section, which lists a series of key:value pairs.

The user's company had not marked the relevant environment variables as "sensitive", which would have hidden their values from the logged-in attacker. Instead of

  DATABASE_PASSWORD: abcd_1234 [click here to update]
it would have shown:

  DATABASE_PASSWORD: ****** [click here to update]
with no way to reveal the previously stored value.

And that's how the attacker enumerated the env vars. They didn't have to compromise a running instance or anything. They used their improperly acquired but valid credentials to log in as a user and look at settings that user had access to.


Astonishing that high-damage actions were authorized by authentication delegated to Google, and furthermore not subject to hardware-token 2FA.

Ideally, you can have a couple of working versions at any given time. For instance, an AWS IAM role can have 0 to 2 access keys configured at once. To rotate them, you deactivate all but one key, create a new key, and make that new key the new production value. Once everything's using that key, you can deactivate the old one.
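To make those rotation steps concrete, here's a toy Python model of the two-slot state machine. It doesn't make real AWS calls (a real rotation would go through boto3's IAM client); the class and key-ID format are made up for illustration.

```python
# Toy model of AWS-style two-slot access key rotation.
# Not real AWS calls; it just models the state machine described above.

class KeySlots:
    MAX_KEYS = 2  # IAM allows at most two access keys per user

    def __init__(self):
        self.keys = {}  # key_id -> "Active" | "Inactive"
        self._counter = 0

    def create_key(self):
        if len(self.keys) >= self.MAX_KEYS:
            raise RuntimeError("at most two access keys per user")
        self._counter += 1
        key_id = f"AKIA{self._counter:04d}"  # made-up ID format
        self.keys[key_id] = "Active"
        return key_id

    def deactivate(self, key_id):
        self.keys[key_id] = "Inactive"

    def delete(self, key_id):
        del self.keys[key_id]


def rotate(slots, old_key):
    # 1. Create the new key while the old one still works.
    new_key = slots.create_key()
    # 2. ...deploy new_key to everything that uses the old one...
    # 3. Deactivate (don't delete) the old key, so you can roll back.
    slots.deactivate(old_key)
    # 4. Once nothing breaks, delete it for good.
    slots.delete(old_key)
    return new_key
```

The point of the two-slot limit is that step 2 can take as long as it needs to: both keys work during the overlap.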

My understanding is this is exactly how Vercel works. The users hadn’t checked the “don’t ever reveal, even to me” box next to the sensitive values. If they had, the attacker would only have been able to see the names of the variables and not their values.

Ah. The article has since been updated to point out that it’s not plaintext, but encrypted at rest (which would be expected). OK.

I think this is wrong about what “sensitive” means here. AFAIK, all Vercel env vars are encrypted. The sensitive checkbox means that a developer looking at the env var can’t see what value is stored there. It’s a write-only value. Only the app can see it, via an env var (which obviously can’t be encrypted in such a way that the app can’t see it, otherwise it’d be worthless). If you don’t check that box, you can view the value in the project UI. That’s reasonable for most config values. Imagine “DEFAULT_TIME_ZONE” or such. There’s nothing gained from hiding it, and it’d be a pain in the ass come troubleshooting time.

So sensitive doesn’t mean encrypted. It means the UI doesn’t show the dev what value’s stored there after they’ve updated it. Not sensitive means it’s still visible. And again, I presume this is only a UI thing, and both kinds are stored encrypted in the backend.
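To make the distinction concrete, here's a toy Python model of that behavior. The names are illustrative, not Vercel's actual API; the point is that "sensitive" changes what the dashboard shows, not what the app receives.

```python
# Minimal model of the "sensitive" flag as described above: the value is
# always delivered to the running app, but the dashboard redacts it.
# Illustrative names only, not Vercel's real API.

class EnvVar:
    def __init__(self, name, value, sensitive=False):
        self.name = name
        self.value = value          # stored encrypted at rest on the backend
        self.sensitive = sensitive  # a UI property, not an encryption property

    def as_shown_in_dashboard(self):
        return "******" if self.sensitive else self.value

    def as_injected_into_app(self):
        # The app always gets plaintext; otherwise the var would be useless.
        return self.value
```

Both kinds round-trip identically to the app; only `as_shown_in_dashboard` differs.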

I don’t work for Vercel, but I’ve used them a bit. I’m sure there are valid reasons to dislike them, but this specific bit looks like a strawman.


> Only the app can see it, via an env var (which obviously can’t be encrypted in such a way that the app can’t see it, otherwise it’d be worthless)

Yeah, I'm very confused. It's not possible to encrypt env vars that the program needs; even if it's encrypted at rest, it needs to be decrypted anyway before starting the program. Env vars are injected as plain text. This is just how this works, nothing to do with Vercel.

This situation could someday improve with fully homomorphic encryption (the server would operate on encrypted data without ever decrypting it), but that would add very high overhead to the entire program. It's not realistic (yet).


You always get people screaming "it should have been encrypted!" when there's a leak, without understanding what encryption can and can't do in principle and in practice (it most certainly isn't a synonym for 'secure' or 'safe').

Encryption turns your data confidentiality problem into a key management problem.

Also, if you want to keep a secret forever: data that's encrypted but saved may be easily decrypted in the future. In reality, though, most secrets are less useful X years from now.

Theoretically maybe, but there's no indication that a quantum-resistant algorithm can't encrypt something that's secure for the coming million+ years.

Sure there is, just use a one time pad and never repeat the message.
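For the curious, a one-time pad is a few lines of Python. This is the textbook XOR construction, not production code: it's information-theoretically secure only if the pad is truly random, as long as the message, and never reused.

```python
# One-time pad: XOR the message with a truly random pad of equal length,
# used exactly once. Secure regardless of future compute, quantum or
# otherwise -- but the pad is as big as the message and can never be reused.
import secrets


def otp_encrypt(message: bytes) -> tuple[bytes, bytes]:
    pad = secrets.token_bytes(len(message))
    ciphertext = bytes(m ^ p for m, p in zip(message, pad))
    return ciphertext, pad


def otp_decrypt(ciphertext: bytes, pad: bytes) -> bytes:
    return bytes(c ^ p for c, p in zip(ciphertext, pad))
```

Which neatly illustrates the point upthread: the encryption is trivial, and all the difficulty moves into managing a pad as large as everything you ever want to protect.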

Oops - you said the opposite of what I read, my mistake.


Whenever someone says "But it should have been encrypted!" about things like configs on a server, I ask them how they'd implement that in practice.

PoC or GTFO.

I think you'll find it's a bit harder to do than you expect.


> Whenever someone says "But it should have been encrypted!" about things like configs on a server, I ask them how they'd implement that in practice.

Short practical answer: Use a USB HSM plugged into your server and acknowledge that it is an imperfect solution.

For configs, I used to setuid the executable so that it starts up as a user that can read the file. It reads the file into RAM in the first 5 lines of `main`, then immediately drops privileges to a user that can't read the file, and continues as normal.

This was to ensure that if the application was compromised, the config could not be changed by the application itself, nor could it be read once the program was running.

If you wanted to keep it encrypted without leaking the key, you could do the same, except that the key would also be read at startup (or, preferably, get a data key from the USB HSM, and use that for decryption).

Of course, that moves the problem of "read the first key from disk" to "read the HSM pin from disk".

You can have your supervising program, like a Kubernetes cluster, inject the correct keys into the pod as it's created, but that cluster itself needs a root key to decrypt those correct keys, and that has to come from somewhere too.

There is, at the end of the day, only one perfect solution: when the program starts up it waits for user input - either the decryption key or the HSM pin - that it uses as a root key to decrypt everything else.

There is no other way that isn't "store some root key, credential, token, etc on the computer".


I don't know how it works on Vercel, but on other platforms it usually means that the value will be redacted in logs as well.

Where I work we started using Vault, and you store the Vault path (as in, a lookup key) as a regular non-hidden env var. I think this is probably more solid.

Yeah, the Vault model, where you just refer to the secret’s path (where it is hopefully also dynamically generated and revoked after use), based on short-lived OIDC-style auth, is about the safest mechanism possible for this sort of secrets management. I’ve been trying to spread this pattern everywhere I’ve worked for a decade now. But it’s a lot of work to set up and maintain.
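In Python, the lookup-key idea reduces to something like the sketch below. `resolve_secret` and `fetch_from_vault` are illustrative names I've made up; in a real deployment the fetch would be an authenticated call through an actual Vault client (e.g. the hvac library), using a short-lived identity token.

```python
# Sketch of the lookup-key pattern: the env var holds only a secret *path*,
# and the app exchanges short-lived credentials for the real value at
# startup. fetch_from_vault is a stand-in for a real Vault client call.
import os


def resolve_secret(env_name, fetch_from_vault):
    # e.g. "secret/data/myapp/db" -- a reference, not the secret itself
    path = os.environ[env_name]
    # The plaintext secret lives only in the app's memory, never in the
    # environment, the dashboard, or on disk.
    return fetch_from_vault(path)
```

If an attacker dumps the env vars (as in the incident upthread), all they get is a path that's useless without the app's own short-lived auth.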

This is also how other cloud providers do it, eg DigitalOcean.

But if they are readable to the “developer” then they are readable to anyone who gets access to the developer’s Vercel credentials. If Vercel provides a way to avoid that that didn’t get used, that’s the failure. Sure, you can quibble with the exact understanding of the author over whether they were “encrypted” or not. That’s not really the key factor here.

There are appropriate uses for both. Your database password should be write-only and not viewable later. Your time zone should be read-write for easy debugging when things go wrong. Vercel gives you both options. The user chose badly here, and IMO that’s not Vercel’s fault.

The replier was wrong, though. They misread it and skipped over the part they thought wasn’t there.

Jinx! Owe me a coke.

I ctrl-f'd a minute faster than you!

Your link says otherwise. From the Article 11 link, ANNEX II, A.1.1.(5):

(a) From 20 June 2025, manufacturers, importers or authorised representatives shall ensure that the process for replacement of the display assembly and of parts referred to in point 1(a), with the exception of the battery or batteries, meets the following criteria: [...]

[...]

(c) From 20 June 2025, manufacturers, importers or authorised representatives shall ensure that the process for battery replacement:

(i) meets the following criteria:

— fasteners shall be resupplied or reusable;

— the process for replacement shall be feasible with no tool, a tool or set of tools that is supplied with the product or spare part, or basic tools;

— the process for replacement shall be able to be carried out in a use environment;

— the process for replacement shall be able to be carried out by a layman.

(ii) or, as an alternative to point (i), ensure that:

— the process for battery replacement meets the criteria set out in (a);

— after 500 full charge cycles the battery must, in addition, have in a fully charged state, a remaining capacity of at least 83 % of the rated capacity;

— the battery endurance in cycles achieves a minimum of 1 000 full charge cycles, and after 1 000 full charge cycles the battery must, in addition, have in a fully charged state, a remaining capacity of at least 80 % of the rated capacity;

— the device is at least dust tight and protected against immersion in water up to one meter depth for a minimum of 30 minutes.

---

So manufacturers must make the battery replaceable, or meet all the conditions from (a) for replacing non-battery components, and meet the 1000 cycle / 80% capacity requirement.


There is no Article 11 in the second link; I mistakenly linked the same link twice. Here is the correct link: https://eur-lex.europa.eu/eli/reg/2023/1542/oj/eng

So if it weren't for that exemption, would iPhone batteries qualify? You can do it with regular tools and a YouTube tutorial, but it's not easy.

The tools are not regular. They’re teeny tiny security bits.

> a tool or set of tools that is supplied with the product or spare part, or basic tools

It's or, not and.


The special screwdriver isn't supplied with the product, and Apple doesn't sell spare parts. I guess "regular" isn't the right word, but it's easy and inexpensive to buy the right tools.

Grew up in Springfield, posting this from California. There's a reason for that.

It’s the weather, right? Not a big fan of west coast politics compared to back home but I’ll tolerate it in exchange for the sun :)

The winters ain't bad either.

I think it's a good PR move. "Hey, look at how reasonable we've been in spite of the legal craziness. We've put money on the table and are moving forward with a plan that benefits everyone." Now anyone who blocks the plan will be seen as the problem.

Not quite the same, but fish shell has programmable abbreviations. I type “tf<space>” and it expands that inline to “opentofu”. It used to say “terraform” before we upgraded; I didn’t even have to change the commands I type.

fish isn't POSIX, though. I'm guessing zsh can probably do something similar, but command completion just isn't the same as shortening "mkdir test" to "mkd test".
