IAM is unnecessarily bad. I recently had to set a trivial policy, and was doing it correctly.
The console kept warning me that I was giving root AWS access to my external application, because they want people to use the locked-in AWS path and I was running off-cloud.
On top of that, they break copy paste on the web console, so you can’t just ctrl-c ctrl-v and then ask Claude to explain their WTF-ery. Instead, you have to OCR or send a PNG.
I honestly did not think they could make IAM worse, yet here we are. Bastards.
You could probably open the developer tools, find the console elements, and extract the data from there to get around the copy/paste limitations. I'm not familiar with the AWS console, but say it's an input: select it in the dev tools, then in the dev tools console run $0.value
You think that you're complaining about IAM, but really you're complaining about the web UI. I rarely use the web console; I use terraform or the CLI. If you're vibe coding your infra with Claude, point it at the CLI / terraform. Skip the UI.
With terraform you get the amazing experience of having to iterate, one at a time, through the five hundred and thirty seven new permissions you need to grant having decided that a lambda configuration needs to be ever so slightly different than it was yesterday, because there's no documentation linking terraform creation of resources and the IAM permissions required to successfully make the AWS API call behind the scenes. Or those for updating a resource, which are different, so you get to do it all again tomorrow. Or deleting - different again. Fun for the day after.
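To make the create/update/delete split concrete, here is a sketch of what the granting policy tends to end up looking like for the Lambda case. The action names are real IAM actions, but the list is illustrative, not a complete or authoritative set for any particular terraform resource:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CreatePath",
      "Effect": "Allow",
      "Action": ["lambda:CreateFunction", "iam:PassRole"],
      "Resource": "*"
    },
    {
      "Sid": "UpdatePath",
      "Effect": "Allow",
      "Action": ["lambda:UpdateFunctionConfiguration", "lambda:UpdateFunctionCode"],
      "Resource": "*"
    },
    {
      "Sid": "DeletePath",
      "Effect": "Allow",
      "Action": ["lambda:DeleteFunction"],
      "Resource": "*"
    }
  ]
}
```

Each of the three paths needs its own statement, and you typically discover each missing action one AccessDenied error at a time.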
I guess I should also point out that I’ve used AWS at extremely large scale in the past, which is why I’m running this subproject on another cloud.
As for simple permissions, go read the UNIX paper. It spends a page or two on their approach and is all you need.
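For contrast, the Unix model that paper describes is essentially nine mode bits per file. A minimal sketch using only standard POSIX calls (the file name is arbitrary):

```c
#include <assert.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Create a file, set the classic owner/group/other permission bits
 * with chmod(2), then read them back with stat(2). The entire policy
 * is the nine rwx bits (plus setuid/setgid/sticky). */
int unix_perms_demo(const char *path)
{
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;
    fclose(f);

    /* rw- for owner, r-- for group, nothing for others: 0640 */
    if (chmod(path, S_IRUSR | S_IWUSR | S_IRGRP) != 0)
        return -1;

    struct stat st;
    if (stat(path, &st) != 0)
        return -1;

    /* Mask off the file-type bits; what remains is the whole policy. */
    return (int)(st.st_mode & 0777);
}
```

That's the whole mental model: one owner, one group, three bits each for owner/group/other.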
Then, read the paper on mapping between NTFS/SMB ACLs and NFS. It's either impossible or undecidable, depending on the deployment. IAM is from the Windows ACL lineage, which is known to be pessimal from both a usability and a security perspective.
IAM is NOT from any lineage. It has grown organically and is complicated, like any other policy language. AWS even uses automated reasoning tools to verify IAM policies.
However, the secret to IAM in AWS is to NOT use IAM. Just create separate AWS accounts for separate services and only share whatever resources are needed. Then you can have dead simple IAM policies because you won't need to do granular permissions ("AWS role X can access database Y").
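A sketch of that pattern, with a placeholder account ID and bucket name: inside each service's own account the policies stay trivially broad, and cross-account sharing becomes a resource policy on just the shared thing, e.g. an S3 bucket policy letting one other account read objects:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::shared-artifacts-bucket/*"
    }
  ]
}
```

The account boundary does the real isolation work, so you never have to enumerate fine-grained permissions inside it.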
First, Amazon was abusive: it leveraged its dominant market position against upstream and didn't contribute back, either monetarily or with code.
Next, upstream responded with a license change, and Amazon escalated with the fork.
Benjamin Franklin famously taught himself to write well by doing what you describe: Read a piece of a book, then rewrite it, then compare.
At first his copies were badly degraded. Eventually, he was considered one of the best writers of his time.
I feel like there's probably some way "the copy is better" could be quantified (at least to the point where it fools most of the people most of the time). If so, then expect LLMs to learn the same trick within a generation or two.
Assuming O_DIRECT actually blocks until the SSD has acked the write (that isn't what O_DIRECT's contract says, but it's what they rely on), you have to wait for each page write to be acked whenever you need a persistence barrier.
My guess is the preallocation + zeroing is what got them most of the win, and the O_DIRECT is actually hurting, not helping throughput. This has been the case 100% of the time I've benchmarked such things.
If you're doing this sort of stuff for real under Linux, check out sync_file_range. It's the only non-broken and performant sync API for ext4 (note that it's broken by design for many other file systems, and the API is terribly difficult to use correctly).
If you really care, it's probably just easier to use SPDK or something. Linux has historically been pretty hostile towards DBMS implementations.
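To make the sync_file_range suggestion concrete, a minimal sketch (Linux-specific, ext4 assumed per the caveats above; the wrapper name is mine, and error handling is trimmed):

```c
#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write a buffer at a given offset, then force writeback of just that
 * byte range: WAIT_BEFORE drains any in-flight writeback for the range,
 * WRITE starts writeback for its dirty pages, and WAIT_AFTER blocks
 * until that writeback completes. Unlike fsync(2), this flushes no file
 * metadata and issues no disk cache flush, which is why it only makes
 * sense on preallocated, pre-zeroed files (and on a file system where
 * data writeback alone is enough, per the caveats above). */
int write_and_flush_range(int fd, const void *buf, size_t len, off_t off)
{
    if (pwrite(fd, buf, len, off) != (ssize_t)len)
        return -1;
    return sync_file_range(fd, off, (off_t)len,
                           SYNC_FILE_RANGE_WAIT_BEFORE |
                           SYNC_FILE_RANGE_WRITE |
                           SYNC_FILE_RANGE_WAIT_AFTER);
}
```

The hard part, as noted, is that getting the flag combinations and ordering right for your durability needs is easy to fumble; this is only the shape of the call.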
That's a lot of valuable information, thanks for the input. Yes, the original blog post mainly focuses on reducing the metadata overhead caused by fsync(). I've gotten a lot of good feedback here, and much of the discussion goes beyond our original scenario. We'd like to incorporate these suggested enhancements without re-introducing fsync(), and make this work in more general environments.
Not damaging their relationship with Google as a vendor, most likely. For better or worse, GrapheneOS depends on Android, which is controlled by Google.
Is there a way to just ban all these sites? Like a Firefox plugin or whatever that detects this crap and bounces you over to some place more reputable, like archive.is.
As someone who uses AI agents, this makes me want to install a browser plugin for "public windows" that just archives everything I see, and then farms out clicks for content missing from those sites.
The result of this would be to upload it all to a bot-friendly alternative to archive.org.
Nice, I understand it is similar to ArchiveBox + its web extension.
Now, to be honest: while it's optimal to archive pages from your browser view, from a security point of view I'm not sure I want a random web extension seeing everything I see.
I would rather have a local proxy doing it. Maybe something like the Internet Archive's warcprox [0]. Haven't tried it yet.
For a short time I had warcprox sitting behind my Firefox, auto-feeding its output to pywb. It seemed to work, but connections started failing randomly after warcprox had been running for more than a few hours to days. Not sure whether the issue was pywb or warcprox, but some URLs I did browse in Firefox were missing, and many dynamic pages couldn't be replayed at all.
$600K+ went to kickbacks, er… "lobbying", and Thorn was hit with some pretty nasty scandals involving sex crimes.