whether people want to admit it or not, agentic coding is kind of the norm right now. I think the fear comes from the stories out of places like Block, Inc, where they announced a couple days ago that they fired 4,000 engineers, because of what's the obvious truth today versus 6 months ago: one expert software engineer can do the work of 20-40 people, so why do we need so many people? It's a hard pill to swallow. It's easier to claim that agentic coding doesn't work, or that the code is sloppy, when in reality most companies are using it every day, especially the large ones.
No fear. No claim that AI-assisted programming is inherently bad.
The indications are that this particular instance of AI-assisted programming is bad, because of the launch post, the name, and the history of Claudeflare doing this before.
I agree with you: if you're already a competent engineer, your productivity improves by orders of magnitude with coding agents, which at this point produce very good code as long as you give them the right prompts, test your code, and remove any bugs. If the code passes its tests and the bugs are removed, what you've got is a working product, and it's hard to argue that it doesn't work, especially if there's been a lot of QA done on it and there are no bugs.
you could probably get away with just running nginx with certbot on the front end of that domain name, and then have it proxy back to a script that talks to the Brother printer on the back end to do the printing, although I'm not sure why you'd want to print via the public internet
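A minimal sketch of that setup, assuming a hypothetical domain `printer.example.com` and a placeholder backend script listening on localhost port 8080 (both names are made up for illustration):

```nginx
# /etc/nginx/sites-available/printer.example.com (hypothetical domain)
# Running `certbot --nginx` afterwards adds the TLS listener and cert paths.
server {
    listen 80;
    server_name printer.example.com;

    location / {
        # Proxy to a local script that submits jobs to the Brother
        # printer, e.g. by shelling out to CUPS (`lp -d <queue>`).
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

You'd still want some auth in front of it (basic auth or an allowlist) before exposing a printer to the public internet.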
i wonder how much that costs per hour to run under any normal load? what benefit does this have versus using mysql (or any similar rdbms) for the queries? mysql/pgsql/etc are free, remember, whereas S3 charges by the request, or am i wrong?
Cost really depends on your workflow, your durability needs, prefetch aggression (how quickly you need all the data locally), and how aggressively you cache. The storage is what it is, and it's cheap. The variation comes in transaction counts.
If you evict the cache on every read/write, then every read and write hits S3 with GETs. PUTs are usually 10-100x more expensive than GETs. Check benchmark/README.md for GET counts, but it's usually 5-50 per cold read (with interior B-tree pages on disk).
S3 GETs are $0.0004 per 1,000; S3 Express is $0.00003 per 1,000. So 10 queries per minute all day long, averaging 20 GET operations each, with full eviction on each request, would be 20*10*60*24*30/1000*$0.0004 = $3.46 per month. With S3 Express One Zone that's $0.26/mo. Both plus storage costs, which would probably be smaller still.
On Wasabi, that would be just the cost of storage (but they have minimum 1TB requirements at $6.99/mo).
If you checkpoint after every write and write 10 times per minute, each write hitting e.g. 5 page groups (S3 objects) plus the manifest file, the math looks like: 6*10*60*24*30/1000*$0.005 = $12.96. Again, that's worst case for the benchmark's 1.5GB database. On S3 Express that's $2.92.
Point is, it's not too bad, and that's close to a worst-case scenario where you evict on every request, every 6 seconds, which isn't really realistic. If you evict the cache hourly instead, that cost drops to 1/600th: less than $0.01 per month.
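The back-of-envelope math above can be sketched in a few lines (using the per-request prices quoted in this thread; actual pricing varies by region and tier):

```python
# Rough S3 request-cost estimate for the eviction/checkpoint scenarios
# above. Prices are the S3 Standard figures quoted in the thread, not
# an official quote.
GET_PRICE = 0.0004 / 1000   # $ per GET request
PUT_PRICE = 0.005 / 1000    # $ per PUT request

MINUTES_PER_MONTH = 60 * 24 * 30

# Cold reads: 10 queries/min, ~20 GETs each, cache evicted every request
read_cost = 20 * 10 * MINUTES_PER_MONTH * GET_PRICE

# Checkpoints: 10 writes/min, 5 page groups + 1 manifest = 6 PUTs each
write_cost = 6 * 10 * MINUTES_PER_MONTH * PUT_PRICE

print(f"reads: ${read_cost:.2f}/mo, writes: ${write_cost:.2f}/mo")
# → reads: $3.46/mo, writes: $12.96/mo
```

Evicting hourly instead of every 6 seconds divides the GET count (and so the read cost) by 600.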
Summary: use S3 Express One Zone, don't evict the cache too often, checkpoint to S3 once a minute (turbolite lets you checkpoint either locally (disk-durability) or locally+S3 separately), and you're running a decent workload for pennies every month.
forcing the OS to authenticate the age of the person using the machine? age verification can easily be done by using the DMV, login.gov, or id.me as a conduit. these interfaces are already used by dozens of government agencies to authenticate citizens of the United States, and no special software needs to be installed on the machine
i don't know. i'm in my 50s, i've been doing software engineering work every day professionally since i was 15, and i can say claude code (max) has made me at least 20x more productive... it's definitely an improvement. I think what they've got is top notch; the competition doesn't come close at this point.
I tried to use Google's Gemini CLI from the command line on linux, and I think it let me type in two sentences before telling me I was out of credits... and then I started reading comments that it would destructively overwrite files [0] or, worse, just try to rewrite an entire existing codebase [1]. It just doesn't sound ready for prime time. I think they wanted to push something out to compete with Claude Code, but it's just really, really bad.
if someone submits a code revision that fixes a bug or adds a feature that most of your users found useful, do you reject it outright because it was not written by hand? or is this more about code that provides no benefit, doesn't actually work/compile, or introduces more bugs?
If you know what you're doing, you can achieve good results with more or less any tool, including a properly-wielded coding agent. The problem is people who _don't_ know what they're doing.
> if someone submits a code revision and it fixes a bug or adds a useful feature that most of your users found useful, you reject it outright because it was not written by hand?
If they didn't read it, then neither will I; otherwise we get this weird arms race where you submit 200 PRs per day to 200 different projects, wasting 1hr of each project's time, 200 hrs total, while spending only 8hrs of your own.
If your PR took less time to create and submit than it takes the maintainer to read, then you didn't read your own PR!
Your PR time is writing time + reading time. The maintainer time is reading time only, albeit more carefully.