This is actually really nice. Web page feels pretty snappy, way more so than congress.gov. I've learned some interesting things just scrolling for a few minutes, like the "Energy Freedom Act" cutting appliance rebates or the constitutional amendment for a balanced budget (wtf).
I like this. My strategy to stay sane in US politics is to follow what the government is actually doing and avoid distractions from ragebait influencers or unhinged statements from politicians.
Thanks! The original goal of Govbase is to make the impact of US policy easy for citizens to understand: more government transparency, so people know how their representatives are spending their time and who they're actually working for.
> is there a correlation between academic performance and classroom laptops?
Yes, here is Dr. Horvath's (the neuroscientist mentioned in the article) written testimony which cites some studies.
The table in Section 3 is particularly damning. It shows how much each classroom intervention worsened or improved outcomes relative to the baseline. Note that the worst intervention is the "1-to-1 laptops" row.
It's unclear whether they mixed interventions; I'd have to read the cited studies. If the interventions were done in isolation, then that's basically a longitudinal study, which is a pretty clear smoking gun.
I currently have it limited to this "epoch" date while I tweak the prompts; once I feel the prompt is done cooking I will be letting it go back to 2007. But, also, gotta keep the lights on somehow ;)
> Your manager sees you shipping faster, so the expectations adjust. You see yourself shipping faster, so your own expectations adjust. The baseline moves.
This problem has been going on for a long time; Helen Keller wrote about it almost 100 years ago:
> The only point I want to make here is this: that it is about time for us to begin using our labor-saving machinery actually to save labor instead of using it to flood the nation haphazardly with surplus goods which clog the channels of trade.
I feel like most of the anxiety around LLMs is because (in the USA at least) our social safety net sucks.
I'd probably have way more fun debating LLMs if it wasn't tied to my ability to pay rent, have healthcare, or feel like a valued person contributing something to society. If we had universal healthcare and a federal job guarantee it would probably calm things down.
Don't let the hysteria and the astroturfing get to you. Things are nowhere near as advanced as a bunch of hyper-excited project managers, average developers, and lab propagandists like to claim.
The more you use code agents, the clearer their limitations will become.
Everyone selling them as some silver bullet is either astroturfing, riding the wave as a newly minted "expert/early adopter", or has no fucking clue what they're talking about.
What's more interesting is how the big names in our industry, the ones who already made their money as you say, have turned quickly since the end of 2025. I think even the most old school names can see that the writing is on the wall now.
They've changed their opinions/public stances completely, sometimes within months: for example, from "I love programming" to "if you aren't with AI you are behind" in maybe only six months. It's almost like anyone with a public opinion in software is being paid to say it now and boost the AI hype.
I want to find more art like this that updates in real-time, then I feel like I actually appreciate it in the long term. With regular static pictures on the wall I tune them out after they've been there a few months.
Disclosure: I work for Josh, and I can tell you that he's thought quite deeply about the negative implications of the agents that are coming. Along with enumerating the ways AI agents will transform knowledge work, this piece points out the ways we might come to regret it.
> Even if this plays out over 20 or 30 years instead of 10 years, what kind of world are we leaving for our descendants?
> What should we be doing today to prepare for (or prevent) this future?
No, "motivation" is what puts one into motion, hence the name. AIs have constraints and even agendas, which can be triggered by a prompt. But it's not action, it's reaction.
DeepSeek may produce a perfectly good web site explaining why Taiwanese independence is not a thing and how Taiwan wants to return to the mainland. But it won't produce such a web site of its own motivation, only in response to an external stimulus.
Right. I think 'constraint' is more accurate than 'agenda'... but yes, LLMs are quite inhuman, so the words used for humans don't really apply to them.
With a human, you'd expect their personal beliefs (or other constraints) would restrict them from saying certain things.
With LLM output, sure, there are constraints and such, where in some cases the output is biased or maybe even resembles belief... But it doesn't make sense to ask an LLM "why did you write that? what were you thinking?".
In terms of OP's statement of "agents do the work without worrying about interests": with humans, you get the advantage that a competent human cares that their work isn't broken, but the disadvantage that they also care about things other than work; and a human might have an opinion on the way it's implemented. With LLMs, just a pure focus on output being convincing.
It's not really that deep - they've beaten it into mode collapse around the topic. Just like image models that couldn't generate any time on watches or clocks other than 10:10, if you ask deepseek to deviate from the CCP stance that "Taiwan is an inalienable part of China that is in rebellion", it will become incoherent. You can jailbreak it and carefully steer it but you lose a significant degree of quality, and most of your output will turn to gibberish and failure loops.
Any facts that depend on the reality of the situation, like Taiwan being an independent country, are disregarded, so conversations or tasks that involve that topic even tangentially can crash out. It's a ridiculous thing to do to a tool, like filing your knife's blade dull to make it "safe", or putting a 40 mph speed limiter on your Lamborghini.
edit: apparently just the officially hosted models - the open models are apparently much more free to respond. Maybe forcing it created too many problems and they were taking a PR hit?
“What” comments can be quite nice to quickly parse through code.
But I don’t think they’re worth it. My issue with “what” comments is they’re brittle and can easily go out of sync with the code they’re describing. There’s no automation like type checking or unit tests that can enforce that comments stay accurate to the code they describe. Maybe LLMs can check this but in my experience they miss a lot of things.
When “what” comments go out of sync with the code, they spread misinformation and confusion. This is worse than no comments at all so I don’t think they’re worth it.
“Why” comments tend to be more stable and orthogonal to the implementation.
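To make the distinction concrete, here's a small illustrative sketch in Python (the function and scenario are made up for illustration): the "what" comment restates the code and can silently rot when the code changes, while the "why" comment records intent that survives implementation changes.

```python
import time

# "What" comment (brittle): it restated the code and silently went
# stale when the retry count was later raised from 3 to 5.
# Retry the request up to 3 times.
MAX_RETRIES = 5

def fetch_with_retry(fetch, retries=MAX_RETRIES, backoff_s=0.1):
    """Call fetch() until it succeeds or retries are exhausted."""
    # "Why" comment (stable): the upstream API rate-limits bursts,
    # so we back off between attempts instead of hammering it.
    last_err = None
    for attempt in range(retries):
        try:
            return fetch()
        except Exception as err:
            last_err = err
            time.sleep(backoff_s * (2 ** attempt))  # exponential backoff
    raise last_err
```

Nothing in the type checker or test suite would flag the stale "3" in the first comment; the rate-limiting rationale, by contrast, stays true no matter how the loop is rewritten.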