"What's the good of Mercator's North Poles and Equators,
Tropics, Zones, and Meridian Lines?"
So the Bellman would cry: and the crew would reply
"They are merely conventional signs!
I feel like we would disagree on the role of immigration in the US but I really appreciate you calling out how the current administration’s approach is only effective at making viral clips online. Meta comment, but it’s refreshing to talk with people who have different goals while still referencing a shared reality. Removing the masks and adding cameras shouldn’t be controversial unless your goal really is to make a paramilitary force for the president.
Illegal immigration has been a political issue in the USA for a long time now. Trump is, however cruelly, fulfilling a campaign promise. One of the few he's actually managed to keep.
The unstated but obvious (to me?) goal of what ICE is doing is not to get large numbers of people out of the country, but to drive costs down for migrant labor by further disenfranchising them, making them scared, marginal, etc.
If they actually thoroughly evicted non-status migrant workers they'd have an outright revolt on their hands from farmers and other businesses that depend on them.
Instead those businesses can now take further advantage of the fear of harassment and/or deportation to drive down compensation and rights.
Contrast with countries like Canada that have a legal temporary foreign agriculture worker program that provides a regulated source of seasonal migrant farm worker labour under a non-citizen temporary status, but with some rights (still often abused). It's notable to me as a Canadian that I don't see this being advocated on any large scale by either party in the US.
Anyways, all this just to say that the jackboot clown theater is the point, not a side effect.
Limiting the supply of migrant labor drives costs up, not down, and the ICE raids have had a significant negative effect on businesses reliant on illegal immigrants.
Do you have numbers on how many migrant farm workers have actually been deported or detained?
Because going around harassing and deporting other, non-essential non-status immigrants would still drive labor costs down, because of the chill it puts through those who are grudgingly tolerated.
And besides, given the quality of personality ICE seems to be employing even (especially) at its highest levels, I simply assume there's corruption such that if I'm a large orchard or whatever I simply pay ICE to stay away.
"There was a significant drop-off in entries to the United States in 2025 relative to 2024 and an increase in enforcement activity leading to removals and voluntary departures. We estimate that net migration was between –10,000 and –295,000 in 2025, the first time in at least half a century it has been negative."
Honestly think it’s just a matter of resources and they would rather play theater for their leader than actually do the job. However, the effect has been felt.
It's not merely a matter of detainment or deportation. Racial minorities, not just immigrants, face intimidation tactics. These guys are walking into schools, they're walking into social security offices, and courthouses. They stand around menacingly just to scare people. They harass random passersby on the street, or in the grocery store. You would feel unsafe and stressed if this happened to you, no matter your circumstances.
Unfortunately, the most extreme part is that it's now the new normal that there's a >0 chance that someone, whether they are a US citizen or not apparently, child or adult, can end up in a camp with no due process.
I think it would be a useful exercise to look at all the revocations of legal access in the US, and then do the division to see how much we've increased the likelihood of someone becoming "illegal", and therefore targeted.
I don't think you're as right as you want to believe. Certainly not as right as I want you to believe.
This wasn’t a statement about capability. It’s just a detail about what model they used to compare the speed of two chips for this purpose. You want a bigger model, run a bigger model.
This problem space is not small enough to stay within current LLM attention span. A sufficiently good agent setup might be able to help maintain docs somewhat through changes, but organizing them in an approachable way covering all the heuristics spread across so many places and external systems with a huge amount of time and versioning multivariate factors is hugely troublesome for current LLM capabilities. They're better at simpler problems, like typing the code.
The comment you're responding to is obnoxious, but authoring in Google Docs and exporting to HTML+CSS would be viable and is 10x more accessible than the simultaneously over- and underengineered toolchains and work practices that the professional web developer class has turned out, and doesn't produce output substantially worse than the now-widespread practice of sending mangled/minified payloads to UAs, which in the worst cases involves turning content into opaque blobs that you have to squirt through a JS runtime to get anything meaningful on screen.
The state of the art in Web publishing is such a mess that paid practitioners have, seemingly without realizing it, quietly eliminated the main reason why anyone should even have an expert handle the "lowering" from concept to HTML+CSS instead of using a quasi-WYSIWYG tool or some other crummy sitebuilder and working with whatever shoddy markup they give you.
berkshirehathaway.com (https://berkshirehathaway.com/) makes the rounds every now and then, and people ooh and aah over it in the comments, but you can tell they never really get it, because then they just turn around and dump their next Vercel-hosted monstrosity on the world.
Same? Not quite as good as that. But Google's Gemma 3 27B is highly similar to their last Flash model. The latest Qwen3 variants are very good; for my needs at least they are the best open coders. But really, here's the thing:
There are so many varieties, specialized to different tasks or simply different in performance.
Maybe we’ll get to a one-size fits all at some point, but for now trying out a few can pay off. It also starts to build a better sense of the ecosystem as a whole.
For running them: if you have an Nvidia GPU with 8GB of VRAM you're probably able to run a bunch, quantized. It gets a bit esoteric when you start getting into quantization varieties, but generally speaking you should find out the sort of integer and float math your GPU has optimized support for, then choose the largest quantized model that matches that support and still fits in VRAM. Most often that's what will perform best in both speed and quality, unless you need to run more than one model at a time.
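As a rough back-of-the-envelope check (a sketch only, not what any real loader computes; actual runtimes add KV-cache and activation overhead of a few GB on top), the weight memory of a quantized model scales as parameters × bits-per-weight:

```python
def est_weight_vram_gb(params_billions: float, bits_per_weight: float) -> float:
    """Rough VRAM needed for just the weights of a quantized model.

    params_billions: model size in billions of parameters.
    bits_per_weight: e.g. 16 for fp16, 8 for an 8-bit quant, 4 for a 4-bit quant.
    Ignores KV cache and activations, which add a few GB in practice.
    """
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB

# A 30B model at 4 bits needs ~15 GB for weights alone, which is why
# it only fits on a 16 GB card when heavily quantized:
print(est_weight_vram_gb(30, 4))   # 15.0
# The same model unquantized at fp16 would need ~60 GB:
print(est_weight_vram_gb(30, 16))  # 60.0
# A 7B model at 4 bits fits comfortably in 8 GB of VRAM:
print(est_weight_vram_gb(7, 4))    # 3.5
```

The numbers line up with the hardware anecdotes below: a heavily quantized 30B-class model is about the ceiling for a 16 GB card.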
To give you a reference point on model choice, performance, gpu, etc: one of my systems runs with an nvidia 4080 w/ 16GB VRAM. Using Qwen 3 Coder 30B, heavily quantized, I can get about 60 tokens per second.
I get tolerable performance out of a quantized gpt-oss 20b on an old RTX3050 I have kicking around (I want to say 20-30 tokens/s, or faster when the cache is effective). It's appreciably faster on the 4060. It's not quite ideal for more interactive agentic coding on the 3050, but approaching it, and fits nicely into "coding in the background while I fiddle on something else" territory.
Yeah, tokens per second can very much influence the work style and therefore the mindset a person should bring to usage. You can also build on the results of a faster but less-than-SOTA-class model in different ways. I can let a coding-tuned 7-12B model "sketch" some things at higher speed, or even a variety of things, review in real time, and pass off to a slower, more capable model to say "this is structurally sound, or at least the right framing; tighten it all up in the following ways…" and run that in the background.
The run-at-home suggestion was in the context of $2k/mo. At that price you can get your money back on self-hosted hardware at a much more reasonable pace compared to $20/mo (or even $200).
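To make the break-even point concrete (the $6,000 rig price is a made-up illustration; only the $2k/$200/$20 monthly figures come from the thread):

```python
# Months until a self-hosted rig pays for itself versus a subscription.
# hardware_cost is a hypothetical figure for a capable local-inference box.
hardware_cost = 6000

for monthly in (2000, 200, 20):
    months = hardware_cost / monthly
    print(f"vs ${monthly}/mo: breaks even in {months:g} months")
```

Against $2k/mo the hypothetical rig pays for itself in a few months; against $20/mo it takes decades, which is the asymmetry being described.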
Well, there's an open-source GPT model you can run locally. I don't think running models locally is all that cheap, considering top-of-the-line GPUs used to be $300 and now you're lucky to get the best GPU for under $2,000. The better models require a lot more VRAM. Macs can run them pretty decently, but now you're spending $5,000+ when you could have just bought a rig with a 5090 and mediocre desktop RAM, because Sam Altman has ruined the RAM pricing market.
Fully aware, but who the heck wants to spend nearly 10 grand, and that's with just a 1TB hard drive (which needs to be able to fit your massive models, mind you). Fair warning: not ALL the RAM is fully unified. On my 24GB RAM MacBook Pro I can only use 16GB as VRAM, but it's still better than using my 3080 with only 10GB of VRAM, and I also didn't spend more than 2 grand on it.
I got some decent mileage out of aider and Gemma 27B. The one shot output was a little less good, but I don’t have to worry about paying per token or hitting plan limits so I felt more free to let it devise a plan, run it in a loop, etc.
Not having to worry about token limits is surprisingly cognitively freeing. I don’t have to worry about having a perfect prompt.