I suppose even then it would have been obvious to anyone traveling outside London that, hmmm, the fog/smog goes away out here. Only in major cities… What could it be?
I strongly suspect that most of the things we now know to be problematic were also known to be problematic to the ancients, but were thought still to be worth it for their rewards. That’s pretty much where we still are today. Nobody likes breathing pollution, everybody likes modernity.
> Basically, every city hall is like the show Parks and Recreation from a competency situation. Then it’s about rubbing each other’s back and staying in power.
Fun side note: the outdoor establishing shots for Pawnee city hall used Pasadena's City Hall, which is right next to LA. It always threw me off when watching Parks and Rec, especially since they never had a shot of it snowing.
> Great testers are worth their weight in gold and delight in ruining programmer's days all day long.
Side note: all the great testers I've known from when my employers had separate QA departments ended up becoming programmers, either by studying on the side or through in-house mentorship. By all secondhand accounts they've become great programmers too.
Whether it's unique IDs or names, the problem is the same: topology changes destroy the things you’re identifying. When you have a box and you assign ID face_007 (or a generated unique ID) to its top face, that works fine until you fillet an edge adjacent to that face. Now the kernel has to recompute the geometry and depending on the operation, face_007 might still exist in a different shape, might split into multiple faces, or be destroyed completely.
The geometric kernel is doing boundary representation operations, so when you do a boolean or a fillet, it doesn’t “edit” existing faces, it computes an entirely new b-rep from scratch. The old faces, edges, and vertices are gone and new ones are created to replace them. There’s nothing to hang a persistent ID on because the entities themselves are ephemeral.
There are solutions to the problem but they all break down eventually. I think FreeCAD uses topological tracing and naming schemes, so it encodes a face’s identity by how it was created, e.g. “the face generated by the intersection of extrude_1 and the XY plane.” The problem then is that parameter changes or operation insertions in the history can destroy those too, creating a new feature that can’t be easily mapped to the old ones. That’s where all the heuristics come in.
Unique IDs are used internally, but they only last for the lifetime of one evaluation. The hard part is establishing the equivalence between entities across re-evaluations when the topology itself may have changed.
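As a toy illustration of the kind of heuristic remapping involved (everything here is invented for the example — real kernels match on full B-rep data and much richer signatures, not three attributes):

```python
import math

# Toy face records: centroid, unit normal, area. Just enough to
# demonstrate geometric matching across two evaluations.
def face(cx, cy, cz, nx, ny, nz, area):
    return {"centroid": (cx, cy, cz), "normal": (nx, ny, nz), "area": area}

def similarity(a, b):
    """Lower is better: centroid distance plus penalties for normal and
    area mismatch. The weights are arbitrary demo values."""
    d = math.dist(a["centroid"], b["centroid"])
    dot = sum(x * y for x, y in zip(a["normal"], b["normal"]))
    return d + (1 - dot) * 10 + abs(a["area"] - b["area"]) * 0.1

def remap(old, new, threshold=5.0):
    """Greedily carry old IDs onto the faces produced by a rebuild.
    Old faces with no good match stay unmapped -- the 'breakdown' case."""
    mapping, used = {}, set()
    for oid, of in old.items():
        best = min(
            (nid for nid in new if nid not in used),
            key=lambda nid: similarity(of, new[nid]),
            default=None,
        )
        if best is not None and similarity(of, new[best]) < threshold:
            mapping[oid] = best
            used.add(best)
    return mapping

# A box's top face before a fillet, and two faces after the rebuild:
old = {"face_007": face(0, 0, 10, 0, 0, 1, 100)}
new = {"f_12": face(0, 0, 10, 0, 0, 1, 92),   # shrunken top face
       "f_13": face(5, 0, 5, 1, 0, 0, 80)}    # newly created side face
print(remap(old, new))  # {'face_007': 'f_12'}
```

The threshold is where it gets ugly: set it too loose and an unrelated face inherits an ID (and all the dimensions attached to it); too strict and a face that clearly survived gets orphaned.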
This isn’t basic CRUD software that just needs some Postgres constraints to model trivial business logic. These are genuinely hard problems that mathematicians have been working on for decades.
So why is it that every 3D CAD program other than FreeCAD seems to have something that solves this problem "well enough" that most people doing simple designs (aka everything you could possibly print on a 3D printer) don't seem to bump into it?
Do you know of any free/open-source examples that solve this problem well?
I'm not an expert, but these types of heuristics intuitively seem hard to model. The goal is to guess the user's design intent. There's often no single correct answer, it may require information from parts of the system that the core application's model doesn't have, there may be many heuristics depending on what's being designed.
These heuristics seem like exactly the sort of thing that commercial CAD applications can afford to spend resources on, and that open-source community-driven applications would struggle with.
> Do you know of any free/open-source examples that solve this problem well?
No. Sorry, I should have been clearer. None of the open source programs handle this.
> These heuristics seem like exactly the sort of thing that commercial CAD applications can afford to spend resources on, and that open-source community-driven applications would struggle with.
This seems like one of those things where "the industry" converged to a solution and the people outside of it simply don't know what it is.
Most CAD software has a mapping algorithm that remaps the new features to the old features after a topological restructure, using a combination of topological ID systems and heuristics.
Solidworks and Onshape don’t “hide” it better: their algorithms are better, and only break down in much more complex models than FreeCAD’s does. Each one also tends to have its own quirks, so as you learn the software you build up some intuition for how to model certain features without angering the topological naming gods.
I don’t think I’ve ever seen Solidworks break down in a simple model, it’s always been in complex shapes using advanced features.
In that era I remember they had mostly fidget spinners. I think the staff were really bored at that point because half the fidget spinner inventory was in those clear plastic security boxes for what was a useless $10 toy.
I have a collection of photos from a trip I took to a Fry's shortly before its demise. Some of the things I saw on display were:
- Multiple aisles of nothing but cheap, no-name hand sanitizer (all the same kind, too - not a broad selection)
- Another entire aisle of pepper spray (again, a single item, just spread out really thin)
- Cheap LED bulbs (the screw-in kind, not components)
- Portable fans
- Bluetooth party speakers (really big ones that looked like oversized roll-aboard luggage)
I don't remember seeing any fidget spinners - but I wouldn't be surprised if some of the shelf-filler items were stocked on a store-by-store basis. It certainly didn't give me the impression of a carefully planned operation.
Haven’t we been seeing libraries that implement this pattern for going on two years now? Take the docstring and monkey-patch the function with LLM-generated code, with optional caching against an AST hash key.
The reason it hasn’t taken off is that it’s a supremely bad and unmaintainable idea. It also just doesn’t work very well, because the LLM doesn’t have access to the rest of the codebase without an agentic loop to ground it.
The real reason it’s bad is that it isn’t actually easier or more productive to do this:
> You write a Python function with a natural language specification instead of implementation code. You attach post-conditions – plain Python assertions that define what correct output looks like.
Vs
> You write a Python function with ~~a natural language specification instead of~~ implementation code.
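For reference, a minimal sketch of the monkey-patching pattern being described, with the LLM call stubbed out (`llm_generate`, `from_spec`, and the cache layout are all hypothetical names for this example; a real library would call a model API and hash the function’s AST rather than the raw docstring):

```python
import functools
import hashlib

_cache = {}  # spec hash -> generated source, so unchanged specs skip regeneration

def llm_generate(spec: str) -> str:
    """Hypothetical stand-in for an LLM call. Hard-coded so the demo runs;
    a real version would send `spec` to a model and return its code."""
    return "def impl(x):\n    return x * 2\n"

def from_spec(fn):
    """Replace fn's body with generated code, cached by a hash of the spec."""
    spec = fn.__doc__ or ""
    key = hashlib.sha256(spec.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = llm_generate(spec)
    ns = {}
    exec(_cache[key], ns)  # compile the generated implementation
    impl = functools.wraps(fn)(ns["impl"])

    def checked(*args, **kwargs):
        result = impl(*args, **kwargs)
        # Post-condition: a plain assertion on the output, per the pattern.
        assert isinstance(result, int), "post-condition failed"
        return result
    return checked

@from_spec
def double(x):
    """Return x multiplied by two."""

print(double(21))  # 42
```

Even in this toy form the maintainability problem is visible: the code that actually runs never appears in the repo, so there is nothing to review, diff, or debug except the cache.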
Local liquidation firms, and those in nearby metros like Baltimore where the biotechs are, might also be able to help. I’ve seen entire pallets of Chromebooks sell relatively cheaply at auction. I doubt the liquidator will be able to do it for free, but they can ask clients if they want to donate certain lots instead of auctioning them off, or if you find a sponsor they might be able to notify you and give you first dibs when a lot shows up.
The largest FPGAs have on the order of tens of millions of logic cells/elements. They’re not even remotely big enough to emulate these designs except to validate small parts of them at a time, and unlike memory chips or GPUs, companies don’t need millions of them to scale infrastructure.
(The chips also cost tens of thousands of dollars each)
London first tried to ban burning coal within the city in 1306 due to the air quality.