Maybe consent is not an appropriate term. Perhaps an acknowledgement and a way to say "I don't want this" would be a more suitable approach. I feel like a flag to turn off LLMs is useful. Firefox added something like this in a recent release. I don't know how much they're downloading or how often they run it, nor would I be a good judge of whether it's necessary, but I don't want that functionality in my browser, so I turned it off.
This subject was tackled many years ago and is very well covered by EU privacy regulations. Google knows this in great detail, and I have no doubt they will be fined for it, despite whatever reductions they manage through lobbying (and corruption too, in my very personal opinion). That's exactly why EU fines are based on a company's income.
Why would they be fined for this? A local LLM is exactly the opposite of a privacy concern: the answer is generated locally and never uploaded to a server.
There's a setting in `chrome://flags` mentioned in the post that allows users to turn this off. I guess people want opt-in consent rather than opt-out consent, which there's always debate about. Some say opt-in degrades the experience for the majority of users who would have opted in, for the sake of a few users who were possibly already on their way out.
Exactly, for all the hate of Windows, I could at least just look for shit named co-pilot and uninstall it for a pretty nice experience on my new computer. Phones aren't always as straightforward (especially jarring as "Google services" are required in Sweden on Android for stuff like mobile identity systems).
This is so absurd... I have to keep an old phone (rooted, in order to hide that adb is enabled) connected to my home server just to use such an app, because GrapheneOS without Google services is apparently not secure enough.
Oh and why is it there? Do you really think it's not loaded and executed automatically by default, so some Google executive can justify their "AI" spend?
Do I look like law enforcement? I don't have to do innocent until proven guilty.
It's the tech company's problem to convince me they are trying to do something useful to me. Come to think of it, it's their problem to convince me they still understand "useful to the customer" first.
Kimi is nowhere near GPT or Opus unfortunately. I really wish it was. I’m running evals where models have to generate code that produces 3D models and it’s obvious that it lacks spatial understanding and makes many more code errors before it succeeds.
Maybe it’s better in one particular case here and there, and I think this blog post is an example of that.
Slightly OT, but after using DeepSeek V4 Pro for the last few weeks, I’ve found that it’s basically on par with Opus…except when it comes to driving Blender. This isn’t even a visual issue (DS isn’t multimodal), for whatever reason Opus just understands the Blender API a lot better.
There always seem to be pockets where closed frontier models perform slightly better.
Been following you guys a while, seems like you've been gaining some traction recently. Let's goo, and congrats!
I have been working on GrandpaCAD[0] for a while, a very similar product. I thought of you as my biggest competitor, but I noticed recently that you're focusing more and more on professionals while I'm focusing on total noobs at modeling who just want to whip out a quick model. So I guess we're not competitors anymore?
My evals[1] show that Opus 4.7 and GPT 5.5 are very comparable in terms of generation quality, but GPT 5.5 is slower and costs sooo much more in my harness. And the original breakthrough model was Gemini 3.1. I'm curious: do you have more written about your benchmark setup?
If you want to chat email is in my profile. Btw, just met "your"(?) neighbour on a plane a couple of days ago. World is small.
Another library I have to integrate and benchmark against OpenSCAD for my AI SaaS[0]. I'm really curious how constructive solid geometry compares to the sketch-and-extrude approach that CadQuery is built on.
Anyone interested in a writeup? I have a pretty good harness for evaluating 3D generation performance.
As far as I know, Reddit does the same thing. If you don't follow normal human patterns you will quickly get banned. I read some guy's report where he got banned 8 times, detailing each behaviour, and indeed it was nothing crazy.
Does this use its own backend/engine? I've been working on an LLM-to-CAD tool[0] and have realised there are so many backends and options to choose from. Since that realisation I've been trying to find the best representation for an LLM. I think OpenSCAD is currently the best and most feature-complete choice, but I definitely need to dig a bit deeper. If anyone has any pointers, I welcome them!
> I think OpenSCAD is currently the best and most feature complete choice
As much as I love OpenSCAD, I would strongly disagree with your conclusion.
All the OpenSCAD language can do is boolean operations, and moreover the engine can only apply them to polygonal (actually triangle) meshes.
That's a very far cry from what a modern commercial CAD engine can do.
For example, the following things are very, very hard to do, or even to specify, using OpenSCAD:
- Smooth surfaces, especially spline-based
- Fillets / Chamfers between two arbitrary surfaces
- Trimming surfaces
- Querying partly built models and using the outcome in the subsequent construction (e.g. find the shortest segment between two smooth surfaces, building a cylinder around it and filleting it with the two surfaces, this is an effing nightmare to do within the confines of OpenSCAD)
- Last but not least: there is no native constraint solver in OpenSCAD, neither in the language nor in the engine (unlike - say - SolveSpace)
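To make the contrast concrete, here's roughly what modeling in OpenSCAD looks like (a minimal sketch, not from the original comment): everything reduces to CSG booleans on meshes, and there is no built-in way to blend the resulting edges.

```openscad
// Boolean CSG is all you get: a plate with a hole is trivial...
difference() {
    cube([40, 40, 5], center = true);
    cylinder(h = 10, d = 8, center = true, $fn = 64);
}

// ...but there is no fillet() or chamfer() primitive to round the
// edge where the hole meets the plate. You'd have to fake it by
// subtracting a rotate_extrude() of a hand-built 2D profile, or
// resort to minkowski() tricks.
```

This is why the parent's examples (fillets between arbitrary surfaces, trimming, querying partly built geometry) fall outside what the language can express.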
I might have misunderstood what you're looking to do, but, yeah, digging deeper feels very much like the right thing to do.
Using BOSL2 alleviates most issues I've run into with OpenSCAD for chamfers and the like, but sadly it is an extra set of functions you need to remember.
> BOSL2 ... but it is an extra set of functions you need to remember sadly
It's also extremely slow: it implements chamfers and fillets using morphological operations, and with a large number of fillets those algorithms (minkowski / hull) are very much non-linear in time on polygonal meshes, which leads to a compute-time explosion if you want a visually smooth result.
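For anyone unfamiliar with either approach, here's a rough sketch (parameter names from memory, so double-check against the BOSL2 docs): BOSL2 primitives take chamfer/rounding arguments directly, while plain OpenSCAD's closest equivalent is a minkowski sum with a sphere, whose cost scales with the facet counts of both operands.

```openscad
include <BOSL2/std.scad>

// BOSL2 exposes edge treatments as parameters on its primitives:
cuboid([30, 20, 10], chamfer = 2);            // chamfer all edges
right(40) cuboid([30, 20, 10], rounding = 3); // round all edges

// Plain OpenSCAD's rough equivalent: minkowski sum with a sphere.
// Cost grows with (sphere facets x mesh facets), hence the blow-up
// the parent describes when you want many smooth fillets.
right(80) minkowski() {
    cube([24, 14, 4]);
    sphere(r = 3, $fn = 32);  // higher $fn = smoother but far slower
}
```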
Cool! It's way faster on desktop. I also recompiled to include 3MF support. However, on mobile phones compiling is still slow even if I lower the facet resolution via $fa.
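One common trick for this (a generic OpenSCAD idiom, not something from the parent comment) is to key the facet settings off `$preview`, so interactive previews stay coarse and fast while the final render gets full quality:

```openscad
// Coarse facets during F5 preview, fine facets only for the F6 render:
$fa = $preview ? 12 : 2;    // max angle per facet (degrees)
$fs = $preview ? 2 : 0.25;  // min facet size (mm)

cylinder(h = 20, d = 10);
```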
This "screenshot -> refine loop" is a great strategy, and I built it into my 3D modeling product as well[0], but I had to disable it because it would often quadruple the costs, and the product is already expensive.
I'm on standby to re-enable it, though; I just need prices to drop a bit more!
My late maternal grandfather was Slovenian, so I enjoyed your project's backstory. I've mucked around with ChatGPT and OpenSCAD, so I can identify with that too. Great concept and best of luck!
In the coworking space I'm in, people are hitting limits on the $60 plan all the time. They're thinking about which models to use to be efficient, what context to include, etc.
I’m on the Claude Code $100 plan and never worry about any of that stuff, and I think I'm using it much more than they use Cursor.
Tell them to use the Composer 1.5 model. It's really good, better than Sonnet, and has much higher usage limits. I use it for almost all of my daily work, don't have to worry about hitting the limit of my $60 plan, and only occasionally switch to Opus 4.6 for planning a particularly complex task.