Hacker News | new | past | comments | ask | show | jobs | submit | ponyous's comments

The site is currently returning a 503, so I can't read it. But I wonder: what should you consent to? Every dependency? Every dependency above 1GB?

Maybe consent is not an appropriate term. Perhaps an acknowledgement and a way to say "I don't want this" would be a more suitable approach. I feel like a flag to turn off LLMs is useful. Firefox added something like this in a recent release. I don't know how much they're downloading or how much they run it, nor would I be a good judge of whether it's necessary, but I don't want that functionality in my browser, so I turned it off.

Isn't that asking for consent?

This subject was confronted many years ago and is very well covered by EU privacy regulation. Google knows it very well, and in great detail, and I have no doubt they will be fined for this despite whatever reduction their lobbying (and corruption too, in my very personal opinion) buys them. That fact explains well why EU fines are based on a company's income.

Why would they be fined for this? In fact, a local LLM is exactly the opposite of a privacy concern: the answer is generated locally and never uploaded to a server.

There's a setting in `chrome://flags`, mentioned in the post, that allows users to turn this off. I guess people want opt-in rather than opt-out consent, which there's always debate about: some people argue opt-in degrades the experience for the majority of users who would opt in, for the sake of a few who may already be detractors.

Extra power and ram usage without your permission, for example.

Exactly. For all the hate Windows gets, I could at least just look for shit named Copilot and uninstall it for a pretty nice experience on my new computer. Phones aren't always as straightforward (especially jarring since "Google services" are required in Sweden on Android for things like mobile identity systems).

This is so absurd... I have to keep an old phone (rooted, in order to hide that adb is enabled) connected to my home server just to use such apps, because GrapheneOS without Google services is apparently not secure enough.

Read the article, it's not about that, but a mere 4GB of storage.

4GB of storage is not a “mere” thing, to the contrary.

It is in 2026. Average daily household usage is ~25 GB. That's the average, so 50% use more than that.

It sounds like you’re talking about network usage, but this is about storage.

Also, average doesn’t mean 50% lower and 50% higher.
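A quick stdlib-only Python sketch (the usage numbers are invented) of how a long tail pulls the mean well above the median, so far fewer than 50% of samples sit above it:

```python
import statistics

# Hypothetical daily usage in GB: one heavy household skews the tail.
usage_gb = [2, 3, 4, 5, 6, 8, 10, 200]

mean = statistics.mean(usage_gb)      # 29.75
median = statistics.median(usage_gb)  # 5.5

# Only 1 of 8 samples is above the mean here, not 50%.
above_mean = sum(1 for u in usage_gb if u > mean)
```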


I did mention "storage".

Oh and why is it there? Do you really think it's not loaded and executed automatically by default, so some Google executive can justify their "AI" spend?

I don’t. Do you have any actual evidence they’re doing that beyond the vibe?

Do I look like law enforcement? I don't have to presume innocence until proven guilty.

It's the tech company's problem to convince me they are trying to do something useful for me. Come to think of it, it's their problem to convince me they still understand "useful to the customer" in the first place.


Does that include the CPU-burning catgirl captchas or not?

It absolutely should.

Hello iOS upgrade.

Don't install Chrome in the first place then.

I'm logged in to work in Chrome and to personal stuff in Firefox :)

That ship has sailed on the web a long time ago.

Sounds like the words of someone that doesn't pay for their data use.

Silicon Valley is not the world.


Kimi is nowhere near GPT or Opus unfortunately. I really wish it was. I’m running evals where models have to generate code that produces 3D models and it’s obvious that it lacks spatial understanding and makes many more code errors before it succeeds.

Maybe it's better in one particular case here and there, and I think this blog post is an example of that.


Slightly OT, but after using DeepSeek V4 Pro for the last few weeks, I've found that it's basically on par with Opus... except when it comes to driving Blender. This isn't even a visual issue (DS isn't multimodal); for whatever reason, Opus just understands the Blender API a lot better.

There always seem to be pockets where closed frontier models perform slightly better.


Not everyone needs 3D models, to be fair.

Been following you guys for a while, and it seems like you've been gaining some traction recently. Lets goo, and congrats!

I have been working on GrandpaCAD[0] for a while, a very similar product. I thought of you as my biggest competitor, but I noticed recently that you are focusing more and more on professionals while I am focusing on total modeling noobs who just want to whip out a quick model. So I guess we are not competitors anymore?

My evals[1] show that Opus 4.7 and GPT 5.5 are very comparable in terms of generation quality, but GPT 5.5 is slower and costs sooo much more in my harness. And the original breakthrough model was Gemini 3.1. I'm curious: do you have more written about your benchmark setup?

If you want to chat email is in my profile. Btw, just met "your"(?) neighbour on a plane a couple of days ago. World is small.

[0]: https://grandpacad.com

[1]: https://grandpacad.com/en/blog/public-benchmarks-misled-me-o...


Lol which neighbour? That's so random

Gregor! Not sure I want to say more on here.

Ahhhhh mr browser use gotcha

Another library I have to integrate and benchmark against OpenSCAD for my AI SaaS[0]. I am really curious how constructive solid geometry compares to the sketch-and-extrude approach that CadQuery is built on.

Anyone curious about a writeup? I have a pretty good harness for evaluating 3D generation performance.

[0]: https://grandpacad.com


Why does the M1 Max project significantly higher revenue than the M3 Max with double the RAM?


http://grandpacad.com - a 3D modeling tool intended for creating 3D prints. AI-based. Allows for dimensionally accurate parts as well as organic shapes.

I massively improve it every month. Pretty proud of it.


As far as I know Reddit does the same thing. If you don't follow normal human patterns you will quickly get banned. I read one guy's report detailing each of the 8 times he got banned, and indeed none of the behaviour was anything crazy.


Does this use its own backend/engine? I've been working on an LLM-to-CAD tool[0] and have realised there are so many backends and options to choose from. Since that realisation I've been trying to find the best representation for an LLM. I think OpenSCAD is currently the best and most feature complete choice, but I definitely need to dig a bit deeper. If anyone has any pointers, I welcome them!

[0]: https://GrandpaCAD.com


> I think OpenSCAD is currently the best and most feature complete choice

As much as I love OpenSCAD, I would strongly disagree with your conclusion.

All the OpenSCAD language can do is boolean operations, and the engine can only implement those on polygonal (actually triangle) meshes.

That's a very far cry from what a modern commercial CAD engine can do.

For example, the following things are very, very hard to do, or even specify, using OpenSCAD:

   - Smooth surfaces, especially spline-based

   - Fillets / Chamfers between two arbitrary surfaces

   - Trimming surfaces

   - Querying partly built models and using the outcome in the subsequent construction (e.g. find the shortest segment between two smooth surfaces, building a cylinder around it and filleting it with the two surfaces, this is an effing nightmare to do within the confines of OpenSCAD)

   - Last but not least: there is no native constraint solver in OpenSCAD, neither in the language nor in the engine (unlike - say - SolveSpace)

I might have misunderstood what you're looking to do, but, yeah, digging deeper feels very much like the right thing to do.


(my) fncad doesn't have the querying, but it does have smooth CSG! https://fncad.github.io/


Using BOSL2 alleviates most of the issues I've run into with OpenSCAD for chamfers and the like, but sadly it is an extra set of functions you need to remember.

https://github.com/BelfrySCAD/BOSL2


> BOSL2 ... but it is an extra set of functions you need to remember sadly

It's also extremely slow: it implements chamfers and fillets using morpho, and if you have a large number of fillets, the morpho algorithms (minkowski / hull) are very much non-linear in time on polygonal meshes, which leads to compute-time explosion if you want a visually smooth result.


You can get around this somewhat by using coarser, less visually smooth previews while editing and higher quality only when you want an STL:

  // coarse facets while previewing, fine facets for the exported STL
  $fn = $preview ? 32 : 256;


I just ran into this today: https://github.com/gumyr/build123d - seems like an LLM should have no problem writing python code...
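For anyone who hasn't tried it: build123d parts are plain Python context managers, which is a lot of why LLMs handle it well. A minimal sketch, assuming `pip install build123d` (the part and its dimensions are made up for illustration):

```python
from build123d import BuildPart, Box, Cylinder, Mode

# Hypothetical part: a 40 x 20 x 5 plate with a 6 mm hole through it.
with BuildPart() as plate:
    Box(40, 20, 5)
    Cylinder(radius=3, height=5, mode=Mode.SUBTRACT)

solid = plate.part  # the resulting B-rep solid, exportable to STEP/STL
</imports>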



Yeah it does. In fact I believe it was written to demonstrate improved sketch constraint solving (there's a 2D version too).

Unfortunately aside from the better sketching the engine is not as capable as OpenCascade.


I have tried OpenSCAD, and it seems very slow to compile for display on the web. Are you using the official wasm build or some other way?


you may find this useful: https://phaestus.app/blog/blog0031

Edit: Forgot I also got doom running in openscad: https://www.mikeayles.com/blog/openscad-doom/

and doom running in openscad in the browser at https://doom.mikeayles.com/


Cool! It's way faster on desktop. I also recompiled it to include 3MF support. However, on mobile phones compilation is still slow even if I set $fa lower.


I export it as a .3mf file and display it with three.js on the web. Compilation seemed fast enough, a few seconds tops.


This "screenshot -> refine" loop is a great strategy, and I have built it into my 3D modeling product as well[0], but I had to disable it because it would often quadruple the costs and the product is already expensive.

I am on standby to enable it though, just need a price to drop a bit more!

[0]: https://grandpacad.com


My late maternal grandfather was Slovenian, so I enjoyed your project's backstory. I've mucked around with ChatGPT and OpenSCAD so can identify with that also. Great concept and best of luck!


Thank you!


Cool button


In the coworking space I'm in, people are hitting limits on the $60 plan all the time. They're thinking about which models to use to be efficient, which context to include, etc.

I'm on the Claude Code $100 plan and never worry about any of that stuff, and I think I'm using it much more than they use Cursor.

Also, I prefer CC since I am terminal native.


Tell them to use the Composer 1.5 model. It's really good, better than Sonnet, and has much higher usage limits. I use it for almost all of my daily work, don't have to worry about hitting the limit of my $60 plan, and only occasionally switch to Opus 4.6 for planning a particularly complex task.


