Hacker News | extrabajs's comments

I don’t think this is a good comparison. Ada (on which Spark is based) has every safety feature and guardrail under the sun, while C++ (or C) has nothing.

There is a lot of tooling for C though, just not in mainstream compilers.

A control? This is just a list of incidents, not an experiment.

The "Why this matters" section at the bottom is clearly drawing conclusions as if it were an experiment.

Not really, no.

Yes really, yes.

You mean like consoles?


Kinda, games ported to mobile also.


Porting to mobile comes with a dumbing down of the control scheme, and thus a dumbing down of game complexity, since otherwise you can't control it.

And they turn into "free" with IAPs.

Rather not.


> lacking the skill set to leverage AI

Is it possible that your job is simply not that difficult to begin with?


yes, but so are most jobs like mine


What job is so difficult that LLMs can't allow an experienced user an order of magnitude gain in efficiency?


An order of magnitude, really? An experienced user with an LLM is going to accomplish in 2026 what would have otherwise taken until 2036?


Yes, I personally think so. In the hands of an experienced user, you can crank out work that would otherwise take days or even weeks, and get to the meat of the problem you care about much quicker. Just churning out bespoke boilerplate code is a massive time saver, as is using LLMs to narrow in on docs, features, etc. Even high-level mathematicians are beginning to incorporate LLM use (early days though).

I can't think of an example where an LLM will get in the way of 90% of the stuff people do. The 10% will always be bespoke and need a human to drive it forward, as they are the ones that create demand for the code, work, etc.


sounds about right


The problem is many users are not experienced. And the more they rely on AI to do their work, the less likely they are to ever become experienced.

An inexperienced junior engineer delegating all their work to an LLM is an absolute recipe for disaster, both for their coworkers and the product. Code reviews take at least 3x as long. They cannot justify their decisions because the decisions aren't theirs. I've seen it first hand.


I agree totally; most people are not experienced, and there is a weird situation where the productivity gains are bifurcated. I have also seen a lot of developers unable to steer the LLM, as they can’t pick up on issues they would otherwise have learned through experience. Interesting to see what will happen, but probably gonna be a shit show for younger devs.


> They were very inconvenient.

They were also very affordable!


They must’ve had a really robust kind of CD wherever you lived, then. Like everyone else, I wore out a lot of discs simply by storing them outside their cases.


Do you mean the OG audio CDs made at the audio CD factory, or those newfangled CD-Rs?


Both, until I discovered the toothpaste-buffing trick.


Did that work? I’ve heard everything already, from it being a wonder solution to it destroying the discs even further (if I had to guess, they used the kind of toothpaste with little stones in it?).


Statistically significant... sample size? Support the hypothesis?


What is Fig. 1 showing? Is it the value of the integral compared with two approximations? Would it not be more interesting to show the error of the approximations instead? Asking for a friend who isn’t computing a lot of integrals.


Fig 1 could use a rethink. It uses log scale, but the dynamic range of the y-axis is tiny, so the log transform isn't doing anything.

It would be better shown as a table with 3 numbers. Or, maybe two columns, one for integral value and one for error, as you suggest.


Yeah - my guess is this was just a very roundabout solution for setting axis limits.

(For some reason, plt.bar was used instead of plt.plot, so the y axis would start at 0 by default, making all results look the same. But when the log scale is applied, the lower y limit becomes the data’s minimum. So, because the dynamic range is so low, the end result is visually identical to having just set y limits using the original linear scale).

Anyhow, for anyone interested, the values for those 3 points are 2.0000 (exact), 1.9671 (trapezoid), and 1.9998 (gaussian). The relative errors are 1.6% vs. 0.01%.
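If anyone wants to see why the gaussian rule wins so decisively, here's a minimal sketch (my own, not the article's code; the integrand sin(x) on [0, pi] and the 3-point budget are assumptions for illustration) comparing the two rules at the same number of function evaluations:

```python
import math

# Hypothetical example: approximate the integral of sin(x) on [0, pi],
# whose exact value is 2, using 3 function evaluations for each rule.
f = math.sin
a, b = 0.0, math.pi
exact = 2.0

# Composite trapezoid rule with 3 points (2 intervals).
h = (b - a) / 2
trap = h * (0.5 * f(a) + f(a + h) + 0.5 * f(b))

# 3-point Gauss-Legendre rule, mapped from [-1, 1] to [a, b].
nodes = [-math.sqrt(3 / 5), 0.0, math.sqrt(3 / 5)]
weights = [5 / 9, 8 / 9, 5 / 9]
mid, half = (a + b) / 2, (b - a) / 2
gauss = half * sum(w * f(mid + half * t) for w, t in zip(weights, nodes))

print(f"trapezoid: {trap:.4f}, rel. error {abs(trap - exact) / exact:.2%}")
print(f"gaussian:  {gauss:.4f}, rel. error {abs(gauss - exact) / exact:.2%}")
```

With the same 3 evaluations, the trapezoid rule lands around 1.57 while Gauss-Legendre is within ~0.1% of the exact value, since an n-point Gaussian rule is exact for polynomials up to degree 2n-1.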


Guessing from the text that they’re running the (interactive) bytecode compiler + interpreter version of OCaml, which is much slower.


I don’t see the connection to dependent types. But anyway, is ‘redef’ part of your language? What type would you give it?


I just wrote redef to emphasize that I'm not shadowing the original definition.

    def a := 1
    def f x := a * x
    -- at this point f 1 evaluates to 1
    redef a := 2
    -- at this point f 1 evaluates to 2
But with dependent types, types can depend on prior values (in the previous example the type of x depends on the value t in the most direct way possible, as the type of x is t). If you redefine values, the subsequent definitions may not type-check anymore.
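A minimal sketch of the same point in Lean (my choice of language and example, not the commenter's):

```lean
def n : Nat := 2

-- The type of `i` depends on the *value* of `n`: `Fin n` is the type
-- of natural numbers strictly less than `n`, so this needs 1 < 2.
def i : Fin n := ⟨1, by decide⟩

-- Destructively redefining `n := 1` would invalidate `i`:
-- the proof obligation becomes 1 < 1, and `i` no longer type-checks.
```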


I see what you mean. But would you not experience the same sort of issue simply from redefining types in the same way? It seems this kind of destructive operation (whether on types or terms) is the issue. As someone who's used to ML, it seems strange to allow this kind of thing (instead of simply shadowing), but maybe it's a Lisp thing?

