I don’t think this is a good comparison. Ada (on which Spark is based) has every safety feature and guardrail under the sun, while C++ (or C) has nothing.
Yes, I personally think so. In the hands of an experienced user you can crank out work that would otherwise take days or even weeks, and get to the meat of the problem you care about much quicker. Just churning out bespoke boilerplate code is a massive time saver, as is using LLMs to narrow in on docs, features, etc. Even high-level mathematicians are beginning to incorporate LLM use (early days, though).
I can't think of an example where an LLM will get in the way for 90% of the stuff people do. The remaining 10% will always be bespoke and need a human to drive it forward, since humans are the ones who create the demand for the code / work in the first place.
The problem is many users are not experienced. And the more they rely on AI to do their work, the less likely they are to ever become experienced.
An inexperienced junior engineer delegating all their work to an LLM is an absolute recipe for disaster, both for the coworkers and product. Code reviews take at least 3x as long. They cannot justify their decisions because the decisions aren't theirs. I've seen it first hand.
I agree totally; most people are not experienced, and there is a weird situation where the productivity gains are bifurcated. I have also seen a lot of developers unable to steer the LLM because they can't pick up on issues they would otherwise have learned to spot through experience. It will be interesting to see what happens, but it's probably gonna be a shit show for younger devs.
They must’ve had a really robust kind of CD wherever you lived, then. Like everyone else, I wore out a lot of discs simply by storing them outside their cases.
Did that work? I've heard everything already, from it being a wonder solution to it destroying the discs even further (if I had to guess, they used the kind of toothpaste with little abrasive stones in it?).
What is Fig. 1 showing? Is it the value of the integral compared with two approximations? Would it not be more interesting to show the error of the approximations instead? Asking for a friend who isn’t computing a lot of integrals.
Yeah - my guess is this was just a very roundabout solution for setting axis limits.
(For some reason, plt.bar was used instead of plt.plot, so the y axis would start at 0 by default, making all results look the same. But when the log scale is applied, the lower y limit becomes the data’s minimum. So, because the dynamic range is so low, the end result is visually identical to having just set y limits using the original linear scale).
Anyhow, for anyone interested, the values for those 3 points are 2.0000 (exact), 1.9671 (trapezoid), and 1.9998 (gaussian). The relative errors are 1.6% vs. 0.01%.
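If anyone wants to poke at this themselves, here's a rough sketch. The article's actual integrand isn't reproduced here, so I'm using the stand-in ∫_0^π sin(x) dx = 2, and the node counts are arbitrary; the point is just that printing relative errors makes the comparison obvious even when the raw values look nearly identical.

import numpy as np

# Stand-in example only: the integral of sin(x) over [0, pi] is exactly 2.
f = np.sin
exact = 2.0

# Composite trapezoid rule on a modest uniform grid.
x = np.linspace(0.0, np.pi, 9)
trap = np.trapz(f(x), x)

# 5-point Gauss-Legendre quadrature, rescaled from [-1, 1] to [0, pi].
nodes, weights = np.polynomial.legendre.leggauss(5)
gauss = (np.pi / 2) * np.sum(weights * f((nodes + 1) * np.pi / 2))

for name, value in [("trapezoid", trap), ("gauss", gauss)]:
    print(f"{name}: value = {value:.4f}, relative error = {abs(value - exact) / exact:.2%}")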
I just wrote redef to emphasize that I'm not shadowing the original definition.
def a := 1
def f x := a * x
-- at this point f 1 evaluates to 1
redef a := 2
-- at this point f 1 evaluates to 2
But with dependent types, types can depend on prior values (in the previous example the type of x depends on the value t in the most direct way possible, as the type of x is t). If you redefine values, the subsequent definitions may not type-check anymore.
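A rough sketch of what I mean (Lean-style syntax, using a made-up example rather than the one from upthread):

def n : Nat := 3

-- The type of v depends on the value of n: Fin n is the type of naturals below n,
-- and the anonymous constructor carries a proof that 2 < n.
def v : Fin n := ⟨2, by decide⟩

-- If n could later be destructively redefined to 2 (like the hypothetical redef
-- above), the proof 2 < n stored in v would no longer hold, so v would no longer
-- type-check.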
I see what you mean. But would you not experience the same sort of issue simply from redefining types in the same way? It seems this kind of destructive operation (whether on types or terms) is the issue. As someone who's used to ML, it seems strange to allow this kind of thing (instead of simply shadowing), but maybe it's a Lisp thing?