I don't see any issues with the title of Figure 2.2, but the legend and the x-axis label have weird letter spacing indeed. It seems like images like this are standalone (https://github.com/rikhuijzer/phd-thesis/blob/main/images/pe...) and probably aren't generated by Typst. So perhaps the weird spacing is not Typst's fault.

Looks like the SVG was converted from an EPS file, and the resulting SVG contains individual glyph positions (advances) for the characters in "Personality score", but it doesn't specify a valid font, probably because the font name was mangled in the original EPS file (which is pretty typical).

So whether the resulting file looks right depends on whether the rendering engine chooses the correct font. Looks like it's supposed to be Nimbus Sans or something metric compatible with that, but the serif font chosen by Typst looks obviously wrong.
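
A quick way to check this sort of thing yourself (a sketch using only the Python stdlib; the file name is hypothetical): look for text/tspan elements whose x attribute lists one coordinate per glyph, and see which font they ask for.

    import xml.etree.ElementTree as ET

    SVG_NS = "{http://www.w3.org/2000/svg}"
    tree = ET.parse("personality.svg")  # hypothetical path

    for tag in ("text", "tspan"):
        for el in tree.iter(SVG_NS + tag):
            xs = (el.get("x") or "").replace(",", " ").split()
            if len(xs) > 1:
                # more than one x value means per-glyph advances are baked in,
                # so correct rendering depends entirely on picking the right font
                print(el.get("font-family"), len(xs), "positions")

If the font name doesn't resolve, the renderer substitutes something else, but the glyphs keep the original font's offsets, hence the uneven spacing.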


Emotional support. Some human doctors absolutely radiate confidence and a kind of "you're gonna be okay" attitude. For me, this helps a lot. I'm not sure a machine can do this.

But I hate it when the human doctor "radiates confidence" while I know he is not doing the proper scan, because then I have to come back with worse symptoms before he takes it seriously. I don't need emotional support from a human doctor. I need the adequate scans and a proper analysis. I am pretty sure that a competent human will still be way better than AI, but even now, AI will likely be better than a doctor who isn't really paying attention.

You can hopefully get emotional support from your loved ones. If not, a coach seems much more appropriate.

"Human problems can't be solved with technology" is just wrong, unless you have narrower definitions of a "human problem" or "technology".

For instance, transportation is a "human problem". It's being successfully solved with technologies such as cars, trains, planes, etc. Growing food at scale is a "human problem" that's being successfully solved by automation. Computing... stuff could be a "human problem" too, and it's being successfully solved by computers. If "human problems" are more psychological, then again, you can use the Internet to keep in touch with people, so that's again technology trying to solve a human problem.


I think you may be misunderstanding the concept of a 'human problem'. A human problem is caused by humans; it isn't something like transportation, which is a physics problem. An example of a human problem is cheating: you can't solve cheating with technology. Just add [incentive] after "human" and it should make more sense.

IMO "human problem" isn't a well-defined concept, so it's not really possible to misunderstand it. I think a "human problem" is a problem that _humans have_: how to move around? (transportation) what to eat? (agriculture, etc) how to prevent cheating? (some kind of surveillance) how to communicate over long distances? (radio, the internet, etc)

Sure, some kinds of such "human problems" can be reduced to physics and technology; that's the point. This also doesn't necessarily mean that solutions produced by such reductions are effective: is surveillance good at preventing cheating during exams? Kind of. Does it often fail to catch cheating students? Absolutely.

However, indeed, there can be many different (perhaps equally correct) definitions of what a "human problem" is.


It's still true that softmax transforms arbitrary vectors into probability vectors.

In your example you'll also get the original `p` with just `exp(logits)`. Softmax normalizes the output to sum to one, so it can output a probability vector even if the input is _not_ simply `log(p)`.
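
A minimal sketch of both points (plain NumPy; the softmax helper here is my own, not from any library):

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())  # subtract the max for numerical stability
        return e / e.sum()

    p = np.array([0.1, 0.2, 0.7])
    logits = np.log(p)

    print(np.exp(logits))         # recovers p directly, no normalization needed
    print(softmax(logits))        # also p, since p already sums to one
    print(softmax(logits + 5.0))  # still p: input isn't log(p), output is still a probability vector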


I thought Zed was using tree-sitter: https://zed.dev/blog/syntax-aware-editing? Shouldn't it address all of these issues? Does tree-sitter not understand Python (basically the most popular language out there) and Rust "beyond superficial syntax"? I thought its whole point was that it understands everything about a language's syntax because it builds a concrete syntax tree?

Tree-sitter is an amazing tool, but it's (purposefully) quite limited compared to an IDE: it doesn't even cross file boundaries, so go-to-definition is a non-starter. Zed uses LSPs like Rust Analyzer to fill that role.
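
A sketch of what I mean (assuming the py-tree-sitter >= 0.22 bindings and the tree_sitter_python grammar package; the API differs a bit across versions):

    import tree_sitter_python as tspython
    from tree_sitter import Language, Parser

    # older py-tree-sitter versions used parser.set_language(...) instead
    parser = Parser(Language(tspython.language()))
    tree = parser.parse(b"from utils import helper\nhelper()\n")

    # the tree captures this one file's syntax completely...
    print(tree.root_node)
    # ...but nothing in it says where `helper` is defined: that lives
    # in utils.py, which tree-sitter never sees

That's why go-to-definition, rename, type errors, etc. get delegated to a language server.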

IMO "thinking" here means "computation", like running matrix multiplications. Another view could be: "thinking" means "producing tokens". This doesn't require any proof because it's literally what the models do.

As I understand it, the claim is: more tokens = more computation = more "thinking" => answer probably better.


I don't agree with GP's take on anthropomorphising[0], but in this particular discussion, I meant something even simpler by "thinking" - imagine it more like manually stepping a CPU, or powering a machine by turning a crank. Each output token is kinda like a clock signal, or a full crank turn. There's lots of highly complex stuff happening inside the CPU/machine - circuits switching, gears turning - but there's a limit to how much of it can happen in a single cycle.

Say that limit is X. This means that if your problem fundamentally requires at least Y compute to be solved, your machine will never give you a reliable answer in fewer than ceil(Y/X) steps.

LLMs are like this - a loop is programmed to step the CPU/turn the crank until the machine emits a magic "stop" token. So in this sense, asking an LLM to be concise means reducing the amount of compute it can perform, and if you insist on it too much, it may stop so early as to have been fundamentally unable to solve the problem in the computational space allotted.

This perspective requires no assumptions about "thinking" or anything human-like happening inside - it follows just from time and energy being finite :).
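
A toy sketch of that loop (all names here are made up for illustration):

    STOP = "<stop>"

    def generate(model, prompt, max_steps=1000):
        tokens = list(prompt)
        for _ in range(max_steps):   # each iteration = one crank turn
            token = model(tokens)    # a fixed amount X of compute per call
            if token == STOP:        # the model decides to halt
                break
            tokens.append(token)
        return tokens

Pressuring the model to be concise pushes the stop token earlier, which directly cuts the total compute spent, regardless of how many crank turns the problem actually needed.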

--

[0] - I strongly think the industry is doing itself a huge disservice by avoiding anthropomorphizing LLMs: treating them as "little people on a chip" is the best high-level model we have for understanding their failure modes and their role in larger computing systems. Instead, tons of people are wasting their collective efforts trying to fix the "lethal trifecta" as if it were a software bug and not a fundamental property of what makes LLMs interesting. Already wrote more on it in this thread, so I'll stop here.


This is the OP promoting their project — makes sense to me


Apple: here's an affordable laptop. This comment: but the poor kids are going to feel inferior to the rich kids with this affordable laptop! Of course the poor kids are going to get cheaper & slower computers, cheaper clothes, etc. And they won't feel great about it because being poor isn't great.

But now they'll have more options! If they like Apple, they'll have a (likely pretty good) Apple laptop! It's great! I think a more affordable Mac is _good_ (at least better than no affordable Mac) and will make the poor kids happier.


I found this paper (https://www.cs.uni-potsdam.de/bs/research/docs/papers/2025/l...) from around 2025 (it cites papers from 2025), which shows that the Julia version of SRAD (along with some other benchmarks) is about 5 times slower than the slowest Fortran implementation and consumes at least 5 times more energy; see Table 4 and Figure 1. This paper, however, doesn't seem to be peer-reviewed.


Yes, that's the paper my predecessors worked on! I replicated the measurements with an upgraded version of Julia (1.12), but despite the claimed performance benefits, Julia still performed poorly.

