aix1's comments | Hacker News

Another potential factor at play is the accuracy of delivery. It is generally easier to deliver one quick dose accurately than daily doses over multiple weeks (due to patient positioning errors, the patient losing weight, soft tissues shifting, etc.).

The 42 -> 137 also jumped out at me. On the face of it, the associated improvement sure does sound like overfitting to the eval set.

Would love to hear more, if you are happy sharing!

I have a Metabo vacuum (ASA 30 H PC) and absolutely love it. What's bad about the ergonomics of yours?

I have a Sebo. The primary thing I dislike is the weight. You'd think something this heavy would have some kind of performance advantage, but it doesn't. I've seen battery-powered shit from Walmart suck harder than this machine does.

Matt Levine put it really well: "We will create God and then ask it for money."

The answer is easy: more corruption. No AI God needed

I was just reading Norbert Wiener's "The Human Use of Human Beings" (1950) and this quote gave me a good chuckle:

"One may get a remarkable semblance of a language like English by taking a sequence of words, or pairs of words, or triads of words, according to the statistical frequency with which they occur in the language, and the gibberish thus obtained will have a remarkably persuasive similarity to good English."
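What Wiener describes is essentially an n-gram generator. A minimal bigram sketch (the corpus and seed here are illustrative, not from Wiener's book):

```python
import random
from collections import defaultdict

# Wiener's "pairs of words" scheme: pick each next word according to
# how often it follows the previous one in a source text (a bigram model).
corpus = ("one may get a remarkable semblance of a language like english "
          "by taking a sequence of words according to the statistical "
          "frequency with which they occur in the language").split()

# Map each word to the list of words observed after it; repeated
# successors appear multiple times, so random.choice samples them
# in proportion to their frequency.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

random.seed(0)  # deterministic gibberish for demonstration
word = "a"
out = [word]
for _ in range(10):
    successors = follows.get(word)
    if not successors:
        break  # dead end: last word of the corpus
    word = random.choice(successors)
    out.append(word)

print(" ".join(out))
```

Swapping `zip(corpus, corpus[1:])` for triples gives Wiener's "triads of words" variant, which reads noticeably more like English at the cost of parroting longer stretches of the source.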


As the contest entry page explains:

> ChatIOCCC is the world’s smallest LLM (large language model) inference engine - a “generative AI chatbot” in plain-speak. ChatIOCCC runs a modern open-source model (Meta’s LLaMA 2 with 7 billion parameters) and has a good knowledge of the world, can understand and speak multiple languages, write code, and many other things. Aside from the model weights, it has no external dependencies and will run on any 64-bit platform with enough RAM.

(Model weights need to be downloaded using an enclosed shell script.)

https://www.ioccc.org/2024/cable1/index.html


Good reminder of the fact that an LLM is not a program.

Only every implementation of one is [through] a program?

Interestingly the UK Supreme Court ruled on this in the Emotional Perception AI case - though I'd need to check if that was obiter (not part of the legal ruling itself).


This sounded surprising and so I picked the first fuse I could find on RS and looked at its datasheet [1].

The characteristic curve shows that the 10A fuse is expected to blow after about 4s at 20A. Of course there's sample-to-sample variation and different ambient conditions etc, but how do those four seconds become "an hour to blow or not blow at all"?

[1] https://docs.rs-online.com/bc0e/0900766b81585c97.pdf
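Reading a value like "about 4s at 20A" off a log-log time-current curve can be automated with log-log interpolation. A sketch with made-up curve points (illustrative only, not taken from the linked datasheet):

```python
import math

# Hypothetical time-current points read off a fast-blow 10 A fuse's
# log-log characteristic curve: (current in A, melting time in s).
# These are illustrative values, not from the RS datasheet.
curve = [(15.0, 60.0), (20.0, 4.0), (30.0, 0.5), (50.0, 0.05)]

def blow_time(current_a):
    """Estimate melting time by straight-line interpolation in log-log space."""
    if current_a <= curve[0][0]:
        return float("inf")  # below the curve: may never blow
    for (i1, t1), (i2, t2) in zip(curve, curve[1:]):
        if i1 <= current_a <= i2:
            f = (math.log(current_a) - math.log(i1)) / (math.log(i2) - math.log(i1))
            return math.exp(math.log(t1) + f * (math.log(t2) - math.log(t1)))
    return curve[-1][1]  # beyond the last point: clamp

print(blow_time(20.0))  # ~4 s at 2x rated current, per the assumed curve
```

The `inf` branch captures the original point: near or below the minimum melting current, the curve goes vertical and "an hour or never" is plausible, but at a solid 2x overload the same curve predicts seconds, not hours.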


Fuses can vary a lot -- even amongst examples with the same ratings, from the same box (and presumably, the same production batch).

Dave Jones of EEVBlog fame did some experiments with this several years ago: https://www.youtube.com/watch?v=WG11rVcMOnY

(I'm not arguing for or against any concept here; I'm just presenting some non-datasheet data.)


An unusually dense road network?

edit: This page has some data: https://www.researchgate.net/figure/Landscape-metrics-for-ro...

Southern Ontario has 4x the road density of the province average, so that might be a contributing factor?


I obviously don't know anything about your situation, but the article specifically talks about:

"Lost baggage is when a passenger’s baggage is lost or goes missing due to an error on the part of the [Kansai] airport."

Perhaps in your case they considered the delay to have been due to somebody else's fault?

