Hacker News | boccaff's comments

llama models pushed the envelope for a while, and having them "open-weight" allowed a lot of tinkering. I would say that most fine-tuned models evolved from work on top of llama models.

Llama wasn’t Yann LeCun’s work and he was openly critical of LLMs, so it’s not very relevant in this context.

Source: himself https://x.com/ylecun/status/1993840625142436160 (“I never worked on any Llama.”) and a million previous reports and tweets from him.


He founded FAIR and the team in Paris that ultimately worked on the early Llama versions.

FAIR was founded in 2015 and Llama's first release was in 2023. Musk co-founded OpenAI in 2015 but no reasonable person credits ChatGPT in 2022 to him.

> My only contribution was to push for Llama 2 to be open sourced.

Quite a big contribution in practice.


Sure, but I don't think that's relevant in a startup with 1B of VC money either. Meta can afford to (attempt to) commoditize their complement.

tree algorithms in sklearn use parallel arrays to represent the tree structure.
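A minimal sketch of what that looks like, using sklearn's public `tree_` attribute (the toy data here is just for illustration):

```python
# sklearn stores a fitted tree as parallel arrays, all indexed by node id.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
t = clf.tree_

# Node 0 is the root; leaves have children_left/children_right == -1.
print(t.node_count)       # total number of nodes
print(t.children_left)    # left child id per node
print(t.children_right)   # right child id per node
print(t.feature)          # split feature per node (negative for leaves)
print(t.threshold)        # split threshold per node
```

Walking the tree is then just chasing indices through these arrays instead of following pointers.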

short answer: No.

longer answer: Random forests average multiple trees that are trained in a way that reduces the correlation between them (bagging with modified trees). Boosting trains trees sequentially, with each new tree fit to the residuals of the ensemble so far.

I am assuming that you meant boosted decision trees, sometimes called gradient boosted decision trees, since boosting is usually applied to decision trees. I think xgboost added boosted random forests, and you can boost any supervised model, but it is not common.
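The residual-fitting loop can be sketched by hand; this is a toy regression example with squared loss (where the negative gradient is just the residual), not how xgboost actually implements it:

```python
# Hand-rolled gradient boosting for regression:
# each new tree is fit to the residuals of the current ensemble.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, size=200)

pred = np.zeros_like(y)          # start from a zero prediction
lr, trees = 0.1, []
for _ in range(50):
    residual = y - pred          # what the ensemble still gets wrong
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)
    pred += lr * tree.predict(X) # shrink each tree's contribution
    trees.append(tree)

mse = np.mean((y - pred) ** 2)
```

Contrast with a random forest, where every tree is fit to (a bootstrap sample of) the same targets `y` and the predictions are simply averaged.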


Do you have any pointer to search for that?


You can see aggregated results on the `stats` page [1] for every year. In general, half the people drop out in the first 3-4 days. For last year, by day 12 there were fewer than 1/5 of the day 01 participants. While the stats do count people who completed later, the shape tracks well with what I saw during the events since 2021.

[1] https://adventofcode.com/2024/stats


I am not aware of Eric saying something about that alternative, but this comment on reddit[1] makes a lot of sense to me:

> Given that part 2 is often a very simple modification of part 1, this could lead to many of the days being total letdowns. I can enjoy a simple puzzle, but I'd be a bit disappointed if one day is a single line change to the previous day.

I'd also add that not having to worry every day about something going wrong makes a lot of sense. He can have fewer days "on call" in December.

[1] https://www.reddit.com/r/adventofcode/comments/1ocwh04/chang...


Plus, I like doing Part 2 immediately after Part 1, cause then I don't have to remember how my solution worked.


You could think about how most people can get away without doing anything physical to survive, so we must artificially exercise to be healthy. The question then is whether this analogy holds for mental capacities, and I think it does.


hard, and expensive, but doable as long as carbon credits are a thing: https://re.green/en/?force_locale=1

there are a few others in Brazil, like Biomas and Mombak


>2. be very careful when using rm command (use alias rm='rm -i' )

and treat mv/cp/rsync like rm
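A possible set of guards for a `~/.bashrc`, assuming GNU coreutils. Note rsync has no interactive mode, so the closest equivalent is previewing with a dry run first:

```shell
# -i prompts before deleting or overwriting an existing file.
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# rsync: preview what would change before running for real, e.g.
#   rsync -avn src/ dst/    # -n = --dry-run
```

Answering `n` at a `cp -i`/`mv -i` prompt leaves the destination untouched.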


