Hacker News | Mxbonn's comments

Now that PyTorch is also ending its Anaconda package distribution, I think a lot of ML/DS people should give uv a shot.


I had no idea this was happening. This comment just made my week. Thank God.


Used a Framework laptop for the last years of my PhD and would very much recommend it! I now got a MacBook from work and I have to say that I do prefer it, but it also comes at a higher cost. As I was no longer using my Framework, I turned it into my home server using this server kit: https://frame.work/be/en/products/framework-laptop-13-mainbo.... Any other laptop would have started gathering dust, but my Framework keeps being useful thanks to its modular design.


What happened to the text prompts that were shown as early results in SAM 1? I assume they never really got them working well?


I love Framework. I'd been using one as my main device during the last years of my PhD. However, now that I got a company MacBook I was using it less and less, so I decided to buy the Cooler Master mainboard case, and now I'm using it as a home server. I wouldn't have been able to do that with any other laptop.


I've been using the Claude model over the past months and I have to say I usually prefer it over ChatGPT.

Some things that impressed me in particular are:

* It can often give you working URLs of images related to your query (e.g. "give me some images to use with this blog paragraph").

* It can list relevant publications corresponding to paragraphs; ChatGPT often hallucinates new papers, but Claude consistently gives highly cited and relevant suggestions.

* It can work with PDF inputs.


You didn't say which version of GPT you prefer it over. My experience of Claude hasn't been good at all - I get obvious hallucinations all the time. Whereas I get pretty consistent accurate results with GPT-4.

I just tried again now and asked it about HTMX and Django since I had just been reading an article about it. Claude invented a package called `htmx.django` and a decorator that could be imported called `@hx_request`. This is typical of my experience with Claude.


Claude is bad at code and great at everything else; I thought that was pretty common knowledge by now. I second OP in that I prefer using it over OpenAI models these days.


Why didn't you let it implement it and start a new project?


I've been using Claude for months now, as has my dev team and a number of other people I know, and we don't have this problem at all. In my experience it hallucinates less. This is compared to ChatGPT with GPT-4. Anecdotal evidence is worthless, it seems.


Hallucination is stochastic.

Out of 1000 people flipping coins over and over again, on average people get a mix of hits and misses, but there will always be someone who gets 10 heads in a row and someone else with 10 tails in a row. This corresponds to some users being wowed by the "superintelligence" nailing the answer every time, and other users getting made-up citation after made-up citation.
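The coin-flip intuition above can be sketched numerically; a minimal simulation, assuming hypothetical parameters (1000 users, 10 independent hit-or-miss answers each, a 50/50 hit rate):

```python
import random

def streak_count(n_users=1000, n_flips=10, seed=0):
    """Count how many of n_users get all heads in n_flips fair coin flips."""
    rng = random.Random(seed)
    return sum(
        all(rng.random() < 0.5 for _ in range(n_flips))
        for _ in range(n_users)
    )

# P(10 heads in a row) = 0.5**10 ~ 1/1024, so among 1000 users we expect
# roughly one all-heads streak (and, by symmetry, one all-tails streak):
# one user convinced the model is infallible, one convinced it's useless.
print(streak_count())
```

Even with a perfectly uniform error rate, the tails of the distribution guarantee a few users whose entire experience is streaks.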


It hardly hallucinates anything if you give it the entire documentation for the libraries you want to use. It can ingest many hundreds of pages of documentation without a problem. Used like this, it is far more powerful than GPT-4 or any existing model.


I know some people say its extended context is more hackery than not, but I've found better success with Claude in roughing out some paragraphs before editing when I provide it a lot of context.


As a counterpoint, I've been actively assessing the feasibility of migrating my experiments to Claude, with not much success. Trying to migrate prompts from GPT-4 to Claude often yields half-baked results. Claude seems to require a lot more handholding to get the desired result.


In my information extraction prompts Claude is about as good as GPT-3.5 but slower. Its only advantage over OpenAI is context length. GPT-4 is better, 3.5 is faster.

But I am using it often to chat with PDFs and it works great for that. I would not want to give this ability up.


> But I am using it often to chat with PDFs and it works great for that. I would not want to give this ability up.

Side note: this would have gotten you a crackpot moniker a year ago. Chat with PDFs? Dude, what are you smoking? I need some too.

Perfectly normal today, in fact if you aren’t doing that you’re falling behind.


When using stuff like this, you let the cream rise to the top, then use it. It will surface when enough people/publications give it praise; otherwise you waste time evaluating too many models. It's not a fun task, as they can give you the illusion of being great at some things but end up overall worse. Right now GPT-4 is king, and there hasn't been any consistent chatter saying otherwise.


ChatGPT 3.5 or 4? The difference between them is much more massive than the "0.5" appears to suggest.


This is largely due to Meta maintaining an internal performance-oriented production version of CPython: https://github.com/facebookincubator/cinder .


Couldn't immediately find it, but who sponsors/pays for the compute?


Finishing PhD on efficient deep learning for computer vision.

Looking for a job as research engineer/scientist with anything related to ML efficiency or computer vision.

--

Location: Belgium

Remote: Yes

Willing to relocate: Possibly

Technologies: Deep learning, mainly PyTorch but knowledge of most of the stack.

Résumé/CV: https://maxim.bonnaerens.com/resume.pdf

Email: found in resume.


Alright, so nearly all the 'cheap' stars got detected, but did any of the 'premium' stars get detected?


I tried to look up some of those accounts and they don't seem to exist anymore.


"... a better project might be to read papers until you find something you’re really interested in that comes with clean code, and trying to implement an extension to it."

It doesn't have to be clean, but starting from a reproducible codebase at least gives you the guarantee that you have everything. Too often, papers omit details that are crucial for a successful reproduction.

