grandma_tea's comments

Oh wow, now I can finally have a terminal that creates fire effects the faster I type! (If I ever get the time to make the plugin.)

Is there any way for plugins to interact with shaders?


More intense flames the more you type, great idea!

Plugins can't currently get the shader pixels, but that's only because I haven't added them to the plugin protocol yet. Interestingly, though, shaders do have access to the terminal contents in the form of a pixelated version of the text, plus the mouse and cursor positions. So maybe there's something you could do purely in a shader.


I'm with you. I thought this was going to be an article about Numba.


I think a more pedantic way to describe what I mean is:

"What if we could compile Python into raw native code *without having a Python interpreter*?"

The key distinguishing feature of this compiler is being able to make standalone, cross-platform native binaries from Python code. Numba will fall back to the Python interpreter for code that it can't JIT-compile.
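
To make that concrete, here's a minimal sketch of the Numba model (assuming numba and numpy are installed): the jitted function runs as native code, but only inside a live Python process, never as a standalone binary.

    import numpy as np
    from numba import njit

    @njit  # nopython mode: compiled to native code, no interpreter in the hot loop
    def dot(xs, ys):
        total = 0.0
        for i in range(xs.shape[0]):
            total += xs[i] * ys[i]
        return total

    print(dot(np.arange(3.0), np.arange(3.0)))  # 5.0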


Can you expand on that? Efficient in what way?


Efficient in the sense of bringing the model to meet the criteria for autonomy faster. On one hand, it may be something specifically efficient at reaching certain autonomy qualities. On the other, it could simply be something that efficiently uses the model's improvement during training to make subsequent training faster.


I mostly use tools like this for data exploration in Jupyter notebooks and PyVista was fine when I tried it out last year. However, I found I could get results much faster with Vedo[0].
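
For example, a quick look at a mesh in a notebook cell goes roughly like this (a minimal sketch; I'm going from memory on the exact options):

    # Quick mesh exploration in a notebook (assumes vedo is installed).
    from vedo import Sphere, show

    mesh = Sphere(res=24)  # a simple test mesh
    show(mesh, axes=1)     # opens an interactive 3D view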

No hate though, I'm so glad there are options in this space!

[0]https://vedo.embl.es/


Nice! I'm looking forward to trying it out. This seems very similar to https://github.com/cgarciae/pypeln/


We came across this at one point and thought it was a very innovative and interesting package!

The important design point we're differing on is that Pyper implements 'pipelines' as functions, whereas pypeln seems to implement 'pipelines' as iterable objects.
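
Roughly, in plain Python (schematic only, not either library's real API):

    # Schematic contrast in plain Python; not either library's actual API.

    # "Pipeline as function": composing stages yields a callable
    # that you apply to a source.
    def pipeline(source):
        doubled = (x * 2 for x in source)
        return (x + 1 for x in doubled)

    print(list(pipeline(range(5))))  # [1, 3, 5, 7, 9]

    # "Pipeline as iterable": each stage wraps the previous one into
    # a new iterable object that you consume directly.
    stage1 = (x * 2 for x in range(5))
    stage2 = (x + 1 for x in stage1)
    print(list(stage2))              # [1, 3, 5, 7, 9]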


Do you have any example prompts or suggestions for coming up with them?


Absolutely! I just picked up a 2023 for $17k. It's basically the perfect commuter car.


Your point aside, counting letters is not a good task for LLMs, since they aren't trained on letters. They are trained on tokens, which represent words or parts of words.
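
For example (a sketch assuming the tiktoken package; exact token boundaries vary by model):

    # How a BPE tokenizer splits a word (assumes tiktoken is installed).
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("strawberry")
    print([enc.decode_single_token_bytes(t) for t in tokens])
    # e.g. [b'str', b'aw', b'berry'] -- no individual letters in sight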


In other words, they don't really understand "language" and its constituent parts.


Preferred by whom? It sounds like these people have a strong say in what constitutes open source.


Preferred by anyone who's actually using and modifying the work.

No one trains an existing model from scratch, even those who have access to all of the data to do so. There's just no compelling reason to retrain a model to make a change when you already have the weights; fine-tuning is preferred by everyone.

The only people I've seen who've asserted otherwise are random commenters on the internet who don't really understand the tech.


> Preferred by anyone who's actually using and modifying the work.

> ...fine-tuning is preferred by everyone

How do you know this? Did you take a survey? When? What if preferences change or there is no consensus?

> The only people I've seen who've asserted otherwise are random commenters on the internet who don't really understand the tech.

There are lots of things that can be done with the training set that don't involve retraining the entire model from scratch. As a random example, I could perform a statistical analysis over a portion of the training set and find a series of vectors in token-space that could be used to steer the model. Something like this can be done without access to the training data, but does it work better? We don't know because it hasn't been tried yet.
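
A toy version of what I have in mind, with placeholder arrays standing in for real transformer activations:

    # Toy sketch of a steering vector: the mean difference between hidden
    # activations collected on two contrasting slices of a dataset.
    import numpy as np

    rng = np.random.default_rng(0)
    acts_a = rng.normal(size=(1000, 768))        # placeholder activations, slice A
    acts_b = rng.normal(size=(1000, 768)) + 0.1  # placeholder activations, slice B

    steer = acts_b.mean(axis=0) - acts_a.mean(axis=0)

    # At inference time you'd add alpha * steer to a hidden state at the
    # chosen layer to push generations toward slice B's character.
    alpha = 4.0
    hidden = rng.normal(size=768)  # placeholder hidden state
    steered_hidden = hidden + alpha * steer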

But none of that really matters, because what we're discussing is the philosophy of open source. I think it's a really bad take to say that something is open source because it's in a "preferred" format.


> I think it's a really bad take to say that something is open source because it's in a "preferred" format.

Preferred form and under a free license. Llama isn't open source, but that's because the license has restrictions.

As for whether it's a bad take that the preferred form matters: take it up with the GPL; I'm just using its definition:

> The “source code” for a work means the preferred form of the work for making modifications to it.


Today, the weights may indeed be the preferred form, due to the cost. Are you going to change the definition tomorrow, when the cost drops?


Sure, why not?


A good definition should not depend on a transient state of affairs.


It's tough because proprietary software also has risks.

See the Unity license change fiasco. https://www.engadget.com/unity-apologizes-and-promises-to-ch...

