Plugins can't currently get the shader pixels, but only because I haven't added that to the plugin protocol yet. Interestingly, though, shaders do have access to the terminal contents in the form of a pixelated version of the text, as well as the mouse and cursor positions. So maybe there's something you could do purely in a shader.
I think a more pedantic way to describe what I mean is:
"What if we could compile Python into raw native code *without having a Python interpreter*?"
The key distinguishing feature of this compiler is that it can produce standalone, cross-platform native binaries from Python code. Numba, by contrast, will fall back to the Python interpreter for code that it can't JIT-compile.
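For context, a minimal sketch of that fallback behavior, assuming a recent version of Numba (`forceobj=True` explicitly requests the interpreter-backed object mode):

```python
import numpy as np
from numba import njit, jit

@njit  # nopython mode: compiled to native code, or a TypingError if it can't be
def native_sum(arr):
    total = 0.0
    for x in arr:
        total += x
    return total

@jit(forceobj=True)  # object mode: runs through Python-object calls instead
def flexible(x):
    return x.upper()  # arbitrary Python objects and methods are fine here

print(native_sum(np.arange(5.0)))  # 10.0
print(flexible("hello"))           # HELLO
```

A standalone-binary compiler has no such escape hatch, which is exactly why it can ship without an interpreter.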
Efficient in the sense of bringing the model to meet the criteria for autonomy faster. On one hand, it may be something specifically efficient at reaching certain autonomy qualities. On the other hand, it could just be something that uses the model's improvements during training efficiently, making subsequent training faster.
I mostly use tools like this for data exploration in Jupyter notebooks and PyVista was fine when I tried it out last year. However, I found I could get results much faster with Vedo[0].
No hate though, I'm so glad there are options in this space!
We came across this at one point and thought it was a very innovative and interesting package!
The important design point we're differing on is that Pyper implements 'pipelines' as functions, whereas pypeln seems to implement 'pipelines' as iterable objects.
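To make the distinction concrete, here's a toy sketch of the two designs; this is illustrative only, not either library's actual API:

```python
# Design A (function style): stages compose into a new callable,
# and you invoke the result like any other function.
def compose(*stages):
    def pipeline(items):
        for stage in stages:
            items = map(stage, items)
        return items
    return pipeline

# Design B (iterable style): each stage wraps the previous iterable,
# and nothing runs until you consume it.
class Stage:
    def __init__(self, fn, source):
        self.fn, self.source = fn, source

    def __iter__(self):
        for item in self.source:
            yield self.fn(item)

double = lambda x: x * 2
fmt = lambda x: f"value={x}"

pipe = compose(double, fmt)                   # function style
print(list(pipe([1, 2, 3])))                  # ['value=2', 'value=4', 'value=6']

stage = Stage(fmt, Stage(double, [1, 2, 3]))  # iterable style
print(list(stage))                            # ['value=2', 'value=4', 'value=6']
```

The function style gives you a reusable pipeline you can call with different inputs; the iterable style makes each pipeline a one-shot stream you consume like any other iterator.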
Your point aside, counting letters is not a good task for LLMs, since they aren't trained on letters. They are trained on tokens, which represent words or parts of words.
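You can see this directly with a tokenizer. A quick sketch using OpenAI's tiktoken library (the exact split varies by tokenizer, so the specific chunks here are not guaranteed):

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a common OpenAI tokenizer
ids = enc.encode("strawberry")

# The model only ever sees these token IDs, never individual letters.
print(ids)
print([enc.decode([i]) for i in ids])  # typically a few multi-letter chunks
```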
Preferred by anyone who's actually using and modifying the work.
No one retrains an existing model from scratch, even those who have access to all of the data to do so. There's just no compelling reason to retrain a model to make a change when you already have the weights; fine-tuning is preferred by everyone.
The only people I've seen who've asserted otherwise are random commenters on the internet who don't really understand the tech.
> Preferred by anyone who's actually using and modifying the work.
> ...fine-tuning is preferred by everyone
How do you know this? Did you take a survey? When? What if preferences change or there is no consensus?
> The only people I've seen who've asserted otherwise are random commenters on the internet who don't really understand the tech.
There are lots of things that can be done with the training set that don't involve retraining the entire model from scratch. As a random example, I could perform a statistical analysis over a portion of the training set and find a series of vectors in token-space that could be used to steer the model. Something like this can already be done without access to the training data, but does it work better with that access? We don't know, because it hasn't been tried yet.
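For what it's worth, the mean-difference flavor of that idea (often called a steering or control vector) looks roughly like the sketch below. The activation matrices are random stand-ins for hidden states collected from a real model, and the layer-injection step is only gestured at:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # stand-in for a model's hidden dimension

# Pretend activations gathered while the model processes two contrasting
# prompt sets (e.g. completions that do vs. don't exhibit some trait).
acts_with_trait = rng.normal(0.5, 1.0, size=(200, d))
acts_without_trait = rng.normal(-0.5, 1.0, size=(200, d))

# The steering vector is the difference of the mean activations.
steer = acts_with_trait.mean(axis=0) - acts_without_trait.mean(axis=0)
steer /= np.linalg.norm(steer)

# At inference time, add a scaled copy to the hidden state at some layer
# to nudge the model toward (positive alpha) or away from (negative) the trait.
hidden_state = rng.normal(size=d)
alpha = 4.0
steered = hidden_state + alpha * steer
```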
But none of that really matters, because what we're discussing is the philosophy of open source. I think it's a really bad take to say that something is open source because it's in a "preferred" format.
Is there any way for plugins to interact with shaders?