It may be worth taking a look at LFM [1]. I haven't had the need to use it so far (I run on Apple silicon day to day, so my daily drivers are usually the 30B+ MoEs), but I've heard good things from folks on the internet using it as a daily driver on their phones. YMMV.
The real problem vdom and more complex frameworks solve for me is dealing with much more complex state, e.g. lists.
When dealing with lists there are so many possible ways of updating them (full replacement, insertion/removal at an index, update at an index, ...) that manually mounting and unmounting individual items gets unbearable. You then need some kind of diffing at the framework level to get both good performance and readable code.
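To make the pain concrete, here is a naive keyed-reconciliation sketch (hypothetical helper, not any particular framework's API) of the kind of diffing a framework does for you:

```js
// Sync a parent element's children to a keyed data array.
// `render(item)` is a hypothetical user-supplied factory; keys are
// stored in data-key attributes for simplicity.
function syncList(parent, items, render) {
  const byKey = new Map();
  for (const el of parent.children) byKey.set(el.dataset.key, el);

  let prev = null;
  for (const item of items) {
    const key = String(item.key);
    let el = byKey.get(key);
    if (el) {
      byKey.delete(key);          // item survives this update
    } else {
      el = render(item);          // item is new: mount it
      el.dataset.key = key;
    }
    // Move (or append) only if the node is not already in position.
    const desired = prev ? prev.nextSibling : parent.firstChild;
    if (el !== desired) parent.insertBefore(el, desired);
    prev = el;
  }
  for (const el of byKey.values()) el.remove(); // unmount removed items
}
```

Even this naive version has to juggle mount, move, and unmount; real frameworks layer longest-increasing-subsequence tricks on top to minimize DOM moves, which is exactly the machinery I don't want to hand-roll per list.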
I would like to see "VanillaJS" articles discuss this problem more often and in more depth.
I think we are just used to it. Like we are used to so many suboptimal solutions in our professional and personal lives.
I mean, look at something like C++, or the name "std::vector" specifically. There are probably 4 trillion LoC containing it out there, in production. I'm used to it; that doesn't make it good.
I don't get how this would be more "AI friendly" than other frameworks; that kind of claim should be backed by more concrete evidence. I know this is something of an open problem, but at least show me it can be generated reliably by common models without an enormous reference prompt.
Another thing is that this looks like any other framework out there. I think you can map almost every one of its features 1:1 to SolidJS. What is the novelty here? The slightly changed JS syntax with "component", "@" and "#"?
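For reference, here is a trivial SolidJS counter, a sketch of the feature set (signals plus derived values) that Ripple's examples seem to map onto:

```jsx
import { createSignal, createMemo } from "solid-js";

function Counter() {
  const [count, setCount] = createSignal(0);     // reactive state
  const doubled = createMemo(() => count() * 2); // derived value
  return (
    <button onClick={() => setCount(count() + 1)}>
      {count()} (doubled: {doubled()})
    </button>
  );
}
```

If the Ripple version of this is essentially the same reactive graph with different punctuation, that supports the 1:1 mapping point.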
I would like to see more radical and new ideas in the JS space, especially in this period. Maybe a new take on Elm to get stronger UI stability guarantees. Or even just some very good tooling for reasoning about very large reactivity graphs at runtime and (maybe also at) compile time.
That said, I still appreciate the work, and in particular all the effort spent making the new syntax work in all common editors; I see they support VS Code, IntelliJ, Sublime, ...
> I don't get how this would be more "AI friendly" than other frameworks; that kind of claim should be backed by more concrete evidence.
Most if not all LLMs produce Markdown rather than HTML as their primary output. Markdown has a simpler syntax that uses fewer tokens than HTML for the same content. Similarly, Ripple appears to express a complex structure in simpler terms than React or raw HTML. No wonder most AI dev tools operate in React, with web previews abstracting away the setup process.
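A crude way to see the verbosity gap (character counts as a rough proxy; an actual tokenizer will give different numbers, but the ratio is similar):

```js
// The same document in Markdown and in HTML.
const md = "# Title\n\n- one\n- two\n\n**bold** and [link](https://example.com)";
const html =
  '<h1>Title</h1><ul><li>one</li><li>two</li></ul>' +
  '<p><strong>bold</strong> and <a href="https://example.com">link</a></p>';

console.log(md.length, html.length); // 62 118: the HTML is nearly twice as long
```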
Higher abstractions appear to be cost-efficient (both at training and at inference time, i.e. output generation). All that is required is to provide the model with a document containing the rules of Ripple (in this case) and go from there, much like llms.txt or agent.md or simply documentation. Any DSL would fit in a single file and be easily consumed by a model.
The Erdős Problems website says the theorem is formalized in Lean, but in the Mathlib project there is just the theorem statement with a sorry. Does anyone know where I can find the Lean proof? Maybe it's in some pull request I didn't find.
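For anyone unfamiliar with what that looks like: a statement can typecheck and be merged while its proof is stubbed out. A toy Lean 4 illustration (not the actual Erdős statement):

```lean
-- The statement compiles and can be cited, but `sorry` means no proof exists yet.
-- Toy example; the real entry would state the actual Erdős problem.
theorem toy_statement (n : Nat) : n + 0 = n := by
  sorry
```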
The compression tricks used in standalone JavaScript demos are significantly more cursed. The meta is to concatenate a compressed binary payload and an HTML/JS decompression stub in the same file, abusing the fact that HTML5 parsers are required to be absurdly tolerant of malformed documents. Nowadays it's done using raw DEFLATE and DecompressionStream, but before that was available they would pack the payload into the pixels of a PNG and use Canvas to extract the data.
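Roughly, the modern DecompressionStream variant looks like this (a readable sketch; real entries golf it far harder, and the payload here would be the raw DEFLATE bytes appended after the marker comment):

```html
<!-- Sketch of a self-extracting demo. Everything after the final PAYLOAD
     marker is raw DEFLATE data; the HTML parser tolerates the garbage bytes. -->
<script>
const MARKER = "<!--PAYLOAD-->";
fetch(location.href)                  // re-read this very file as bytes
  .then(r => r.arrayBuffer())
  .then(async buf => {
    const bytes = new Uint8Array(buf);
    // windows-1252 decodes one byte per character, so string index == byte index
    const text = new TextDecoder("windows-1252").decode(bytes);
    // lastIndexOf skips the MARKER string literal above and finds the real marker
    const start = text.lastIndexOf(MARKER) + MARKER.length;
    const code = await new Response(
      new Blob([bytes.slice(start)]).stream()
        .pipeThrough(new DecompressionStream("deflate-raw"))
    ).text();
    (0, eval)(code);                  // run the unpacked demo
  });
</script>
<!--PAYLOAD-->
```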
I would actually merge HTML and JS into a single language and bring in the layout part of CSS too (something like having grid and flexbox be elements themselves instead of display styles; Typst kind of showed this is possible in a nice way), and keep CSS only for the styling part.
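A hypothetical sketch of what I mean (invented Grid/Flex elements, not a real framework):

```jsx
// Hypothetical syntax: layout primitives are elements rather than
// `display:` styles, and CSS classes carry only pure styling.
function Page({ items }) {
  return (
    <Grid columns={3} gap="1rem">
      {items.map(item => (
        <Flex direction="column" align="center" key={item.id}>
          <h3 class="title">{item.name}</h3>
          <p class="muted">{item.blurb}</p>
        </Flex>
      ))}
    </Grid>
  );
}
```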
I know this is a bit out of scope for these image-editing models, but I always try this experiment [1]: draw a "random" triangle, then ask for some geometric construction on it, and they mess up in very funny ways. These models can't "see" very well. I think [2] is still very relevant.