aziis98's comments | Hacker News

I hope we get good A1B models, as I'm currently GPU poor and can only do inference on CPU for now.

It may be worth taking a look at LFM [1]. I haven't had the need to use it so far (I run on Apple silicon day to day, so my dailies are usually the 30B+ MoEs), but I've heard good things from folks using it as a daily driver on their phones. YMMV.

[1] https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct


This simply solved icons for me

The real problem a vdom and more complex frameworks solve for me is dealing with much more complex state, i.e. lists.

When dealing with lists there are so many possible ways of updating them (full updates, insertion/removal at an index, update at an index, ...) that mounting and unmounting individual items by hand gets unbearable. You then need some kind of diffing at the framework level to get good performance and readable code.
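As a point of reference, here is a minimal sketch of keyed reconciliation in vanilla JS (the key field, the data-key attribute and the render callback are my own invention for illustration, not any particular framework's API):

    // Minimal keyed reconciliation sketch: sync a container's children with a
    // list of items, reusing existing DOM nodes by key instead of re-rendering.
    function syncList(container, items, render) {
      // Index the current element children by their data-key attribute.
      const existing = new Map(
        [...container.children].map((el) => [el.dataset.key, el])
      );

      let previous = null; // last node placed in the desired order
      for (const item of items) {
        let el = existing.get(String(item.key));
        if (!el) {
          el = render(item);                 // mount a node for a new item
          el.dataset.key = String(item.key);
        } else {
          existing.delete(String(item.key)); // node is kept and reused
        }
        // Move/insert so the DOM order matches the list order.
        const anchor = previous
          ? previous.nextElementSibling
          : container.firstElementChild;
        if (el !== anchor) container.insertBefore(el, anchor);
        previous = el;
      }

      // Unmount anything that is no longer in the list.
      for (const el of existing.values()) el.remove();
    }

Real frameworks do more than this (batching, minimizing the number of moves), but key-based reuse is the core of it; doing even this much by hand for every list in an app is exactly what gets unbearable.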

I would like to see "VanillaJS" articles discuss this problem more often and in more depth.


Still, the whole world runs on GC'd languages, so it must be an abstraction at least some people like working with.

And I'm pretty sure that in some cases using a GC is the only option if you don't want to go crazy.


I think we are just used to it. Like we are used to so many suboptimal solutions in our professional and personal lives.

I mean, look at something like C++, or the name "std::vector" specifically. There are probably 4 trillion LoC containing that identifier out there, in production. I'm used to it; that doesn't make it good.


I don't get how this would be more "AI friendly" than other frameworks; that kind of claim should be backed by more concrete evidence. I know this is kind of an open problem, but at least show me that it can be easily generated by common models without an enormous reference prompt.

Another thing is that this looks like any other framework out there. I think you can map almost every one of its features 1:1 to SolidJS. What is the novelty here? The slightly changed JS syntax with "component", "@" and "#"?

I would like to see more radical new ideas in the JS space, especially right now. Maybe a new take on Elm to get stronger UI stability guarantees. Or even just some very good tooling for reasoning about very large reactivity graphs at runtime and (maybe also) at compile time.

That said, I still appreciate the work, and in particular all the effort spent making the new syntax work in all common editors; I see they support VS Code, IntelliJ, Sublime, ...

Edit: the documentation actually does provide an llms.txt: https://www.ripplejs.com/llms.txt


>I don't get how this would be more "AI friendly" than other frameworks; that kind of claim should be backed by more concrete evidence.

Most if not all LLMs produce Markdown rather than HTML as their primary output; Markdown's simpler syntax uses fewer tokens than HTML. Similarly, Ripple appears to express complex structure in simpler terms than React or plain HTML. No wonder most AI dev tools operate in React, with web previews abstracting away the setup process.

Higher abstractions appear to be cost efficient (at both training and inference time, i.e. output generation). All that's required is to give the model a document containing the rules of Ripple (in this case) and go from there, something like an llms.txt, an agent.md, or simply the documentation. Any DSL would fit in a single file and be easily consumed by a model.


shorter syntax != higher level of abstraction

The Erdős Problems website says the theorem is formalized in Lean, but in the mathlib project there is just the theorem statement with a sorry. Does anyone know where I can find the Lean proof? Maybe it's in some random pull request I didn't find.

Edit: Found it here https://github.com/plby/lean-proofs/blob/main/src/v4.24.0/Er...


There are many made with eval/unescape/escape tricks that feel a bit like cheating. Still, all very impressive.


The compression tricks used in standalone JavaScript demos are significantly more cursed. The meta is to concatenate a compressed binary payload and an HTML/JS decompression stub in the same file, abusing the fact that HTML5 parsers are required to be absurdly tolerant of malformed documents. Nowadays it's done using raw DEFLATE and DecompressionStream, but before that was available, demos would pack the payload into the pixels of a PNG and use a canvas to extract the data.
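For the modern variant, the decompression side is surprisingly small. A rough sketch (just the inflate step; the payload packing and the surrounding HTML stub are the actually cursed part), assuming payload is a Uint8Array of raw-DEFLATE bytes:

    // Inflate a raw-DEFLATE payload in the browser with DecompressionStream.
    async function inflate(payload) {
      const stream = new Blob([payload])
        .stream()
        .pipeThrough(new DecompressionStream('deflate-raw'));
      return new Uint8Array(await new Response(stream).arrayBuffer());
    }

    // A demo stub typically reads its own source, slices the binary payload
    // out after some marker, and then evals the inflated text:
    //   const code = new TextDecoder().decode(await inflate(payload));
    //   (0, eval)(code);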

Try saving this Normal HTML File and opening it in a text/hex editor: https://0b5vr.com/domain/domain.html


If it's possible to do with vanilla JS in any environment then it's not cheating.


I didn't know about JBang; it looks awesome. Does it work somewhat like uv?


I would actually merge HTML and JS into a single language and bring in the layout part of CSS too (something like having grid and flexbox be elements themselves instead of display styles; Typst kind of showed this is possible in a nice way), and keep CSS only for the styling part.

Or maybe just make it all a single lispy language
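Purely hypothetical sketch of the "layout as elements" idea, with invented Grid/Flex helpers (not any existing framework or proposal):

    // Layout primitives as plain functions that build DOM nodes, instead of
    // display: values buried in CSS. Grid/Flex are made-up names.
    const el = (tag, style, children = []) => {
      const node = document.createElement(tag);
      Object.assign(node.style, style);
      node.append(...children); // strings become text nodes
      return node;
    };

    const Grid = ({ columns, gap }, children) =>
      el('div', { display: 'grid', gridTemplateColumns: columns, gap }, children);

    const Flex = ({ direction = 'row', gap }, children) =>
      el('div', { display: 'flex', flexDirection: direction, gap }, children);

    document.body.append(
      Grid({ columns: '1fr 2fr', gap: '1rem' }, [
        Flex({ direction: 'column' }, ['sidebar', 'nav']),
        Flex({}, ['main content']),
      ])
    );

Styling (colors, fonts, spacing) would then stay in CSS, while structure and layout live in one place.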


I know this is a bit out of scope for these image-editing models, but I always try this experiment [1]: draw a "random" triangle, then ask for some geometric construction on it, and they mess up in very funny ways. These models can't "see" very well. I think [2] is still very relevant.

[1]: https://chatgpt.com/share/6941c96c-c160-8005-bea6-c809e58591...

[2]: https://vlmsareblind.github.io/

