Hacker News | antononcube's comments

That Python package, "NLPTemplateEngine", has Raku and Wolfram Language counterparts:

- Raku, "ML::NLPTemplateEngine"

  - https://raku.land/zef:antononcube/ML::NLPTemplateEngine
- Wolfram Language, "NLPTemplateEngine"

  - https://resources.wolframcloud.com/PacletRepository/resources/AntonAntonov/NLPTemplateEngine/


Related Number Theory notebooks / discussions:

- «Numerically 2026 is unremarkable yet happy: semiprime with primitive roots» https://community.wolfram.com/groups/-/m/t/3594686

- «Happy √2²²-22 -- And other ways to calculate 2026» https://community.wolfram.com/groups/-/m/t/3599161


The integer 2026 is semiprime and a happy number, with 365 as one of its primitive roots. Although 2026 may not be particularly noteworthy in number theory, this provides a great excuse to create various elaborate visualizations that reveal some interesting aspects of the number.
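Those three claims are easy to check directly. Here is a minimal Python sketch (the linked notebooks use Wolfram Language built-ins instead; the helper names below are just illustrative):

```python
from math import isqrt

def is_prime(n: int) -> bool:
    """Trial division; fine for numbers this small."""
    if n < 2:
        return False
    return all(n % d for d in range(2, isqrt(n) + 1))

def is_semiprime(n: int) -> bool:
    """True when n is a product of exactly two primes."""
    for p in range(2, isqrt(n) + 1):
        if n % p == 0:
            return is_prime(p) and is_prime(n // p)
    return False

def is_happy(n: int) -> bool:
    """Iterate the sum of squared digits; happy numbers reach 1."""
    seen = set()
    while n != 1 and n not in seen:
        seen.add(n)
        n = sum(int(d) ** 2 for d in str(n))
    return n == 1

def multiplicative_order(a: int, m: int) -> int:
    """Smallest k > 0 with a**k == 1 (mod m); assumes gcd(a, m) == 1."""
    k, x = 1, a % m
    while x != 1:
        x = (x * a) % m
        k += 1
    return k

print(is_semiprime(2026))               # True: 2026 = 2 × 1013
print(is_happy(2026))                   # True: 2026 → 44 → 32 → 13 → 10 → 1
# 365 is a primitive root mod 2026 iff its order equals φ(2026) = 1012:
print(multiplicative_order(365, 2026))  # 1012
```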


Interesting variant. I might program it for some of the «Rock-Paper-Scissors extensions» here:

https://rakuforprediction.wordpress.com/2025/03/03/rock-pape...

Some of the extensions would need polyhedral dice:

https://demonstrations.wolfram.com/OpenDiceRolls/
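The linked post has its own schemes, but one standard balanced generalization to an odd number n of gestures is: gesture i beats gesture j when (i - j) mod n falls in 1 .. (n - 1) / 2, and an n-sided die roll picks the gesture (hence the polyhedral dice). A hedged Python sketch of that rule:

```python
import random

def beats(i: int, j: int, n: int) -> bool:
    """Balanced n-gesture cyclic game (n odd): gesture i beats gesture j
    when (i - j) mod n lies in 1 .. (n - 1) // 2."""
    return (i - j) % n in range(1, (n - 1) // 2 + 1)

def play(n: int, rng: random.Random) -> str:
    """One round: both players 'roll' an n-sided die for a gesture."""
    a, b = rng.randrange(n), rng.randrange(n)
    if a == b:
        return "draw"
    return "player 1" if beats(a, b, n) else "player 2"

# Classic n = 3 with 0=rock, 1=paper, 2=scissors:
# paper beats rock, scissors beat paper, rock beats scissors.
assert beats(1, 0, 3) and beats(2, 1, 3) and beats(0, 2, 3)
print(play(5, random.Random(0)))
```

For even n (e.g. a d6) the cycle cannot be balanced, which is one reason the extensions get more elaborate.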


This document (notebook) shows transformations of a movie dataset into a format more suitable for data analysis and for making a movie recommender system. It is the first of a three-part series of notebooks that showcase Raku packages for doing Data Science (DS).


Yes, Wolfram Language (WL) -- aka Mathematica -- introduced `Tabular` in 2025. It is a new data structure with a constellation of related functions (like `ToTabular`, `PivotToColumns`, etc.). Using it is 10-100 times faster than using WL's older `Dataset` structure. (In my experience, with both didactic and real-life data of 1,000-100,000 rows and 10-100 columns.)


This blog post (and related notebook) shows how to utilize Large Language Model (LLM) Function Calling with the Raku package "LLM::Functions".

- Package: https://raku.land/zef:antononcube/LLM::Functions

- Notebook: https://github.com/antononcube/RakuForPrediction-blog/blob/m...
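"LLM::Functions" is a Raku package, so the following is not its interface; it is only a language-neutral sketch of the function-calling pattern itself: the model returns a structured call request and the host program dispatches it to a registered local function. The registry decorator and the stubbed model reply are hypothetical.

```python
import json
from typing import Callable

# Registry of local functions the "model" is allowed to call.
TOOLS: dict[str, Callable] = {}

def tool(fn: Callable) -> Callable:
    """Register a function under its own name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def word_count(text: str) -> int:
    return len(text.split())

def dispatch(model_reply: str) -> object:
    """A function-calling reply is a structured request like
    {"function": ..., "arguments": {...}}; dispatch it locally."""
    call = json.loads(model_reply)
    return TOOLS[call["function"]](**call["arguments"])

# Stub of what an LLM configured with tool schemas might return:
reply = '{"function": "word_count", "arguments": {"text": "Raku is fun"}}'
print(dispatch(reply))  # 3
```

In a real workflow the dispatch result is sent back to the model so it can finish its answer.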


Mostly, because Python is not a good "discovery" and prototyping language. It is like that by design -- Guido van Rossum decided that TMTOWTDI is counter-productive.

Another point, which I could have mentioned in my previous response -- Raku has a more elegant and easier-to-use asynchronous computation framework.

IMO, Python's introspection matches Raku's introspection.

Some argue that Python's LLM packages are more numerous and better than Raku's. I agree on the "more" part; I am not sure about the "better" part:

- Generally speaking, different people prefer decomposing computations in different ways.

- When, a few years ago, I re-implemented Raku's LLM packages in Python, Python did not have equally convenient packages.


Ah, yes, Raku's "LLM::Graph" is heavily inspired by the design of the function LLMGraph of Wolfram Language (aka Mathematica).

WL's LLMGraph is more developed and productized, but Raku's "LLM::Graph" is catching up.

I would like to say that "LLM::Graph" was relatively easy to program because of Raku's introspection, wrappers, asynchronous features, and pre-existing LLM-functionality packages. As a consequence, the code of "LLM::Graph" is short.

Wolfram Language does not have that level of introspection, but otherwise it is likely a better choice, mostly for its far greater scope of functionality. (Mathematics, graphics, computable data, etc.)

In principle a corresponding Python "LLMGraph" package can be developed, for comparison purposes. Then the "better choice" question can be answered in a more informed manner. (The Raku packages "LLM::Functions" and "LLM::Prompts" have their corresponding Python packages implemented already.)


Specifications for asynchronous LLM computations with Raku's "LLM::Graph" detail how to manage complex, multi-step LLM workflows by representing them as graphs. By defining the workflow as a graph, developers can execute LLM function calls concurrently, enabling higher throughput and lower latency than synchronous, step-by-step processes.

"LLM::Graph" uses a graph structure to manage dependencies between tasks, where each node represents a computation and edges dictate the flow. Asynchronous behavior is a default feature, with specific options available for control.

