About ten years ago I wrote a program with programmable brushes. It's a bit different from moss in that it runs a physics simulation underneath rather than a sort of shader, but I've always thought this kind of approach has a lot of potential.
It feels _amazing_ to draw a bird in a single stroke!
This was very interesting to read! My drawing program of choice these days is Rebelle, which does have a "swarm" brush (they call them bristle brushes, designed to emulate real paintbrushes), and together with its physical simulation, where paint applied to the canvas has thickness instead of opacity, the results can look absolutely stunning. It has given me the itch to experiment with simulation-based drawing programs myself.
I mean, you get a random game in the author's example :)
But in real life you do not want a random game. That's my point: you need great scaffolding plus exact requirements. Then the prompt used for the implementation doesn't matter too much.
If I understand the author correctly, he chose the hyperbolic model specifically because the story of "the singularity" _requires_ a function that hits infinity.
He's looking for a model that works for the story in the media and runs with it.
Your criticism seems aimed at the story, not at the author's attempt to take it "seriously".
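For what it's worth, the distinction is easy to make concrete. Under hyperbolic growth dx/dt = k·x², the closed-form solution diverges at a *finite* time, which exponential growth never does. A minimal sketch (the constants here are arbitrary, just for illustration):

```python
# Hyperbolic growth dx/dt = k*x^2 has the closed form
# x(t) = x0 / (1 - k*x0*t), which diverges at t* = 1/(k*x0).
# Exponential growth x0*exp(k*t) gets large but never hits infinity.
import math

k, x0 = 0.1, 1.0
t_star = 1 / (k * x0)             # finite-time singularity at t* = 10.0

def hyperbolic(t):
    return x0 / (1 - k * x0 * t)  # only valid for t < t_star

def exponential(t):
    return x0 * math.exp(k * t)

print(hyperbolic(9.99))   # ~1000, exploding as t -> t*
print(exponential(9.99))  # ~2.7, nothing special about t = 10
```

So if the media story of "the singularity" is a date at which the curve literally goes to infinity, only a hyperbolic-style model can tell that story.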
I would love to have a nix flake to install it easily with nix! Given it's built in bash that should be basically no issue whatsoever.
Thanks a lot for this! I was interested in beads but found the author's approach to software development quite erratic and honestly a bit unprofessional. Yes, LLMs are great, but no they shouldn't be the lead developer.
Beads is an incredibly difficult-to-follow mess for something that is at its core a pretty simple idea. You distilled it to its core, I will absolutely be checking this out :)
A new kind of science is one of my favorite books, I read the entirety of the book during a dreadful vacation when I was 19 or 20 on an iPod touch.
It goes well beyond cellular automata; the thousand or so pages all seem to drive home the same few points:
- "I, Stephen Wolfram, am an unprecedented genius" (not my favorite part of the book)
- Simple rules lead to complexity when iterated upon
- The field of computation is as big and important an invention as the field of mathematics
The last one is less explicit, but it's what I took away from it. Computation is of course part of mathematics, but it is a kind of "live" mathematics. Executable mathematics.
Super cool book and absolutely worth reading if you're into this kind of thing.
I would give the same review, without seeing any of this as a positive. NKS was bloviating, grandiose, repetitive, and shallow. The fact that Wolfram himself didn't show that CA were Turing complete, when most theoretical computer scientists would say "it's obvious, and not that interesting", kinda disproves his whole point about being an underappreciated genius. Shrug.
That CA in general are Turing complete is 'obvious'. What was novel is that Wolfram's employee Matthew Cook proved something like Turing completeness for a 1D CA (Rule 110) with two states and a neighbourhood of only three cells.
I say something-like-Turing-completeness because it requires a very specially prepared tape to work, which makes it a bit borderline. (But please look it up properly; this is all from memory.)
Having said all that, the result is a nice optimisation / upper bound on how little you need from a CA to get Turing completeness, but I agree that philosophically nothing much changes compared to using a slightly more complicated CA to get there.
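For concreteness, a Rule 110 simulator fits in a few lines. This sketch uses a periodic boundary on a finite tape; Cook's universality construction actually needs an infinite, specially prepared periodic background, which is exactly the borderline part:

```python
# Rule 110: 1D cellular automaton, two states, 3-cell neighbourhood.
# Each cell's next state is the bit of 110 (0b01101110) indexed by the
# (left, centre, right) neighbourhood read as a 3-bit number.
RULE = 110

def step(cells):
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

tape = [0] * 30 + [1]  # a single live cell; structure grows to the left
for _ in range(10):
    print("".join(".#"[c] for c in tape))
    tape = step(tape)
```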
> S-expressions are indisputably harder to learn to read.
Has this been studied? This is a very strong claim to make without any references.
What if you took two groups of software developers, one with 5-10 years of experience in a popular language of choice, say C, and another of people who write Lisp professionally (Clojure? Common Lisp? Academics who work with Scheme/Racket?), and then had scientists who know how to measure cognitive effort evaluate the difference in reading difficulty?
Isn't the space you're talking about the input images that are close to the textual prompt?
These models are trained on image+text pairs. So if you prompt something like "an apple" you get a conceptual average of all images containing apples. Depending on your dataset, it's likely going to be a photograph of an apple in the center.
> It feels _amazing_ to draw a bird in a single stroke!
Maybe this can give you some inspiration!
https://laura.fm/generative-art/wind/wind.html