At this point I wonder if AIs get updated just to recognize and deal with specific tests like this.
Compared to solving the root issues, it's gotta be easier to add a few extra lines of code that intervene when someone asks about walking or driving to the carwash, or how many "r"s are in the word "strawberry".
I wonder if AI really is the opaque, interesting tech it's claimed to be, or whether it's also thousands of extra if statements catching known/published/problematic/embarrassing inconsistencies.
Anyone here work for any of the big AI companies? Is it just one big black-box, or a black-box with thousands of intervention points and guard rails?
Let me ask a dumb question. Can this be run on a public server (I use dreamhost) with a web interface for others to see? Or is this strictly something that gets run on a local computer?
Well, I'd have to make some modifications first; it isn't recommended right now because there's a settings option with the API key sitting right there for the free world to see, lol. I will work on making a version suitable for hosting, though.
You can throw it on a server and run it for yourself to see (or anyone else, if you trust people or don't care about losing your free API keys). It's just a standard Next.js and FastAPI stack, and there are Dockerfiles in the repo, so it should be pretty straightforward to spin up on a cheap VPS (like a DigitalOcean droplet or Hetzner).
Honestly, if you just want to show it off to a few people, running it locally and exposing it with a Cloudflare Tunnel or Ngrok is probably the path of least resistance.
I WILL work on a hostable version where users have to bring their own keys in the future, though.
Cloudflare Tunnel is solid for quick demos. One thing though — if you're planning the "bring your own keys" version, don't just throw them in a settings page. I went down that road and ended up with keys sitting in localStorage where any XSS could grab them. What worked better for me was having the backend hold the keys and issuing short-lived session tokens to the frontend. More moving parts but way less surface area if something goes wrong.
If you want to host it for friends/trusted devices, you can put it on a Tailscale or ZeroTier style network and just let trusted devices access the server, which addresses the OP's point about open secrets. Or you could probably make a PR to load the settings from somewhere else.
Replying to myself here - I decided to just actually go read wikipedia about this. Here's the answer:
<quote>
By default, when a processor is executing an instruction, its LED is on. In a SIMD program, the goal is to have as many processors as possible working the program at the same time – indicated by having all LEDs being steady on. Those unfamiliar with the use of the LEDs wanted to see the LEDs blink – or even spell out messages to visitors. The result is that finished programs often have superfluous operations to blink the LEDs.
</quote>
There is no documentation of what the LEDs were _actually_ doing. There are descriptions, like 'Random and Pleasing is an LFSR', but nothing that maps to actual pixel coordinates over time. Nearly zero code.
I'm saying this because I need this information, and the fastest way to get information is to state that it's impossible or doesn't exist.
It seems like the CM-1 and CM-2 showed CPU activity, so each light blinked when a CPU did something. Those were the ones designed by Tamiko Thiel.
Then, CM-5 did have the option of having "artistic" or "random patterns" on it, apparently designed or co-designed by Maya Lin. IIRC, the CM-5 is the one appearing in Jurassic Park.
I don't know if there is any firmware code or hardware design available to check how that function worked. Maybe the people at the Computer History Museum know something; they have the first CM-1 and at least one CM-5.
Check their library to see if maybe some of the technical docs say something:
As a developer you had explicit access to them, so you could use them for debugging. A lot of times, they were just running an RNG to look cool though.
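For what it's worth, the 'Random and Pleasing is an LFSR' description mentioned upthread is easy to sketch in spirit, even if the real parameters are lost. Everything below (the tap positions, the 4x4 panel, the bit-to-LED mapping) is a guess for illustration, not recovered CM-5 behavior:

```python
def lfsr_step(state: int) -> int:
    """One step of a 16-bit maximal-length Fibonacci LFSR (taps at bits 16, 14, 13, 11).

    These taps give the full 65535-state period; whether the CM-5 used this
    polynomial is unknown.
    """
    bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
    return (state >> 1) | (bit << 15)

def led_frame(state: int, rows: int = 4, cols: int = 4) -> list[list[int]]:
    """Map the 16 state bits onto a hypothetical rows x cols LED grid, one bit per LED."""
    return [[(state >> (r * cols + c)) & 1 for c in range(cols)] for r in range(rows)]

# Step the register and render a frame per step: each frame looks random,
# but the whole sequence is deterministic and cycles through every nonzero state.
state = 0xACE1  # any nonzero seed works
for _ in range(3):
    state = lfsr_step(state)
    frame = led_frame(state)
```

A maximal-length LFSR would be a plausible choice for "random and pleasing": it needs almost no hardware, never gets stuck, and never shows the all-off state.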