Dwitter is a JavaScript-based social network where coders showcase their creativity by crafting mesmerizing animations and visual effects in just 140 characters of code. Check out this festive collection of fireworks, New Year, and Christmas-themed masterpieces!
Nice. I just remembered my 8-digit login and password, and indeed, my wife also had an account back sometime in 2005-2008. (I brought her there but didn't meet her there, though.)
Some samples from the article are similar to what's found on my favorite generative-art site, Dwitter. It's a social network of coders where we can view, interact with, and remix live code, everything under 140 characters of creative JavaScript: https://www.dwitter.net/top/all
REQUEST: Please add an option for a monospace/code font. I see that <pre><code>...</code></pre> works very well (good monospace block style), but I have to inspect and edit the source.
There are IFS-like fractals that can be generated using small iterative functions. Here are examples in JavaScript that can be remixed (and they are very short, at 140 characters or fewer), e.g.:
https://www.dwitter.net/h/fractal and https://www.dwitter.net/h/fern
Dwitter is a cool social network where JavaScript programmers can share demos, fractals, art algorithms, and interactive code rendered on <canvas>.
I feel like the three levels of representation needed to draw L-systems on a raster display (strings, turtle commands, and Cartesian coordinates) sort of disfavor golfing. I managed to get an ASCII-art IFS down to 259 strokes by using complex numbers instead of vectors: http://canonical.org/~kragen/sw/dev3/cifs.py
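For illustration, here's a minimal sketch of the same idea in Python (not the linked cifs.py): a chaos-game IFS where each map is a single complex multiply-and-add, using the two maps of the Heighway dragon, rendered as ASCII art.

    import random

    # Heighway dragon: two conformal maps, each one complex multiply-and-add
    maps = [lambda z: (1 + 1j) * z / 2,
            lambda z: 1 - (1 - 1j) * z / 2]

    W, H = 79, 40
    grid = [[' '] * W for _ in range(H)]
    z = 0j
    for i in range(100_000):
        z = random.choice(maps)(z)      # chaos game: apply a random map
        if i > 20:                      # skip the initial transient
            # project the dragon's rough bounding box onto the char grid
            col = int((z.real + 0.4) / 1.6 * (W - 1))
            row = int((0.7 - z.imag) / 1.1 * (H - 1))
            if 0 <= col < W and 0 <= row < H:
                grid[row][col] = '#'

    print('\n'.join(''.join(r) for r in grid))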
URGENT - Does anyone have an alternative to OpenAI's embeddings API?
I do have alternatives to GPT's API (e.g., Anthropic's Claude), but I'm not able to use them without an embeddings API (used to generate semantic representations of my knowledge base and to create embeddings from users' queries). We need an alternative to OpenAI's embeddings as a fallback in case of outages.
I'd highly recommend preemptively saving multiple types of embeddings for each of your objects; that way, you can shift to an alternate query embedding at any time, or combine the results from multiple vector searches (see the sketch below). As one of my favorite quotes from Contact goes: "first rule in government spending: why build one when you can have two at twice the price?" https://www.youtube.com/watch?v=EZ2nhHNtpmk
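As a minimal sketch of that (assuming the current openai v1 client and sentence-transformers; the store schema is made up), index each document with one vector per provider:

    from openai import OpenAI                    # needs OPENAI_API_KEY set
    from sentence_transformers import SentenceTransformer

    remote = OpenAI()
    local = SentenceTransformer("all-MiniLM-L6-v2")

    def index_document(doc_id, text, store):
        # one vector per provider, so queries can use whichever model is up
        store[doc_id] = {
            "text": text,
            "embeddings": {
                "openai": remote.embeddings.create(
                    model="text-embedding-ada-002", input=text
                ).data[0].embedding,                      # 1536-dim, remote
                "minilm": local.encode(text).tolist(),    # 384-dim, local
            },
        }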
I've implemented alternate embeddings in SlothAI using Instructor, which is running as an early preview at https://ai.featurebase.com/. I'm currently working on the landing page, which I'm doing manually because ChatGPT is down.
The plan is to add Llama 2 completions to the processors, which would include dictionary completion (keyterms/sentiment/etc.), chat completion, and code completion, for exactly the reasons we're discussing.
To do Instructor embeddings, do the imports and then call the embed() function. It goes without saying that these vectors can't be mixed with other types of vectors, so you would have to reindex your data to make them compatible.
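For the curious, a minimal sketch with the standalone InstructorEmbedding package (embed() above presumably being SlothAI's wrapper; this is the underlying call):

    # pip install InstructorEmbedding sentence-transformers
    from InstructorEmbedding import INSTRUCTOR

    model = INSTRUCTOR("hkunlp/instructor-large")
    # Instructor pairs each text with a task instruction
    pairs = [["Represent the document for retrieval:",
              "Dwitter is a social network for 140-character JS demos."]]
    vectors = model.encode(pairs)   # numpy array, one 768-dim row per pair
    print(vectors.shape)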
This raises the question: what if our databases are maintained using OpenAI's embeddings and the API suddenly goes down? How do we find alternatives that match the already-generated database?
I don't think you can do that easily. If you already have a list of embeddings from a different model, you might be able to generate an alignment somehow, but in general, I wouldn't recommend it.
That's my point: maybe vector DBs in production should have a fallback mechanism for the documents inserted:
1. Generate embeddings using a service such as OpenAI's, which is usually more powerful;
2. Generate backup embeddings using local, more stable models, such as Llama 2 embeddings or simply some BERT-family model (which is more affordable).
When an outage comes up, you simply switch from one vector space to another, as sketched below. Model alignment, though possible, is much harder and more expensive to achieve.
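A minimal sketch of that switch, assuming each document was indexed with one vector per provider as above (embed_fns is a made-up name for an ordered provider-to-function mapping):

    import numpy as np

    def search(query, store, embed_fns):
        # embed_fns: ordered {provider: embed_function}, primary first
        for provider, fn in embed_fns.items():
            try:
                q = np.asarray(fn(query))
                break                    # this provider worked
            except Exception:
                continue                 # outage: fall through to the backup
        else:
            raise RuntimeError("all embedding providers are down")

        def cosine(doc):
            # compare only against vectors from the *same* provider/space
            v = np.asarray(doc["embeddings"][provider])
            return (q @ v) / (np.linalg.norm(q) * np.linalg.norm(v))

        return sorted(store.values(), key=cosine, reverse=True)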
There's been some success in creating translation layers that can convert between different LLMs' embeddings, and even between an LLM and an image-generation model.
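The simplest version is a linear map fitted on paired embeddings of the same texts (a sketch, not any particular published method):

    import numpy as np

    def fit_translation(A, B):
        # A: (n, d_a) embeddings from model A; B: (n, d_b) from model B,
        # both for the same n texts. Least squares: minimize ||A @ W - B||.
        W, *_ = np.linalg.lstsq(A, B, rcond=None)
        return W

    # A query embedded in space A can then be searched against a B-space
    # index via q_b = q_a @ W (quality depends heavily on the model pair).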
Is this equivalent to a 14 gram bullet (10x less than your example) travelling at 1000 km/h (10x more than your example)? Or a 1.4 gram bullet at 10,000 km/h hitting you?
I don't think so, no. Kinetic energy grows with the square of velocity and linearly with mass, so the equivalent would be a 14 g bullet travelling at ~316 km/h, or a 1.4 g bullet travelling at 1000 km/h. But for what it's worth, I think most people have more experience catching baseballs than bullets (and I don't know how much bullets typically weigh or how fast they travel).
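Running the numbers (assuming the upthread example was 140 g at 100 km/h):

    def ke(m_grams, v_kmh):
        m, v = m_grams / 1000, v_kmh / 3.6   # convert to kg and m/s
        return 0.5 * m * v**2                # kinetic energy in joules

    print(ke(140, 100))   # baseline: ~54 J
    print(ke(14, 316))    # 10x less mass, ~sqrt(10)x more speed: ~54 J
    print(ke(14, 1000))   # 10x more speed instead: ~540 J, 10x the baseline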
But in order to generate the vectors, I understand that it's necessary to use OpenAI's embeddings API, which would grant OpenAI access to all client data at the time of vector creation. Is this understanding correct? Or is there a solution for creating high-quality (semantic) embeddings, similar to OpenAI's, but in a private-cloud/on-premises environment?
Enterprises with Azure contracts are using the embeddings endpoint from Azure's OpenAI offering.
It is possible to use Llama or BERT models to generate embeddings using LocalAI (https://localai.io/features/embeddings/). This is something we are hoping to enable in LLMStack soon.
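LocalAI exposes an OpenAI-compatible API, so (as a sketch, with a placeholder model name that would have to match your LocalAI config) the stock client can simply be pointed at the local server:

    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1",  # local server
                    api_key="not-needed")                 # LocalAI ignores it
    resp = client.embeddings.create(model="bert-embeddings",  # placeholder
                                    input="hello world")
    print(len(resp.data[0].embedding))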