
Dwitter is a JavaScript-based social network where coders showcase their creativity by crafting mesmerizing animations and visual effects in just 140 characters of code. Check out this festive collection of fireworks, New Year, and Christmas-themed masterpieces!


Here is a sample of an animated XOR texture in just 122 characters of JS code: https://www.dwitter.net/d/255

And here are several variations using other binary operators to create the Sierpinski triangle: https://www.dwitter.net/h/sierpinski
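
To give a feel for the trick without opening the links, here is a rough ASCII sketch of the same bitwise ideas in Python (not a dweet; the grid size and character ramp are arbitrary choices of mine). XORing the pixel coordinates produces the banded texture, and ANDing them leaves a Sierpinski triangle:

    # ASCII sketch of the bitwise tricks behind the dweets (sizes and ramp are made up).
    W, H = 64, 32
    ramp = ' .:-=+*#%@'
    for y in range(H):
        print(''.join(ramp[(x ^ y) % len(ramp)] for x in range(W)))      # XOR texture
    print()
    for y in range(H):
        print(''.join('#' if (x & y) == 0 else ' ' for x in range(W)))   # Sierpinski via AND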


* Which languages is it available in?

* Does the system automatically detect the language?

* What are the hardware requirements for it to work?


In one of those random connections, I met my wife through ICQ.


Nice. I just remembered my 8-digit login and password, and indeed, my wife also had an account back sometime in 2005-2008. (I brought her there, but didn't meet her there, though.)


Some samples from the article are similar to what is found on my favorite generative art site, Dwitter. It's a social network of coders where we can view, interact with, and remix live code, everything under 140 characters of creative JavaScript: https://www.dwitter.net/top/all


This is the true true


This is magical!


REQUEST: Please add an option for a monospace/code font. I see that <pre><code>...</code></pre> works very well (good monospace block style), but I have to inspect and edit the source.

With this, we would be able to use it for code!

THANKS!


Thank you for the suggestion!


There are IFS-like fractals that can be generated using small iterative functions. Here are examples in JavaScript that can be remixed (and they are very short, only 140 characters or less). E.g.: https://www.dwitter.net/h/fractal and https://www.dwitter.net/h/fern
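For a rough idea of how little code an IFS needs, here is a generic chaos-game sketch in Python (not taken from any of the linked dweets): pick a random corner of a triangle, jump halfway toward it, repeat, and the Sierpinski gasket appears.

    import random

    # Chaos-game IFS sketch: three affine maps, each halving the distance to one vertex.
    verts = [(0, 0), (78, 0), (39, 38)]
    grid = [[' '] * 80 for _ in range(40)]
    x, y = 0.0, 0.0
    for _ in range(20000):
        vx, vy = random.choice(verts)
        x, y = (x + vx) / 2, (y + vy) / 2
        grid[int(y)][int(x)] = '*'
    for row in reversed(grid):
        print(''.join(row))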

Dwitter is a cool social network where JavaScript programmers can share demos, fractals, art algorithms and interactive code viewed on <canvas>.


i'd say these are still ifses, just not linear ones

dwitter looks pretty cool. like twitter if it was designed for creativity and beauty instead of trolling

i did a golfed emoji ifs the other day in python but it's 288 characters, not 140 or even 280: http://canonical.org/~kragen/sw/dev3/hilbert.py

i feel like the three levels of representation needed to draw l-systems on a raster display (strings, turtle commands, and cartesian coordinates) sort of disfavor golfing. i managed to get an ascii-art ifs down to 259 strokes by using complex numbers instead of vectors: http://canonical.org/~kragen/sw/dev3/cifs.py
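To make the complex-number point concrete, a tiny sketch of my own (not from cifs.py): a single complex multiply-and-add packs the whole rotate/scale/translate step that would otherwise need the usual sine/cosine vector arithmetic, which is where the character savings come from.

    import cmath, math

    # One affine step done both ways (toy values; same result either way).
    s, t, dx, dy = 0.5, math.pi / 3, 1.0, 2.0
    x, y = 3.0, 4.0
    vx = s * (x * math.cos(t) - y * math.sin(t)) + dx
    vy = s * (x * math.sin(t) + y * math.cos(t)) + dy
    z = s * cmath.exp(1j * t) * complex(x, y) + complex(dx, dy)
    print((vx, vy), (z.real, z.imag))   # identical point, far fewer strokes in complex form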


That's great, thanks for sharing!


URGENT - Does anyone have an alternative to OpenAI's embeddings API? I do have an alternative to GPT's API (e.g. Anthropic Claude), but I'm not able to use it without an embeddings API (used to generate semantic representations of my knowledge base and also to create embeddings from users' queries). We need an alternative to OpenAI's embeddings as a fallback in case of outages.


https://www.anthropic.com/product recommends the open-source SBERT: https://www.sbert.net/examples/applications/computing-embedd...

Highly recommend preemptively saving multiple types of embeddings for each of your objects; that way, you can shift to an alternate query embedding at any time, or combine the results from multiple vector searches. As one of my favorite quotes from Contact says: "first rule in government spending: why build one when you can have two at twice the price?" https://www.youtube.com/watch?v=EZ2nhHNtpmk
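A minimal sketch of what that dual-embedding setup can look like (model names and field layout are placeholders of mine; it assumes the pre-1.0 openai client and the sentence-transformers package):

    # Sketch: store two embeddings per document so queries can later use either space.
    import openai
    from sentence_transformers import SentenceTransformer

    sbert = SentenceTransformer("all-MiniLM-L6-v2")

    def embed_both(texts):
        ada = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
        local = sbert.encode(texts)
        return [
            {"text": t,
             "embedding_ada": ada["data"][i]["embedding"],   # 1536-dim
             "embedding_sbert": local[i].tolist()}           # 384-dim
            for i, t in enumerate(texts)
        ]

    docs = embed_both(["How do I reset my password?", "Refund policy for annual plans"])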


Azure OpenAI Service is up and provides the same models as OpenAI: https://azure.status.microsoft/status


Is it still "private" as in you have to request access?


It’s publicly available, but you still do have to request access I believe.


Choose whichever one outperforms ada-002 for your task here: https://huggingface.co/spaces/mteb/leaderboard


Oh no, the 3-line AI wrapper apps are panicking because they actually don't know how to write any code.


I've implemented alternate embeddings in SlothAI using Instructor, which is running an early preview at https://ai.featurebase.com/. Currently working on the landing page, which I'm doing manually because ChatGPT is down.

The plan is to add Llama 2 completions to the processors, which would include dictionary completion (keyterm/sentiment/etc.), chat completion, and code completion, for exactly the reasons we're discussing.

Here's the code for the Instructor embeddings: https://github.com/FeatureBaseDB/Laminoid/blob/main/sloth/sl...

To do Instructor embeddings, do the imports and then call the embed() function. It goes without saying that these vectors can't be mixed with other types of vectors, so you would have to reindex your data to make them compatible.
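For reference, the underlying Instructor usage (via the InstructorEmbedding package) looks roughly like the sketch below; this is not the SlothAI embed() wrapper itself, and the instruction string is just an example of mine.

    # Rough sketch of Instructor embeddings; model name and instruction text are examples.
    from InstructorEmbedding import INSTRUCTOR

    model = INSTRUCTOR("hkunlp/instructor-large")
    pairs = [["Represent the document for retrieval:",
              "FeatureBase is a real-time analytics database."]]
    vectors = model.encode(pairs)   # each input is an [instruction, text] pair
    print(vectors.shape)            # (batch, embedding_dim)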


What about Azure? You can set up an ada-002 embeddings deployment there.


This raises the question: what if our databases are maintained using OpenAI's embeddings and the API suddenly goes down? How do we find an alternative that matches the already-generated database?


I don't think you can do that easily. If you already have a list of embeddings from a different model, you might be able to generate an alignment somehow, but in general, I wouldn't recommend it.


That's my point. Maybe vector DBs in production should have a fallback mechanism for the documents inserted:

1. Generate embeddings using services such as OpenAI, which is usually more powerful;

2. Generate backup embeddings using local, more stable models, such as Llama 2 embeddings or simply some BERT-family model (which is more affordable).

When an outage comes up, you simply switch from one vector space to another (sketched below). Model alignment, though possible, is much harder and more expensive to achieve.
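A rough sketch of that switch (client and model names are examples of mine; it assumes both embedding sets were stored at insert time, e.g. "embedding_ada" / "embedding_sbert" fields, and the pre-1.0 openai client):

    import openai
    from sentence_transformers import SentenceTransformer

    sbert = SentenceTransformer("all-MiniLM-L6-v2")

    def embed_query(query):
        try:
            resp = openai.Embedding.create(model="text-embedding-ada-002", input=[query])
            return "embedding_ada", resp["data"][0]["embedding"]
        except Exception:
            # Outage (or rate limit): fall back to the local model and the backup index.
            return "embedding_sbert", sbert.encode([query])[0].tolist()

    field, vector = embed_query("reset password")
    # then search the vector DB index that corresponds to `field`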


To my knowledge, you cannot mix embeddings from different models. Each dimension has a different meaning for each model.


There's been some success in creating translation layers that can convert between different LLM embeddings, and even between an LLM and an image generation model.
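A toy sketch of the simplest such translation layer (ordinary least squares on paired embeddings; the shapes and random data below are placeholders, and real alignment methods are trained more carefully):

    import numpy as np

    # Fit a linear map W so that A @ W approximates B, using texts embedded in both spaces.
    A = np.random.randn(1000, 384)    # stand-in: 1000 texts embedded by model A (384-dim)
    B = np.random.randn(1000, 1536)   # the same 1000 texts embedded by model B (1536-dim)
    W, *_ = np.linalg.lstsq(A, B, rcond=None)
    translated = A[0] @ W             # carry one A-space vector into B's space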


Be careful, because one model's embeddings may not be compatible with your current embeddings.


Amazon Bedrock has an embeddings option


Is this equivalent to a 14 gram bullet (10x less than your example) travelling at 1000 km/h (10x more than your example)? Or a 1.4 gram bullet at 10,000 km/h hitting you?


I don't think so, no. Kinetic energy grows with the square of velocity, and linearly with mass. So a 14g bullet travelling at 316km/h or a 1.4g bullet travelling at 1000km/h. But for what it's worth, I think most people have more experience catching baseballs than bullets (and I don't know how much bullets typically weigh or how fast they travel).
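A quick check of those numbers with E = 1/2 * m * v^2, assuming the parent example was 140 g at 100 km/h (which is what the 10x comparisons imply):

    # Kinetic energy check (SI units); 140 g at 100 km/h is the assumed original example.
    def ke(mass_kg, speed_kmh):
        v = speed_kmh / 3.6
        return 0.5 * mass_kg * v ** 2

    print(ke(0.140, 100))    # ~54 J: the baseball-ish original
    print(ke(0.014, 316))    # ~54 J: 14 g needs ~316 km/h (100 * sqrt(10))
    print(ke(0.0014, 1000))  # ~54 J: 1.4 g needs 1000 km/h
    print(ke(0.014, 1000))   # ~540 J: 14 g at 1000 km/h would be 10x the energy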


But, in order to generate the vectors, I understand that it's necessary to use OpenAI's Embeddings API, which would grant OpenAI access to all client data at the time of vector creation. Is this understanding correct? Or is there a solution for creating high-quality (semantic) embeddings, similar to OpenAI's, but in a private cloud / on-premises environment?


Enterprises with Azure contracts are using the embeddings endpoint from Azure's OpenAI offering.

It is possible to use llama or bert models to generate embeddings using LocalAI (https://localai.io/features/embeddings/). This is something we are hoping to enable in LLMStack soon.
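Since LocalAI exposes an OpenAI-compatible endpoint, the client-side change can be as small as the sketch below (host, port, and model name depend entirely on your LocalAI configuration; this assumes the pre-1.0 openai client):

    import openai

    openai.api_base = "http://localhost:8080/v1"   # your LocalAI instance
    openai.api_key = "not-needed-for-localai"
    resp = openai.Embedding.create(model="bert-embeddings", input=["private on-prem text"])
    vector = resp["data"][0]["embedding"]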


Sentence-BERT is at least as good as OpenAI embeddings. But I think, more importantly, the Azure OpenAI API is already SOC 2 and HIPAA compliant.

