Maybe I’m too naive but I can never tell when something is written by AI. If it works with next most likely token, doesn’t that mean it has encountered the patterns you’re picking out in lots and lots of text written by humans? Please educate me if I’m wrong.
> it has encountered the patterns you’re picking out in lots and lots of text written by humans?
In pre-training data, yes
There are also post-training datasets, where the weights are adjusted to conform to human preference. These datasets are created by groups of thousands of people all following a 40-page guide, and these guides include examples. Annotators over-index on those examples, so sentences with those structures end up over-represented in the post-training data.
Same. Everyone wants to feel smart by trying to point out that every piece of writing is AI generated now, but most of us (myself included) are just average writers. All of the LLMs generate phrasing I often use.
Okay: since men have two and women have zero, the average person has one testicle. But if you use “average” as a meaningful guideline, you’re going to have trouble, because very few people actually have one testicle; nearly everyone has two or zero. That’s my commentary on the quality of AI writing: saying AI writes like the average person may describe almost nobody.
Try reading some different news sources. If you don’t trust the other sources in the US, try English-language sources in Germany or France, or the BBC, Reuters, etc. You might be surprised how few of them report that ICE is only out for, and only arresting, criminals.
I’m waiting to find out that experiments are being performed on humans in ICE detention centers. It won’t be all of them, of course, but it will happen in at least one. The Germans supporting Hitler would never have believed it either, but the exact same mindset is running the show here. Just a matter of time.
Space is not cold. Space is not hot either. Temperature is a property of matter, and empty space does not have one. Space is also a perfect insulator against conduction and convection. As for radiation, that’s where the problem lies: the sun heats objects up via its radiation.
These are the basics; I’m not an expert. But I don’t think you have anything useful at all to say here.
I think so too. But between code quality issues and LLMs not handling the hard edge cases, my guess is that most of those startups will be unable to scale in any meaningful way. It will be interesting to watch.
I wouldn’t call this next-gen SQLite. How can it be, when the “QL” in SQLite stands for “Query Language” and this doesn’t have one? This is an object serialization library.
Not really. This DB allows traversing (deeply nested) data structures without loading them into memory. E.g. in Clojure you can do:
```
(get-in db [:people "john" :address :city])
```
Where `:people` is a key in a huge (larger than memory) map. This database will only touch the referenced nodes when traversing, without loading the whole thing into memory.
So the 'query language' is actually your programming language. To the programmer this database looks like an in-memory data structure, when in fact it's efficiently reading data from the disk. Plus immutability of course (meaning you can go back in history).
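To illustrate that point: the same `get-in` call works identically on an ordinary in-memory map, which is what makes the host language itself the query language. Here is a minimal sketch using a plain Clojure map (the map contents are hypothetical, chosen to match the shape of the example above):

```clojure
;; A plain in-memory map with the same shape as the db in the example.
(def db {:people {"john" {:address {:city "Oslo"}}}})

;; The exact call from the example above -- navigates the nested maps
;; by following each key in the path vector in turn.
(get-in db [:people "john" :address :city])
;; => "Oslo"
```

A disk-backed implementation only needs to make its nodes respond to the same lookup operations; callers then traverse it with `get-in` exactly as they would a literal map, and only the nodes along the path are ever read.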