Framework 13 is the original (and best, tbh). The 12 and 16 unfortunately don't hit the balance of feeling premium the way the 13 does.
This thread has me wondering if they really diluted their reputation with these new devices...
I meant 13, then. I bought my Framework from the third or fourth batch that was available (going from memory), in November 2021 (going from email).
Nothing premium about it at all. That's ridiculous in my opinion; the build quality is cringe in comparison to an Apple's. On the other hand, I've got four years of use out of it, so… whatever.
I badly wanted to love it, and they did the right thing with the camera and audio cutoff switches and their placement. But that wasn't enough to make it a great laptop.
By all accounts the Framework 13 hits the balance well, feeling basically like any other premium metal laptop. Maybe, based on that reputation alone, the author decided to buy the 16.
But the 16 is meant to be a chonky desktop replacement with a giant GPU enclosure on the back. Just by virtue of what it is, it's never going to feel very nice.
That the author's other option was a MacBook tells me they neglected to research what they were buying.
What they really wanted was a Framework 13! It basically IS a MacBook with replaceable components and full repairability.
Microsoft Teams: Teams (Personal), Teams (Work or School), Teams (New). A year or so ago you might have had all three of these installed at the same time.
And that's AFTER they changed the names of Personal and Work. Before that, you'd have Microsoft Teams and Microsoft Teams. One was purple with a white T, and the other was white with a purple T.
If you tried to log in with the personal version, it would error out without giving any indication that you might be using the wrong version. Let's be real: NO ONE is using Teams in their personal life. /rant
They're just infuriating on every level when it comes to naming things.
To be honest, it's MUCH easier to have one source to blame when things go down. If a small-to-medium vendor's website goes down on a normal day, some poor IT guy is going to be fielding calls all day.
If that same vendor goes down because Cloudflare went down, oh well. Most people already know and won't bother to ask when your site will be back up.
There are people in the FOSS realm running VERY competent operations for a simple living wage, or less.
Take KDE, for example. It's easy to argue they've accomplished MORE than Mozilla has in the last decade.
Their desktop ships on every Steam Deck (and on machines from a few niche laptop manufacturers), and they have a vast ecosystem of applications, albeit some more actively developed than others.
Their structure is entirely different from Mozilla's, so it's hardly a direct comparison. But the main point is that Mozilla's traditional corporate structure seems to be a millstone.
They could have stashed most of their Google funding and kept a solid team of passionate maintainers paid in perpetuity. That goodwill could have had volunteers contributing directly to Firefox instead of forking it.
I truly believe you're right on that. Apple seems to be the most visible example of a company using LLMs in a way that will make long-term sense.
The most logical future of AI is strictly on-device: capable enough to understand the user and make sense of the algorithmic anarchy that is our phones.
THAT is something exciting and practical, something that doesn't require magical thinking to desire.
You can't just take the cost of training out of the equation...
If these companies plan to stay afloat, they have to actually recoup the tens of billions they've spent at some point. That's what the parent comment meant by "free AI".
Training is expensive, but it's not that expensive either. It takes just one of those super-rich players paying the training costs and then releasing the weights to deny the other players a moat.
If your economic analysis depends on "one of those super-rich players" paying for it to work, it isn't so much analysis as wishful thinking.
All the hundreds of billions of dollars put into the models so far were not donations. Either they make it back to the investors or the show stops at some point.
And with a major chunk of proponents' arguments being "it will keep getting better", if you lose that, what have you got? "This thing can spit out boilerplate code, rearrange documents, and sometimes corrupt data silently in hard-to-detect ways, but hey, you can run it locally and cheaply"?
The economic analysis is not mine, and I thought it was pretty well-known by now: Meta is not in the compute biz and doesn't want to be in it, so by releasing Llamas, it denies Google, Microsoft and Amazon the ability to build a moat around LLM inference. Commoditize your complement and all that. Meta wants to use LLMs, not sell access to them, so occasionally burning a billion dollars to train and give away an open-weight SOTA model is a good investment, because it directly and indirectly keeps inference cheap for everyone.
No, it just means that the big players have to keep advancing SOTA to make money; Llama lagging ~6 months behind just means there's only so much they can charge for access to the bleeding edge.
Short-term, it's the normal dynamic for a growing/evolving market. Long-term, the Sun will burn out and consume the Earth.
The cost to improve training increases exponentially with every milestone. No vendor is even coming close to recouping the costs now. Not to mention the quality data needed to feed the training.
The R&D is running on hopes that increasing the scale of their models (yes, by actual orders of magnitude) will eventually hit a miracle that makes their company explode in value and power. They can't explain what that could even look like... but they NEED ever more exorbitant amounts of funding flowing in.
This truly isn't a normal research-to-return ratio.
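To put rough numbers on "exponentially": here's a back-of-envelope sketch in Python, using the common FLOPs ≈ 6 × parameters × tokens rule of thumb. Every figure below is an illustrative assumption, not a vendor number.

    # Rough training-compute estimate via the standard approximation:
    # FLOPs ~ 6 * parameter_count * training_tokens.
    def train_flops(params: float, tokens: float) -> float:
        return 6 * params * tokens

    # Hypothetical models; compute-optimal scaling grows the
    # training data roughly in step with the parameter count.
    small = train_flops(10e9, 200e9)   # 10B params on 200B tokens
    big = train_flops(100e9, 2000e9)   # 100B params on 2T tokens

    print(big / small)  # 100.0 -- one order of magnitude in model size
                        # costs ~two orders of magnitude in compute

Each step up the scale ladder multiplies the bill like that, while the quality gains per step keep shrinking.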
Luckily, what we do have already is kinda useful, and distilling models down does show promise. In 5 years I doubt we'll have the post-labor dys/utopia we're being hyped up for. But we may have some truly badass models that can run directly on our phones.
Like you said, Llama and local inference are cheap. So that's the most logical direction all of this is taking us.
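How cheap? Running a quantized open-weight model locally is a few lines these days. A minimal sketch with llama-cpp-python; the model path is a placeholder for whatever GGUF checkpoint you've downloaded:

    # pip install llama-cpp-python
    from llama_cpp import Llama

    # Placeholder path: any quantized GGUF checkpoint works here.
    llm = Llama(model_path="./llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=2048)

    out = llm("Why does local inference keep hosted API pricing honest?",
              max_tokens=128)
    print(out["choices"][0]["text"])

A 4-bit 8B model needs roughly 5 GB of memory, which is why phone-class hardware is a plausible target.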
Nah, the vendors have generally been open about the limits of scaling. The bet isn't that one last order-of-magnitude increase will hit a miracle; the bet is on R&D figuring out a new way to get better model performance before the last one hits diminishing returns. Which, for now, is what's been consistently happening.
There's risk to that assumption, but it's also a reasonable one. Let's not forget the whole field is both new and has seen stupid amounts of money pumped into it over the last few years; this is an inflationary period, there are tons of people researching every possible angle, but that research takes time. It's a safe bet that there are still major breakthroughs ahead of us, to be achieved within the next couple of years.
The risky part for the vendors is whether those breakthroughs will come soon enough for them to capitalize on and keep their lead (and profits) for another year or so until the next breakthrough hits, and so on.