A lot of the complaints here don't make much sense and read like the author has never used an embedded Linux device. The previously reported bugs are more substantial: hardcoded secrets for JWT access and firmware encryption, everything running as root, etc.
However, "Chinese product uses Chinese DNS servers and it's hard to change them" or "no systemd nor apt installed" are totally expected and hardly make it "riddled with security flaws". Same with tcpdump and aircrack being installed; these hardly compromise security more than having everything run as root does.
I would expect most users of this device will not be exposing the web interface externally, and the fact that they ship with Tailscale installed is actually impressive. I can't imagine the lack of CSRF protection will be a vulnerability for 99% of users.
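For context on how small that missing protection is: a minimal sketch of synchronizer-token CSRF protection in plain Python, using only the stdlib. All names here are hypothetical and this is not the device's actual code, just an illustration of the pattern.

```python
import hmac
import secrets

# Toy sketch of token-based CSRF protection (names are hypothetical).
# A per-session random token is embedded in each form and checked on submission.

def issue_csrf_token(session: dict) -> str:
    """Generate a random token and remember it in the session."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token  # would be embedded in a hidden form field

def check_csrf_token(session: dict, submitted: str) -> bool:
    """Constant-time comparison of the submitted token against the session's."""
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted)

session = {}
tok = issue_csrf_token(session)
assert check_csrf_token(session, tok)           # legitimate form post
assert not check_csrf_token(session, "forged")  # cross-site forgery attempt
```

The point being: it's a dozen lines, but it only matters if the interface is reachable from a hostile origin in the first place.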
I am curious what "weird" version of WireGuard the author is referring to, but based on their apparent lack of knowledge of embedded systems in general I would not be shocked to find that it's totally innocuous.
Also, what do you really expect at a €30 or €60 price point, on a relatively low-volume product? That it even does what is promised is already a good start to me. And that probably tells you their priorities: start from an already-working image with wide feature support, then add the features needed for the specific use case, then ship it.
Hanlon's Razor at work; most of the shortfalls described in the article point to incompetence more than malice.
I find it strange, though: I would call these the shortcomings of a crowdfunded project, but the author took them as a malicious, planned act to take over target computers and networks.
As far as I remember, some of the botnets are formed by routers that vendors refused to patch, because they're no longer being sold and not profitable to do so.
Yeah... their list of issues speaks more to a lack of experience with and understanding of Linux and embedded Linux devices, wrapped in xenophobic nonsense.
Battery life that needs to last all day is a first-world computing problem. Most people leave their laptops plugged in, and as long as you don't run 10 Chromium apps or other resource hogs in the background, you will easily get 10-12 hours of battery.
I like the car analogy for IQ. Having an engine with 50% or more horsepower above the people around you is only useful if you know how to handle it, how to steer, etc.
The transmission is another great analogy, IMHO for communication skills. Applying full power to the tarmac from a dead stop is a great way to spin your tires.
> I like the car analogy for IQ. Having an engine with 50% or more horsepower above the people around you is only useful if you know how to handle it, how to steer, etc.
And it's not useful at all in a typical traffic situation; you are still limited to the speed of the one in front of you. Intelligence is only useful in environments that allow it to be, but most places are designed for typical people.
In some cases it even comes with similar outcomes for mood and mental health to what I imagine being stuck in traffic all day, every day, would do to a person.
The very notion of IQ reduces the mind to a receptacle for some ineffable thing called 'intelligence'. One may as well invent a CQ (Comedy Quotient) and start speculating about who has the higher CQ: Robin Williams or Dave Chappelle.
Agreed, it's when. They're hoping to stave it off or maybe stretch out the pop into a correction by all hedging together with all these incestuous deals, but you can't hold back the tide. They debuted this tech way too early, promised way too much, and now the market is wary about buying AI products until more noise settles out of the system.
> They debuted this tech way too early, promised way too much,
Finally, some rational thought amid the AI insanity. The entire 'fake it til you make it' aspect of this is ridiculous. Sadly, the world we live in means that you can't build a product and hold its release until it works; you have to be first to release even if it's not working as advertised, and you can keep brushing off critiques with "it's on the roadmap". Those who are not as tuned in will just assume it works and that nothing nefarious is going on. For as long as we've had paid LLM apps, I'm still amazed at the number of people that do not know that the output is still not 100% accurate. There are also people who describe getting a response as the model "thinking", and misleading terms like "searching the web..." when everyone on this forum knows it's not a live search.
> sadly, the world we live in means that you can't build a product and hold its release until it works. you have to be first to release even if it's not working as advertised.
You absolutely can and it's an extremely reliable path to success. The only thing that's changed is the amount of marketing hype thrown out by the fake-it vendors. Staying quiet and debuting a solid product is still a big win.
> I'm still amazed at the number of people that do not know that the output is still not 100% accurate.
This is the part that "scares" me. People who do not understand the tool think it's ACTUALLY INTELLIGENT. Not only is it not intelligent, LLMs are not even ACTUALLY language models: few are trained on only language data, and none work on language units (letters, words, sentences); tokens are abstractions of those. They're OUTPUT modelers. And they're absolutely nowhere near ready to be let loose unattended on important things. There are already people losing careers over AI messes, like lawyers who had AI write a motion and then used AI to appeal the resulting sanctions. Etc.
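To illustrate the point about tokens being abstractions rather than words or letters: a toy greedy longest-match tokenizer over a made-up vocabulary (not any real model's tokenizer; BPE-style vocabularies are learned, but the resulting units look like this).

```python
# Toy greedy longest-match tokenizer over an invented subword vocabulary,
# just to show that "tokens" need not align with words or letters.
VOCAB = {"un", "break", "able", "u", "n", "b", "r", "e", "a", "k", "l"}

def tokenize(text: str) -> list[str]:
    tokens, i = [], 0
    while i < len(text):
        # take the longest vocabulary entry matching at position i
        for j in range(len(text), i, -1):
            if text[i:j] in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:
            raise ValueError(f"cannot tokenize at position {i}")
    return tokens

print(tokenize("unbreakable"))  # ['un', 'break', 'able'] — subwords, not letters, not one word
```

The model only ever sees IDs for units like `'un'` and `'able'`, which is one reason letter-level questions trip LLMs up.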
And I think that was ultimately the biggest unforced error of these AI companies and the ultimate reason for the coming bubble crash. They didn't temper expectations at all, the massive gap between expectation and reality is already costing companies huge amounts of money, and it's only going to get worse. Had they started saying, "these work well, but use them carefully as we increase reliability" they'd be in a much better spot.
In the past 2 years I've been involved in several projects trying to leverage AI, and all but one has failed. The most spectacular failure was Microsoft's Dragon Copilot. We piloted it with 100 doctors; after a few months we had a 20% retention rate, and by the end of a year, ONE doctor still liked it. We replaced it with another tool that WORKS, docs love it, and it was 12.6% of the cost, roughly an eighth of the price. MS was EXTREMELY unhappy we canceled after a year and tried to throw discounts at us, but ultimately we had to say "the product does not work nearly as well as the competition."
I think they already have that confirmation. When we bailed the banks out in 08 we basically said "If you're big enough that we'd be screwed without you then take whatever risks you like with impunity".
That's a simplification, of course, but the core of the lesson is there. We have actually kept up all the practices that led to the housing crash (MBS, predatory lending, mixing investment and traditional banking).
> "If you're big enough that we'd be screwed without you then take whatever risks you like with impunity."
I know financially it will be bad because number not go up and number need go up.
But do we actually depend on generative/agentic AI at all in meaningful ways? I’m pretty sure all LLMs could be Thanos snapped away and there would be near zero material impact. If the studies are at all reliable all the programmers will be more efficient. Maybe we’d be better off because there wouldn’t be so much AI slop.
It is very far from clear that there is any real value being extracted from this technology.
The government should let it burn.
Edit: I forgot about “country girls make do”. Maybe gen AI is a critical pillar of the economy after all.
> I’m pretty sure all LLMs could be Thanos snapped away and there would be near zero material impact.
I mostly agree, but I don't think it's the model developers that would get bailed out. OpenAI & Anthropic can fail, and should be let to fail if it comes to that.
Nvidia is the one that would get bailed out. As would Microsoft, if it came to that.
I also think they should be let to fail, but there's no way the US GOV ever allows them to.
> Nvidia is the one that would get bailed out. As would Microsoft, if it came to that.
> I also think they should be let to fail, but there's no way the US GOV ever allows them to.
There's different ways to fail, though: liquidation, and a reorganization that wipes out the shareholders.
OpenAI could be liquidated and all its technology thrown in to the trash, and I wouldn't shed a tear, but Microsoft makes (some) stuff (cough, Windows) that has too much stuff dependent on it to go away. The shareholders can eat it (though I think broad-based index funds should get priority over all other shareholders in a bankruptcy).
Why would Nvidia need a bailout? They have $10 billion in debt and $60 billion in cash... Or would a bailout mean finally throwing away any trust in the market and just propping up valuations? That would lead to inevitable doom.
I expect the downvotes to come from this as they always seem to do these days, but I know from my personal experience that there is value in these agents.
Not so much for the work I do for my company, but having these agents has been a fairly huge boon in some specific ways personally:
- search replacement (beats google almost all of the time)
- having code-capable agents means my pet projects are getting along a lot more than they used to. I check in with them in moments of free time and give them large projects to tackle that will take a while (I've found that having them do these in Rust works best, because it has the most guardrails)
- it's been infinitely useful to be able to ask questions when I don't know enough to know what terms to search for. I have a number of meatspace projects that I didn't know enough about to ask the right questions, and having LLMs has unblocked those 100% of the time.
Economic value? I won't make an assessment. Value to me (and I'm sure others)? Definitely would miss them if they disappeared tomorrow. I should note that given the state of things (large AI companies with the same shareholder problems as MAANG) I do worry that those use cases will disappear as advertising and other monetizing influences make their way in.
Slop is indeed a huge problem. Perhaps you're right that it's a net negative overall, but I don't think it's accurate to say there's not any value to be had.
I'm glad you had positive experiences using this specific technology.
Personally, I had the exact opposite experience: Wrong, deceitful responses, hallucinations, arbitrary pointless changes to code...
It's like that one junior I requested to be removed from the team after they peed in the codebase one too many times.
On the slop I have two sentiments: lots of slop = higher demand for my skills to clean it up. But also lots of slop = worse software on probably most things, impacting not just me but also friends, family and the rest of humanity. At least it's not only a downside :/
If Meta or Google disappeared overnight, it would be, at worst, a minor annoyance for most of the world. Despite the fact that both companies are advertising behemoths, marketing departments everywhere would celebrate their end.
Then they would just use another Messenger or fall back on RCS/SMS.
The only reason WhatsApp is so popular is that so many people are on it, but you have all you need (their phone number) to contact them elsewhere anyway.
So if WhatsApp had an outage, but you needed to communicate to someone, you wouldn't be able to? Don't you have contacts saved locally, and other message apps available?
In most of Asia, Latin America, Africa, and about half of Europe?
You’d be pretty stuck. I guess SMS might work, but it wouldn’t for most businesses (they use the WhatsApp business functionality, there is no SMS thing backing it).
Most people don't even use text anymore. China has its own apps, but everyone else uses WhatsApp exclusively at this point.
Several times in Brazil a judge has punished WhatsApp by blocking it nationwide, and every time that happened, Telegram gained hundreds of thousands of new users.
Really? Please cite a source for the claim that "most people don't even use text anymore", because I have never once in my life been asked about WhatsApp, but I have implemented a few dozen SMS integrations after all the annoying rule changes where you have to ask "mother may I" and submit a stool sample to send an SMS message from anything other than a phone.
It all depends on whether MAGA survives as a single community. One of the few things MAGA understands correctly is that AI is a job-killer.
Trump going all out to rescue OpenAI or Anthropic doesn't feel likely. Who actually needs it, as a dependency? Who can't live without it? Why bail out entities you can afford to let go to the wall (and maybe then corruptly buy out in a fire sale)?
Similarly, can you actually see him agreeing to bail out Microsoft without taking an absurd stake in the business? MAGA won't like it. But MS could be broken up and sold; every single piece of that business has potential buyers.
Nvidia, now that I can see. Because Trump is surrounded by crypto grifters and is dependent on crypto for his wealth. GPUs are at least real solid products and Nvidia still, I think, make the ones the crypto guys want.
Google, you can see, are getting themselves ready to not be bailed out.
> One of the few things MAGA understands correctly is that AI is a job-killer
Trump (and by extension MAGA) has the worst job growth of any President in the past 50 years. I don't think that's their brand at all. They put a bunch of concessions to AI companies in the Big Beautiful Bill, and Trump is not running again. He would completely bail them out, and MAGA will believe whatever he says, and congress will follow whatever wind is blowing.
The bubble may well burst when the corporations are denied the enormous quantity of energy that they claim they need "to innovate". From TFA:
"""
Mr Pichai said action was needed, including in the UK, to develop new sources of energy and scale up energy infrastructure.
"You don't want to constrain an economy based on energy, and I think that will have consequences," he said.
He also acknowledged that the intensive energy needs of its expanding AI venture meant there was slippage on the company's climate targets, but insisted Alphabet still had a target of achieving net zero by 2030 by investing in new energy technologies.
"""
"Slippage" in this context probably means, "We no longer care about climate change but we don't feel that mere citizens are ready to hear us say it."
They've got enough slush money to keep this going for a couple of years.
I am shocked that they know it is a bubble and are doing nothing to amortize it. Which means they expect the government to step in and save their butts.
I've been trying to grok this idea of when a bubble pops. In theory, if everyone knows it's a bubble, that should cause it to pop, because people should be making their way to the exits, playing musical chairs to get their money out early.
But as I try to build a narrative around bubbles and bursts, one thing I realize is that for a bubble to burst, people essentially have to want it to burst (or, put the other way, they have to stop wanting to keep it going).
Bernie Madoff got caught because he couldn't keep paying dividends in his Ponzi scheme, and people started withdrawing money. But in theory, even if everyone knew, if no one withdrew their money (or told the SEC) and he was able to use current deposits to pay dividends, it could have gone on for years more. The Ponzi scheme didn't _have_ to end; the bubble didn't have to pop.
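That intuition can be put in back-of-envelope terms: the scheme stays solvent exactly as long as reserves plus inflows cover the promised payouts. A toy sketch (all numbers invented for illustration):

```python
# Toy model of the point above: a Ponzi-ish scheme survives as long as
# deposits plus reserves cover the dividends it has promised.

def months_survived(reserves: float, monthly_deposits: float,
                    monthly_payouts: float, horizon: int = 600) -> int:
    """Return the first month the scheme fails to pay, or the horizon."""
    for month in range(1, horizon + 1):
        reserves += monthly_deposits - monthly_payouts
        if reserves < 0:
            return month  # first month the dividends can't be paid
    return horizon  # still paying at the end of the horizon

# Inflows match payouts: the scheme can limp along for the entire horizon.
assert months_survived(reserves=100, monthly_deposits=10, monthly_payouts=10) == 600
# Inflows stop (everyone heads for the exits): collapse in under a year.
assert months_survived(reserves=100, monthly_deposits=0, monthly_payouts=10) == 11
```

Collapse is mechanical only once outflows exceed inflows; as long as participants keep the money coming in, nothing forces the end.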
So I've been wondering: if everyone knows AI is a bubble, what has to happen for it to collapse? If a price is what people are willing to pay, then for Tesla to collapse, people have to decide they no longer want to pay $400 for Tesla shares. If they keep paying $400 for Tesla shares, then it will continue to be worth $400.
So I've been trying to think, in the most simple terms, what would have to happen to have the AI bubble pop, and basically, as long as people perceive AI companies to have the biggest returns, and they don't want to move their money to another place with higher returns (similar to TSLA bulls) then the bubble won't pop.
And I guess that can keep happening as long as the economy keeps growing. And if circular deals are causing the stock market to keep rising, can they just go on like this forever?
The downside of course being the starvation of investment in other parts of the economy, and giving up what may be better gains. It's game theory: as long as no one decides to stop playing the game and, say, pull all their money out and put it into, I dunno, bonds or GME, the music keeps playing?
You're overcomplicating something that is very simple. The stock market reflects people's sentiments: greed, excitement, FOMO, despair...
A bubble doesn’t need a grand catalyst to collapse. It only needs prices to slip below the level where investors collectively decide the downside risk outweighs the upside hope. Once that threshold is crossed, selling accelerates, confidence unravels, and the fall feeds on itself.
It's important to keep in mind the difference between the stock market and the economy.
Economically, AI is a bubble, and lots of startups whose current business model is "UI in front of the OpenAI API" are likely doomed. That's just economic reality - you can't run on investor money forever. Eventually you need actual revenue, and many of these companies aren't generating very much of it.
That being said, most of these companies aren't publicly traded right now, and their demise would currently be unlikely to significantly affect the stock market. Conversely, the publicly traded companies who are currently investing a lot in AI (Google, Apple, Microsoft, etc) aren't dependent on AI, and certainly wouldn't go out of business over it.
The problem with the dotcom bubble was that there were a lot of publicly traded companies that went bankrupt. This wiped out trillions of dollars in value from regular investors. Doesn't matter how much you may irrationally want a bubble to continue - you simply can't stay invested in a company that doesn't exist anymore.
On the other hand, the AI bubble bursting is probably going to cost private equity a lot of money, but not so much regular investors unless/until AI startups (startups dependent on AI for their core business model) start to go public in large numbers.
I think the targeted-ad revenue all of the LLM providers will get by using everyone's regular chat data plus credit card datasets for training is going to be insanely good.
Plus the information they can provide to the state on user sentiment is also going to be greatly valued.
Didn't Perplexity make only something like $27K from ad revenue? They're going to have to actively compete with Google and Facebook dollars as Google and Facebook develop competing products.
Eventually the money to invest will run out. If the companies' earnings don't catch up, we'll reach a situation where stock prices peak, future expected returns are limited, and then it'll pop when there's a better opportunity for the money.
Imagine if interest rates go up and you can get 5% from a savings account. One big player pulls out cash triggering a minor drop in AI stocks. Panic sells happen trying to not be the last one out of the door, margin calls etc.
You're assuming cash will never stop flowing in driving up prices. It will. The only way it goes on forever is if the companies end up being wildly profitable
This one? When China commits to subsidising and releasing cutting-edge open-source models. What BYD did to Tesla's FSD fee dreams, Beijing could do to American AI's export ambitions.
I'm not. A few podcasts I've listened to recently (mostly Odd Lots) explored how a pop is often preferable to a protracted downturn because it weeds out the losers quickly and allows the economy to begin the recovery aspect sooner. A protracted downturn risks poorly managed assets limping along for years instead of having capital reallocated to better investments.
Which is kind of sad to think about.
The US could have used all that money to actually invest in its infrastructure, schools, hospitals and the general wellbeing of its workforce, to make the economy thrive.
It's not "the US" who's investing the money. This is the same problem people run into when they say, "we should just put money into more trains and buses rather than self driving cars".
Private actors are the ones who are investing into AI, and there's no real way for them to invest into public infrastructure, or to eventually profit from it, the way investors reasonably expect to do when they put up their money for something.
It's the government who can choose to invest into infrastructure, and it's us voters who can choose to vote for politicians who will make that choice. But we haven't done that. So many people want to complain endlessly about government and corporations -- not entirely without merit, of course -- but then are quick to let voters off the hook.
I think the economic background has changed, in 2008 it was after a big run up in wealth so the reversion wasn’t so bad, there was some fat to cut. Since then people have been ground down to the breaking point, another 2008 wipeout will cut into the bone. I do think this time it could be different.
Sort of? My thoughts are that there's something of an AI arms race, and the US doesn't want to lose that race to another country... so if the AI bubble pops too fiercely, there will likely be some form of intervention. And any time the government intervenes, all bets are off. Who knows what they will do and what the impact will be.
I can see them intervening to preserve AI R&D of some sort, but many of the current companies are running consumer oriented products. Why care if some AI art generation website goes bust?
20 odd years ago when I went to do a CS degree, I discovered that the university had these beautiful buildings called “libraries” and they were filled with all sorts of amazing books! I ended up splitting my time roughly evenly between learning C, SQL and Java and devouring every 19th century English literature book I could get my hands on.
I can’t claim to write as well, but weirdos like us do exist.
> The bans are being motivated largely by health professionals ringing all sort of alarm bells because mental health indicators paint a pretty dire picture. These are based on actual statistics and have been confirmed many times.
Do the stats prove that cell phones are the cause of the dire mental health indicators? Or at least that there is a correlation?