When I hear people talking about how insecure OpenClaw is, I remember how insecure the internet was in the early days. Sometimes it's about doing the right thing badly and fixing the bad things after.
Big Tech can't release software this dangerous and then figure out how to make it secure. For them it would be an absolute disaster and could ruin them.
What OpenClaw did was show us the future, give us a taste of what it would be like, and it had the balls to do it badly.
Technology is often pushed forwards by ostensibly bad ideas (like telnet) that carve a path through the jungle and let other people create roads after.
I don't get the hate towards OpenClaw. If it was a consumer product I would, but for hackers to play around with and see what is possible it's an amazing (and ridiculously simple) idea. Much like http was.
If you connected to your bank account via telnet in the 1980s or plain http in the 90s, or stored your secrets in 'crypt', well, you deserved what you got ;-) But that's how many great things get started: badly. We see the flaws, fix them, and we get the safe version.
Yes, also for semantic indexes; I use one for person/role/org matches, so that CEO == chief executive ~= managing director. Good when you have grey data and multiple lookup data sources that use different terms.
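To make that concrete, a minimal sketch of that kind of role lookup in Python. It assumes sentence-transformers is available; the model name, canonical list, alias table and threshold are all illustrative, not the exact setup anyone here runs:

```python
# Minimal sketch: normalise job titles from different sources to one canonical
# role, using a small alias table first and an embedding lookup as the semantic
# fallback. Model name, roles, aliases and threshold are illustrative only.
from sentence_transformers import SentenceTransformer, util

CANONICAL_ROLES = ["chief executive officer", "chief financial officer", "managing director"]
ALIASES = {"ceo": "chief executive officer", "cfo": "chief financial officer"}

model = SentenceTransformer("all-MiniLM-L6-v2")
role_index = model.encode(CANONICAL_ROLES, convert_to_tensor=True)

def canonical_role(raw: str, threshold: float = 0.6) -> str | None:
    title = raw.strip().lower()
    if title in ALIASES:                      # exact alias hit, e.g. "CEO"
        return ALIASES[title]
    query = model.encode(title, convert_to_tensor=True)
    scores = util.cos_sim(query, role_index)[0]
    best = int(scores.argmax())
    return CANONICAL_ROLES[best] if float(scores[best]) >= threshold else None

print(canonical_role("CEO"))            # -> "chief executive officer" via the alias table
print(canonical_role("Managing Dir."))  # usually lands on "managing director", model-dependent
```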
> Trouble is some benchmarks only measure horse power.
IMO it's the other way around. Benchmarks only measure applied horse power on a set plane, with no friction and your elephant is a point sphere. Goog's models have always punched above what benchmarks said, in real world use @ high context. They don't focus on "agentic this" or "specialised that", but the raw models, with good guidance, are workhorses. I don't know any other model where you can throw lots of docs at it and get proper context following and data extraction from wherever it's at to where you'd need it.
We will see at the end of April, right? It's more of a guess than a strongly held conviction, but I see models improving rapidly at long horizon tasks so I think it's possible. I think a benchmark which could survive a few months (maybe) would be one that genuinely tested long time-frame continual learning/test-time learning/test-time posttraining (idk honestly the differences b/t these).
But I'm not sure how to construct such benchmarks. I'm thinking of tasks like learning a language/becoming a master at chess from scratch/becoming a skilled artist, but where the task is novel enough for the actor to not be anywhere close to proficient at the beginning. An example which could be of interest is: here is a robot you control, you can take actions, see results... become proficient at table tennis. Maybe another would be: here is a new video game, obtain the best possible 0% speedrun.
It's a useless meaningless benchmark though, it just got a catchy name, as in, if the models solve this it means they have "AGI", which is clearly rubbish.
Arc-AGI score isn't correlated with anything useful.
It's correlated with the ability to solve logic puzzles.
It's also interesting because it's very very hard for base LLMs, even if you try to "cheat" by training on millions of ARC-like problems. Reasoning LLMs show genuine improvement on this type of problem.
ARC-AGI 2 is an IQ test. IQ tests have been shown over and over to have predictive power in humans. People who score well on them tend to be more successful.
IQ tests only work if the participants haven't trained for them. If they do similar tests a few times in a row, scores increase a lot. Current LLMs are hyper-optimized for the particular types of puzzles contained in popular "benchmarks".
>can u make the progm for helps that with what in need for shpping good cheap products that will display them on screen and have me let the best one to get so that i can quickly hav it at home
And get back an automatic coupon code app like the user actually wanted.
It's possibly label noise. But you can't tell from a single number.
You would need to check whether everyone is making mistakes on the same 20% or a different 20%. If it's the same 20%, either those questions are really hard, or they are keyed incorrectly, or they aren't stated with enough context to actually solve the problem.
It happens. The old non-Pro MMLU had a lot of wrong answers. Simple things like MNIST have digits labeled incorrectly or drawn so badly it's not even a digit anymore.
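A rough sketch of that same-20%-or-different-20% check, assuming you have a per-question correctness table for a few models (the layout and numbers here are made up):

```python
# Sketch: given per-question correctness for several models, check whether the
# missed ~20% is the *same* set everywhere (suspect keys / underspecified items)
# or a different set per model (ordinary capability gaps). Data is made up.
import pandas as pd

# rows = questions, columns = models, 1 = answered correctly
results = pd.DataFrame({
    "model_a": [1, 1, 0, 1, 0, 1],
    "model_b": [1, 1, 0, 1, 1, 0],
    "model_c": [1, 0, 0, 1, 1, 1],
})

# Questions every model gets wrong: prime suspects for bad answer keys.
always_missed = results.index[results.sum(axis=1) == 0]
print("missed by all models:", list(always_missed))

# Pairwise co-miss counts: high off-diagonal values mean the models fail on the
# same items, which points at the questions rather than the models.
errors = 1 - results
print(errors.T @ errors)
```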
But 80% sounds far from good enough; that's a 20% error rate, unusable in autonomous tasks. Why stop at 80%? If we aim for AGI, it should 100% any benchmark we give it.
I'm not sure the benchmark is high enough quality that >80% of problems are well-specified & have correct labels tbh. (But I guess this question has been studied for these benchmarks)
The problem is that if the automation breaks at any point, the entire system fails. And programming automations are extremely sensitive to minor errors (e.g. a missing semicolon).
AI does have an interesting feature though: it tends to self-heal, in a way, when given tool access and a feedback loop. The only problem is that self-healing can incorrectly heal errors, and then the final result will be wrong in hard-to-detect ways.
So the more such hidden bugs there are, the more unexpectedly the automations will perform.
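To make the feedback loop concrete, roughly the shape I mean (Python sketch; `run_checks` and `ask_model_to_fix` are stand-ins for whatever tooling and model calls you actually use):

```python
# Sketch of the self-healing loop: run the automation's checks, feed failures
# back to the model, retry a bounded number of times. The catch described
# above: a "fix" can make the checks pass while the output is quietly wrong.
import subprocess

def run_checks() -> subprocess.CompletedProcess:
    # Stand-in for whatever verification the automation has (tests, build, linter).
    return subprocess.run(["pytest", "-q"], capture_output=True, text=True)

def ask_model_to_fix(error_log: str) -> None:
    # Stand-in for the model/tool call that edits files given the failure output.
    raise NotImplementedError("hook up your model and tools here")

def self_heal(max_attempts: int = 3) -> bool:
    for _ in range(max_attempts):
        result = run_checks()
        if result.returncode == 0:
            return True        # checks pass, which is not the same as "correct"
        ask_model_to_fix(result.stdout + result.stderr)
    return False
```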
I still don't trust current AI for any tasks more than data parsing/classification/translation and very strict tool usage.
I don't believe in the safety and reliability of full-assistant/clawdbot usage at this time (it might be good enough by the end of the year, but then SWE-bench should be at 100%).
it's more just a personal want to be able to see what I can do on my own tbh; i don't generally judge other people on that measure
although i do think Steve Jobs didn't make the iPhone /alone/, and that a lot of other people contributed to that. i'd like to be able to name who helps me and not say "gemini". again, it's more of a personal thing lol
So not disagreeing as you say, it is a personal thing!
I honestly find coding with AI no easier than coding directly; it certainly does not feel like AI is doing my work for me. If it was, I wouldn't have anything to do. In reality I spend my time thinking about much higher-level abstractions, but of course this is a very personal thing too.
I myself have never thought of code as being my output, I've always enjoyed solving problems, and solutions have always been my output. It's just that before I had to write the code for the solutions. Now I solve the problems and the AI makes it into code.
I think this is probably the dividing line: some people enjoy working with the tools (code, unix commands, editors), and some people enjoy just solving the problems. Both of course are perfectly valid, but they do create a divide when looking at AI.
Of course when AI starts solving all problems, I will have a very different feeling :-)
If you managed an AI (or rather, an AI system) that wrote a compiler or web browser like Claude Code or Cursor did, would you feel like you did it?
Just a curious question, not trying to be combative or anything.
I myself will go into planning mode and ask it to implement a feature, and ask it to give me tradeoffs between implementation details. Then I might chat with it a bit to further understand the implementation before it writes the plan.
I find it to be very effective, and it gives me a sense of agency in my features.
You switch difficulties, like you do in a game. Play on Hard or Survival mode now. Build greater and more amazing things than you ever did before.
We have never, ever, written what the machine executes, even assembly is an abstraction, even in a hex editor. So we all settle for the level of abstraction we like to work at. When we started (those of our age) most of us were assembly (or BASIC) programmers and over time we either increased our level of abstraction or didn't. If you went from assembly -> C -> Java/Python you moved up levels of abstraction. We're not writing in Python or C now, we are writing in natural language and that is compiled to our programming languages. It's just the compiler is still a bit buggy and opinionated!! And yes for some low level coding you still want to check the assembly language, some things need that level of attention.
I learn more in a day coding with AI than I would in a month without it. It's a wonderful two-way exchange: I suggest directions, it teaches me new libraries or techniques that might solve the problem. I look up those solutions and learn more about my problem space. I feel more like a university student some days than a programmer.
Eventually this will probably be the end of coding and even analytical work. But I think that part is still far off (and possibly further off than we'll still be working for). In the meantime this for me is as exciting as the early days of home computing. It won't be fun forever; the Internet was the coolest thing ever, until it wasn't, but that doesn't mean we can't enjoy the summer while it's summer.
Besides that, there are multiple ways to read this. "Monarchies" could've been a reference to pre-modern monarchies, many of which made it through at least 3 successions. Or it could be a correction to the comment above, saying that the Kims are more of a monarchy than a plain dictatorship.
The British Crown had to concede some rights centuries ago, or there would have been civil wars and probably no more crown. Dear Leaders are the ones that don't have to concede anything, yet.
Not only have there been multiple civil wars, there has also been “no more crown” after the beheading of King Charles I (1649) until the restoration in 1660.
And that I guess is what he'll get to do now.
* OpenClaw is a straw man for AGI *