We run an OpenClaw agent for our entire team — he lives in a group chat (although we have DMs too).
- Runs our standups, checks in with everybody EOD on blockers
- Already knows what we shipped on GitHub and Linear, so it can focus on the work that's not tracked and summarize it in the morning for everyone
- Helps with debugging customer issues
- Keeps up with Twitter and competitors and lets us know if they launch new features
Beyond that, I'm honestly blown away by the social aspect of it. I was pretty skeptical at first, but having an AI teammate is actually _fun_. There, I said it. Everybody on the team said they'd be sad if we took it away.
I'll do a write-up on our setup sometime this week; I hope others will find our approach to security posture and multi-tenant usage insightful.
In your experience, did you (or anyone in your team/company) feel that some non-technical people weren't pulling their weight, for example project managers or directors who didn't seem to bring enough value? And if so, did you find that using OpenClaw reduced the need for those positions?
Now if you have multiple teams each doing this and then have all those agents talk to each other and then report back to your team, you get "AI Hyperchat"[0], which may actually be a really good idea that has the potential to seriously improve intra-organizational communications (disruptively so). See also [1] for a VentureBeat article about the idea.
It all depends on the model and how much you use it, of course. We're running Opus 4.6, and on a light day it spends a dollar or two. That's just a few simple operations like "create a ticket for ..." and its regular heartbeat checks. The heaviest day I've seen is $110, and on that day we were basically talking to it and having it implement features all day long.
Out of curiosity, is it nonsense because you're a Scrum master feeling threatened, or nonsense because automating rituals like those in Scrum makes them less about communication and more about just performing the ritual itself?
It's nonsense because finding a bot "funny", and the team saying they'd be sad without it, is just nonsense. It's totally nonsense if it's not just marketing.
In particular: "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
I love the vision, but it glosses over the most important difference between Infrastructure and Companies.
Infrastructure as code is prescriptive. The code is the source of truth, and the world gets created from it.
Company as code is descriptive. It is constantly catching up to meat-space, rather than creating it. Changes are gradual instead of instant roll-outs. Patterns change over time and only get documented later.
Making the company code prescriptive would require an insane amount of discipline that might be more stifling and restrictive than it is freeing.
Nailed it. I think about prescriptivism / descriptivism in terms of these archetypes:
- "Rule followers" think an org will be better off if everyone agreed on a set of rules to follow. At the boundaries, they will think about establishing new rules to clarify and codify new things. Charitably, I'd add that they might remove rules that are obsolete, but we all know this is not sufficiently true in practice: governments, for example, are much more likely to add new rules than to remove old ones.
- "Rule breakers" think that most rules are suggestions. At the boundaries, they will see rules other people are needlessly bound by, and translate those into strategic openings for whatever game they're playing. For better and for worse, start-up ecosystems are full of people like this.
Rule followers want to be told what's allowed, while rule breakers try to figure out what _should_ be allowed from first principles. At the extreme, they tug the world towards authoritarianism or towards anarchy.
This is obviously a spectrum, so everyone has both of these archetypes in them, albeit in different proportions (e.g. most people pay taxes, but almost no one drives the speed limit).
This goes back all the way to the beginnings of psychology. William James, who is considered something of the godfather of psychology, argued that all feelings are bodily feelings; i.e., emotions are caused by bodily sensations. Your heart is not beating BECAUSE of anxiety; rather, your beating heart IS anxiety. You don’t tremble because you’re afraid; you’re “afraid” (a complex emotion mediated by stories we have) because you tremble.
It’s a theory psychologists and philosophers still argue about.
So if my heart rate stays the same even though I feel anxious, am I not anxious? I think I'm anxious and that I don't feel good, but I don't really notice any physical symptoms.
E.g. I'm worried about upcoming deadlines, or whether I'm going to make it. Maybe it's not direct fight-or-flight anxiety, but what is it then, just stress?
They're all great, but the 2012 "The Uncensored Picture of Dorian Gray" is the closest to the original script before the editor cut out things that he deemed... checks notes... "too gay".
It restores parts that were cut, and essentially banishes chapter 3 and some other digressions on art history that Wilde added as a literary beard to the footnotes (still there to read, but set in context).
It's not a huge difference honestly, but I believe Oscar Wilde would want you to read that version.
I literally just hired Ben Horowitz last month, but I must assume that mine is the better systems and integrations engineer, so I consider myself to have gotten the better deal.
It might be worth noting that humans also struggle with keeping up a coherent world model over time.
Luckily, we don’t have to; we externalize a lot of our representations. When shopping together with a friend we might put our stuff on one side of the shopping cart and our friends’ on the other. There’s a reason we don’t just play chess in our heads but use a chess board. We use notebooks to write things down, etc.
Some reasoning models can do similar things (keep a persistent notebook that gets fed back into the context window on every pass), but I expect we need a few more dirty representationalist tricks to get there.
In other words, I don’t think it’s an LLM’s job to have a world model; an LLM is just one part of an AI system.
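The "persistent notebook" idea above can be sketched in a few lines. This is a minimal illustration, not any particular product's API: `fake_llm` is a hypothetical stand-in for a real model call, and all names are made up. The point is only the loop shape: state lives outside the model and is re-injected into the prompt on every pass.

```python
def build_prompt(task: str, notebook: list[str]) -> str:
    """Assemble the prompt: the task plus the externalized state."""
    notes = "\n".join(f"- {line}" for line in notebook) or "(none yet)"
    return f"Notebook so far:\n{notes}\n\nTask: {task}\n"

def fake_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call.

    Here it just derives a note from how many notebook lines it saw,
    to show that each pass builds on the re-injected state.
    """
    seen = prompt.count("\n- ")
    return f"observation {seen}"

def run(task: str, passes: int = 3) -> list[str]:
    notebook: list[str] = []  # the external "world model"
    for _ in range(passes):
        note = fake_llm(build_prompt(task, notebook))
        notebook.append(note)  # persist state between passes
    return notebook

print(run("summarize the repo"))
```

The same shape works whether the notebook is a string in memory, a file on disk, or a scratchpad document the agent edits; what matters is that it survives context-window turnover.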
I think it's funny that it's very similar to ensō in many ways, but also the complete opposite: ensō is calm, mindful, soothing. MDWa is hectic, terrifying, sadistic. Funny how a tiny difference produces products that look almost the same, and feel completely different.
Huge props to rafal for creating ensō; personally, I really love it.