I spun up a Debian stable EC2 VM (using an agent + AWS CLI + aws-vault, of course) to host OpenClaw, giving it full root access, and I talk to it on Discord.
It's a little slow sometimes, but it's the first time I've felt like I have an independent agent that can, kind of, handle things.
The only things I did were: 1. Ask it to create a Monero address so I could send it money, and have it notify me whenever money is sent to that address. It spun up its own monerod daemon, which was really heavy, and it ran out of space. So I had to get it to use the Monero wallet instead, but had to manually intervene to kill the monerod process and restart OpenClaw. In the end it worked and still works.
2. I simply asked it "@ me the silver price every day around 8am ET" and it just figured out how to do it and schedule it. To my understanding it has its own cron functionality using a JSON file.
3. Write and host some Python scripts I can ping externally to send me a notification.
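The JSON-backed scheduling in item 2 could look something like this minimal sketch. To be clear, the field names and file format here are my guesses for illustration, not OpenClaw's actual schema:

```python
import json
from datetime import datetime

# Hypothetical job file; the schema is invented and is not
# OpenClaw's actual format.
JOBS_JSON = """
[
  {"name": "silver-price", "hour": 8, "minute": 0,
   "action": "post the current silver price to Discord"}
]
"""

def due_jobs(jobs, now):
    """Return jobs whose scheduled hour:minute matches the current time."""
    return [j for j in jobs if j["hour"] == now.hour and j["minute"] == now.minute]

jobs = json.loads(JOBS_JSON)
# A poller would call this once a minute; here we just check 8:00 ET.
print([j["name"] for j in due_jobs(jobs, datetime(2026, 1, 30, 8, 0))])
```

A loop that sleeps until the next minute boundary and runs any due job is all the "cron" such an agent really needs.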
I've had it do other misc stuff, but ChatGPT is almost always better for queries, and coding agents + Zed are much better for coding. But with a cheap enough VM and OpenRouter plus glm 4.7 or flash, it can do some quirky fun stuff. I see the advantage as mainly having control of a system where it can have long-term state (like files, processes, etc.) and manage context itself. It is more like glue, and its full mastery and control of a Linux system gives it a lot of flexibility.
Think of it more as agent+OS, which you aren't getting with raw Claude or ChatGPT.
I've done nothing that interesting with it, it's absolutely a security nightmare, but it's really fun!
Hate to be the guy in the comments complaining about the CSS, but the sides of the text of this article are cut off. It looks like I'm zoomed in, and there's no way I can see the first few columns of text without going to Reader view. I'm on a modern iPhone using Safari, with accessibility font settings larger than usual.
> Autoscaling is configured via CloudWatch alarms on CPU usage:
> Scale-out policy adds workers when CPU > 30%.
> Scale-in policy removes idle workers when CPU < 20%.
Does this handle the case where there are longer-running activities that have low CPU usage? Couldn't these be canceled during scale-in?
Temporal would retry them, but it would make some workflow runs take longer, which could be annoying for some user-interactive workflows.
Otherwise, I've seen setups that hit the metrics endpoint to query things like `worker_task_slots_available` to decide when to scale up, or query pending activities, pending workflows, etc. per worker to decide when to scale down.
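A sketch of what that metrics-endpoint check could look like, assuming a Prometheus-style text scrape; the sample scrape below is invented, and the exact metric names and labels depend on your SDK and metrics configuration:

```python
# Invented sample of a Prometheus-format scrape from a Temporal worker.
SAMPLE_SCRAPE = """\
# HELP worker_task_slots_available currently free executor slots
worker_task_slots_available{worker_type="ActivityWorker"} 0
worker_task_slots_available{worker_type="WorkflowWorker"} 5
"""

def slots_available(scrape, worker_type):
    """Pull one gauge value out of a text-format scrape."""
    for line in scrape.splitlines():
        if line.startswith("worker_task_slots_available") and \
                f'worker_type="{worker_type}"' in line:
            return float(line.rsplit(" ", 1)[1])
    return None

def should_scale_out(scrape):
    # No free activity slots means the worker pool is saturated,
    # regardless of how low CPU usage looks.
    return slots_available(scrape, "ActivityWorker") == 0

print(should_scale_out(SAMPLE_SCRAPE))  # True: activity slots are exhausted
```

Scaling on slot exhaustion rather than CPU avoids the problem above: a worker busy with low-CPU, long-running activities still reports zero free slots.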
They can be cancelled if CPU drops below the scale-in threshold.
In my case the activities were CPU-heavy, batch-style, and not client-facing, so I preferred occasional retries and slightly longer runtimes over blowing up the AWS bill. For that workload, CPU-based autoscaling was perfectly fine.
I originally ran this setup on Temporal Cloud, and pulling detailed worker/queue metrics directly from Cloud can be tricky: you need to expose custom worker metrics yourself, then pipe them into CloudWatch. If you host Temporal yourself, it's easier :)
The current approach of the maintainers terrifies me -- de facto standards should be respected. Even if something is invalid like `description-file`, if it is present in 12k repos it should raise a warning and not break anything.
In the rationale for this that I can find [1], a maintainer says the following:
> I'm inclined to say we should do it, even though it will cause some disruption.
They also say an alternative is to "accept the status quo", which is exactly what they should be doing. I can't find maintainers giving a compelling reason not to support this status quo of `long-description` as an alias for `long_description` besides "simplifying code." Code simplification should never take precedence over massive breakage of compatibility.
It seems that the person who did this acted unilaterally, with no code review, and ignored (then disabled) broken tests while landing this (https://github.com/pypa/setuptools/pull/4909). One should not be too harsh - he seems to be a student. One perhaps should be more harsh on the commercial entity sponsoring the project, though - setuptools is sponsored by Sonar via "Tidelift". According to https://tidelift.com/subscription/pkg/pypi-setuptools:
> The maintainers of setuptools get paid by Tidelift to
> implement industry-leading secure software development
> practices and document the practices they follow.
Well, that really doesn't seem so in this case now, does it?
I'm usually of the opinion that stuff should just be removed when it needs to be gone, but this is a really inconsequential change to the setuptools code compared to how many problems it caused.
"Needs to be gone" is the operative phrase here. An alias of `long-description` to `long_description` has no specific technical need to be removed.
The conditions that lead to having two tokens pointing to the same functionality should be prevented, but in this case it is a "de facto" alias which no reasonable amount of labor could fix.
From Gandhi's commentary on chapter 1 of the Bhagavad Gita:
> ... evil cannot by itself flourish in this world. It can do so only if it is allied with some good. This was the principle underlying noncooperation—that the evil system which the [British colonial] Government represents, and which has endured only because of the support it receives from good people, cannot survive if that support is withdrawn.
The entire photo is the finish line. Look at the color of the ground -- it's cream, the color of the lines on the track, while the track is blue. Each vertical line of the photo is taken in the same position on that finish line.
The first "line photo" is the right-most column of pixels in that photo. The next one is the second-from-the-right column, etc. This way the winner can be determined as the first line photo that has a contestant's torso in it.
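As a toy illustration of how those per-instant columns assemble into the final image (the positions and timing below are invented; real photo-finish cameras scan thousands of lines per second):

```python
def composite(scans):
    """scans: successive 1-pixel-wide captures of the finish line, each a
    list of row values, earliest first. The earliest scan becomes the
    right-most column, so time runs right-to-left across the image."""
    rows = len(scans[0])
    ordered = list(reversed(scans))  # newest scan goes on the left
    return [[col[r] for col in ordered] for r in range(rows)]

# Three instants at the finish line; "T" marks a torso crossing the line.
scans = [[".", "."],   # t=0: nobody at the line yet
         ["T", "."],   # t=1: first runner's torso hits the line
         [".", "T"]]   # t=2: second runner's torso arrives
for row in composite(scans):
    print("".join(row))
```

The runner whose torso shows up furthest to the right crossed the line first, which is exactly the reading rule described above.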
Also, every runner in this photo is shown at the finish line, even though they appear in the photo to be at different places. That's why all the runners are leaning forward: they are all crossing the finish line.