I hope AI people start doing agentic agents to agent their agents and stop interacting with other humans whatsoever. Will be positive for all involved.
When I was working on something that used headless browser agents a while back, the ability to take a screenshot (or even a recording) was really great for debugging... so I'm not sure about the "no paint". But hey, everything in life is a trade-off.
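For what it's worth, here's the kind of thing I mean, as a minimal sketch assuming Playwright (just an illustrative pick; any driver with screenshots and video recording works):

    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        # record_video_dir makes Playwright save a .webm of the whole session
        context = browser.new_context(record_video_dir="debug-videos/")
        page = context.new_page()
        page.goto("https://example.com")
        page.screenshot(path="debug.png", full_page=True)  # snapshot for post-mortems
        context.close()  # the video file is finalized on close
        browser.close()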
Really depends on what you want to do with the agents. Just yesterday I was looking for something like this for our web access MCP server[0]. The only thing it needs to do is visit a website, get the content (with JS support, since it's safe to assume most pages today rely on JS), and then convert that to e.g. Markdown.
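A rough sketch of that fetch-and-convert step, assuming Playwright plus markdownify (both hypothetical picks, not necessarily what our server actually uses):

    from playwright.sync_api import sync_playwright
    from markdownify import markdownify

    def page_to_markdown(url: str) -> str:
        with sync_playwright() as p:
            browser = p.chromium.launch(headless=True)
            page = browser.new_page()
            # wait until the network is quiet so JS-rendered content is present
            page.goto(url, wait_until="networkidle")
            html = page.content()
            browser.close()
        return markdownify(html)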
I'm not too happy that Chrome is one of the most memory-hungry parts of all the MCP servers we have in use. The only thing that exceeds it in our whole stack is the ClickHouse shard that comes with Langfuse. Especially if you are looking to build a "deep research" feature that may access a few hundred webpages in a short timeframe, a lightweight alternative like Lightpanda can make quite a difference.
Well, these were "normal" crawlers that needed to work as deterministically as possible, not probabilistically (AI); speed was not an issue. And I wanted to be able to debug when something went wrong. So yeah, for me it was crucial to be able to record/screenshot.
So yeah, everything is a trade-off, and we needed a different trade-off: we actually decided not to use headless Chromium, because there are slight differences, so we ended up using full Chrome (not even Chromium; again, slight differences) with Xvfb. It was very, very memory-hungry, but again that was not an issue for us.
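A rough sketch of that setup, using Selenium plus pyvirtualdisplay (which wraps Xvfb) as stand-ins; not our exact code:

    from pyvirtualdisplay import Display
    from selenium import webdriver

    display = Display(visible=0, size=(1920, 1080))  # starts an Xvfb server
    display.start()

    options = webdriver.ChromeOptions()
    options.binary_location = "/usr/bin/google-chrome"  # full Chrome, not Chromium
    driver = webdriver.Chrome(options=options)          # note: no --headless flag
    driver.get("https://example.com")
    driver.save_screenshot("page.png")
    driver.quit()
    display.stop()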
(I used "agent" as in "browser agent", not "AI agent", I should be more precise I guess.)
Yeah, I feel the same. I think even a screenshot of part of the rendered page, or the full page, can be useful even for machines, considering how heavy that HTML can be to parse and how expensive it is in LLM context. Sometimes a (sub)screenshot is just a better kind of compression.
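Something like this is what I have in mind, sketched with Playwright (just an example driver; the selector is made up):

    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com")
        # screenshot of a single element, e.g. the main article body
        page.locator("main").screenshot(path="main.png")
        # or an arbitrary clip region of the full page
        page.screenshot(path="region.png",
                        clip={"x": 0, "y": 0, "width": 800, "height": 600})
        browser.close()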
I've spent some time feeding LLMs with scraped web pages, and I've found that retaining some style information (text size, visibility, decoration, image content) is non-trivial.
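For illustration, the kind of style signals I mean can be pulled out roughly like this (a sketch assuming Playwright; the selector list and fields are made up for the example):

    from playwright.sync_api import sync_playwright

    JS = """
    () => [...document.querySelectorAll('p, h1, h2, h3, li')].map(el => {
      const s = getComputedStyle(el);
      return {
        text: el.innerText.slice(0, 80),
        fontSize: s.fontSize,
        fontWeight: s.fontWeight,
        visible: s.display !== 'none' && s.visibility !== 'hidden',
      };
    })
    """

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com")
        blocks = page.evaluate(JS)  # list of dicts: text plus its style hints
        browser.close()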