A2A is for communication between agents.
MCP is how an agent communicates with its tools.
An important aspect of A2A is that it has a notion of tasks, task readiness, and so on. E.g. you can give it a task, expect it to complete in a few days, and get notified via a webhook or by polling.
For end users A2A will surely cause a lot of confusion, and it can replace a lot of current MCP usage.
I've been building Probe https://probeai.dev/ for a while now, and this docs-mcp project is a showcase of its capabilities: giving you local semantic search over any codebase or docs without indexing.
I do maintain big OSS projects and try to contribute as well.
However, the contribution experience can be very bad if you follow the path of picking the most famous projects. Good luck contributing to Node, Rust, Shadcn, and the like: they do not need your contribution, their PR queue is overloaded and they can't handle it. Plus you need to get into their inner circles first, which is quite a complex process.
The world is much bigger. So much help is needed on smaller but still active projects.
Just recently I raised 3 small PRs, and they were reviewed the same day!
As my tribute to the OSS community, I have built https://helpwanted.dev/, a website which, in a nutshell, shows the latest "help wanted" and "good first issue" issues from all over GitHub in the last 24 hours.
You would be amazed how many cool projects out there are looking for help!
One of the cases where AI is not needed. There is a very well-working algorithm for extracting content from pages; one implementation: https://github.com/buriy/python-readability
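The core idea behind these extractors is a simple heuristic, no AI involved: score each block of the page by how much text it contains versus how much of that text is links, and keep the text-heavy, link-light blocks. Here is a minimal stdlib-only sketch of that heuristic (not the actual readability algorithm, which adds many more scoring rules):

```python
# Toy density-based boilerplate removal: keep blocks with lots of text
# and low link density. Thresholds are arbitrary illustrative values.
from html.parser import HTMLParser

class BlockExtractor(HTMLParser):
    BLOCK_TAGS = {"p", "div", "article", "section", "li"}

    def __init__(self):
        super().__init__()
        self.blocks = []       # (text, chars_inside_links) per top-level block
        self.depth = 0         # nesting level inside block tags
        self.in_link = False
        self.text = []
        self.link_chars = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.BLOCK_TAGS:
            self.depth += 1
        elif tag == "a":
            self.in_link = True

    def handle_endtag(self, tag):
        if tag in self.BLOCK_TAGS and self.depth:
            self.depth -= 1
            if self.depth == 0:  # a top-level block just closed
                self.blocks.append(("".join(self.text).strip(), self.link_chars))
                self.text, self.link_chars = [], 0
        elif tag == "a":
            self.in_link = False

    def handle_data(self, data):
        if self.depth:
            self.text.append(data)
            if self.in_link:
                self.link_chars += len(data)

def extract_content(html, min_len=50, max_link_density=0.3):
    parser = BlockExtractor()
    parser.feed(html)
    return [text for text, links in parser.blocks
            if len(text) >= min_len and links / max(len(text), 1) <= max_link_density]
```

A nav bar full of links gets a high link density and is dropped; a paragraph of article text survives. Tools like readability and jusText refine this with tag weights, class-name hints, and paragraph-level classification.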
Some years ago I compared those boilerplate removal tools, and I remember that jusText gave me the best results out of the box (I tried readability and a few other libraries too). I wonder what the state of the art is today?
Feel free to answer, then: how do you do the same things this does with GPT-3/4, but without AI?
Edit -
This is an excellent use of it: free-text human input capable of doing things like extracting summaries. It does not seem to be used at all for the basic task of extracting content, only for post-filtering.
I think “copy from a PDF” could be improved with AI. It’s been 30 years and I still get new lines in the middle of sentences when I try to copy from one.
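The easy part of that fix doesn't even need AI: re-join hard-wrapped lines, keeping a break only where the previous line ends in sentence punctuation. A naive sketch (hyphenated words, headings, and columns are where it gets genuinely hard, and where AI could help):

```python
import re

def unwrap(text):
    """Join PDF-wrapped lines; keep breaks only after ., !, ?, or :"""
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    out = []
    for line in lines:
        if out and not re.search(r"[.!?:]$", out[-1]):
            out[-1] = out[-1] + " " + line  # continuation of a sentence
        else:
            out.append(line)                # genuine new line
    return "\n".join(out)

print(unwrap("This sentence was\nwrapped by the PDF.\nNext sentence."))
# This sentence was wrapped by the PDF.
# Next sentence.
```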
Meh, it’s just the “how does it work?” question. How content extractors work is interesting and not obvious nor trivial.
And even when you see how readability parser works, AI handles most of the edge cases that content extractors fail on, so they are genuinely superseded by LLMs.
Macros? Any situation where code edits other code?
Sure, I could not write a regex engine, but the language itself can be fine if you keep it to straightforward stuff. Unlike the famous e-mail parsing regex.
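For contrast, here is the kind of "straightforward stuff" that stays readable: a pragmatic email sanity check (deliberately not RFC 5322-complete; the full e-mail grammar is the famously unreadable one):

```python
import re

# "Something, an @, something, a dot, something" -- good enough as a
# sanity check, and still legible, unlike the full RFC 5322 regex.
SIMPLE_EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

print(bool(SIMPLE_EMAIL.match("user@example.com")))  # True
print(bool(SIMPLE_EMAIL.match("not an email")))      # False
```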
I have had challenges with readability. The output is good for blogs, but when we try it on other types of content it misses important details, even when the page is quite text-heavy, just like a blog.
What I really appreciate about Rails is the strong vision, and that it hasn't become another bloated framework for building "enterprise grade" applications. This is a 20-year-old framework which is not afraid to change radically with time, and it is still seen as punk compared to the rest.
It is quite a hard problem to solve, because you have to deal with state differences between test and production environments. Love your approach of mocking dependencies and leveraging OpenTelemetry; it can potentially solve some of the state issues. But it still requires modifying user code. I wonder if it can be done purely using OpenTelemetry (e.g. you depend on a typical OTel setup), and then read the data directly from the OTel DB.
Thanks Leonid! Your vote of confidence means a lot.
OTel for Go requires user code changes. Languages that allow monkey-patching (Java, JS, Python, etc.) can be instrumented without them.
> I wonder if it can be done purely using OpenTelemetry (e.g. you depend on a typical OTel setup), and then read the data directly from the OTel DB.
OTel doesn't work out of the box: it usually doesn't collect the request or response for network or DB calls. 90% of my time is spent extending the individual agents' code so that they collect the additional required information and can perform the "replay".
I hope so! But I also hope that I will be able to monetise some of this movement. GoReplay is dual-licensed under AGPL and a commercial license. I also sell special appliance licenses.
If anyone in this thread wants to build a product based on GoReplay technology (capturing network traffic directly, via AWS Traffic Mirroring, or in k8s), send me a message :)
(mcp auth is terrible btw)