What’s the scaling bottleneck? If you made a local-first, P2P version of Figma, what would break first? For a company of like 50 people, I doubt you’d have more than 100GB of data, so it should fit on everyone’s computer. The P2P syncing part seems solvable, even if you need a centralized handshake server somewhere. And from the user’s perspective I don’t see why the UX couldn’t be identical, so it’s all the same to them.
It seems like the real bottleneck is something else.
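To make the "syncing seems solvable" claim concrete, here’s a rough sketch, in TypeScript with made-up names (Peer, DocEntry, etc. — not Figma’s actual data model), of a last-writer-wins merge that peers could run after exchanging state through that handshake server:

```ts
// Hypothetical sketch: each peer keeps a last-writer-wins map keyed by
// object ID, and syncing two peers is just exchanging entries and
// keeping the newer write. Illustrative only.

type DocEntry = {
  value: string;     // serialized object state (e.g. a shape's properties)
  timestamp: number; // Lamport-style logical clock of the last write
  writerId: string;  // tie-breaker when timestamps collide
};

class Peer {
  private clock = 0;
  private doc = new Map<string, DocEntry>();

  constructor(private readonly id: string) {}

  // Local edit: bump the logical clock and overwrite the entry.
  set(key: string, value: string): void {
    this.clock += 1;
    this.doc.set(key, { value, timestamp: this.clock, writerId: this.id });
  }

  // Merge a remote peer's state: keep whichever write is "newer",
  // breaking ties deterministically by writer ID so all peers converge.
  merge(remote: ReadonlyMap<string, DocEntry>): void {
    for (const [key, theirs] of remote) {
      const ours = this.doc.get(key);
      const theirsWins =
        !ours ||
        theirs.timestamp > ours.timestamp ||
        (theirs.timestamp === ours.timestamp && theirs.writerId > ours.writerId);
      if (theirsWins) this.doc.set(key, theirs);
      // Advance our clock past anything we've seen (Lamport clock rule).
      this.clock = Math.max(this.clock, theirs.timestamp);
    }
  }

  snapshot(): ReadonlyMap<string, DocEntry> {
    return this.doc;
  }
}

// Two peers edit offline, then sync in both directions and converge.
const a = new Peer("alice");
const b = new Peer("bob");
a.set("rect-1", "{fill: red}");
b.set("rect-1", "{fill: blue}");
b.set("circle-2", "{r: 10}");
a.merge(b.snapshot());
b.merge(a.snapshot());
console.log(
  a.snapshot().get("rect-1")?.value === b.snapshot().get("rect-1")?.value
); // true
```

Real tools use richer CRDTs than a LWW map (so concurrent edits to different properties of the same object both survive), but the convergence idea is the same.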
Unless it becomes illegal in more places, I don't think they'll care. In my experience, the percentage of free riders in Brazil is higher (or, better said, higher due to circumstances).
I can't stop thinking about what happened when CASE tools, WYSIWYG, UML, Model Driven Architecture/Development, etc. were pushed onto devs. I know it's a different phenomenon (that was a graphical, visual push; this one keeps the text).
We've had it in code as well: the factory pattern, workflow engines, SOA, low-code, cloud computing, serverless, a billion different templating engines for JS, JS the right way, jQuery, not jQuery, SPAs, NoSQL, GraphQL, microservices, event sourcing, and on and on.
Every couple of years there's something that, if you aren't using it, you're apparently doing it wrong.
I think maintaining this AI code is going to turn out to be a nightmare and everyone will tone it down, not letting agents run off on their own, but we'll see.