Oh, no no. We'll be keeping Microsoft, Google, and Meta in power. They'll be just fine. They won't have to worry about small, nimble startups coming for their lunch.
AI makes a lot of products easy to deliver, and startups can tool it to go after entirely new product areas and domains. Big tech could be disassembled quickly, the way Craigslist gutted newspaper classifieds. We can't have that, so big tech needs a good chokehold via a regulatory moat.
Eh, China will be fine with our AI laws for internal usage; they don't want internal destabilization either. If your AI happens to become bothersome to Xi, or outputs too many Pooh-flavored memes, then expect China to frown on it.
As for AIs used against other countries, China will be all for that, as long as they know their place.
As real as regulatory capture is, alternatively, do we actually want Congress to make laws based on their poor understanding of what AI is and where it's going?
If we get AI laws, they will either be misinformed or designed to entrench the current winners. I actually don't think we need AI laws. If some real problem does emerge in the future that warrants a new law, we can make an informed law to deal with it then.
Some US senators were already giving Meta a hard time about releasing LLaMA to researchers, saying that centralization is "safer" to control and regulate. I couldn't help but think of the "We have no moat" leak, and to suspect that those senators were well-informed about the business side of the emerging AI duopoly.
I don't think AI is different enough from the rest of software, or even from creative works, to warrant dedicated AI experts guiding Congress.
It's best if Congress treats it as a company paying humans to do the same work, where the humans are assumed to be copying the training artifacts into the output.
They learned from the web, and from the way later movers like Google upstaged early big web companies like Yahoo and AltaVista. When you have the resources to do so, go straight for regulatory capture: outlaw competition before it can challenge you, and lock in the first-mover advantage.
AI is the first real fundamental advance since the Internet went public. Looks like they’re trying to head it off and lock it down right away this time.
The Kardashians are a household conversation topic, but that doesn't make them profound.
CRISPR is a fundamental revelation about the machinery of cells, and gives us a biological analog of Unix sed.
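To make the analogy concrete, here's a toy sketch (the sequences below are made up for illustration, not real CRISPR targets): sed performs a targeted find-and-replace on text the way Cas9 performs one on DNA.

```shell
# Toy analogy only -- made-up sequences, not real guide RNAs or target sites.
# A guide "pattern" locates a site in the genome; the edit substitutes it,
# exactly like a sed substitution over a text genome.
genome="ATGGCCTTAAGGCC"
edited=$(printf '%s' "$genome" | sed 's/TTAAGG/TTGAGG/')
echo "$edited"    # ATGGCCTTGAGGCC
```

The analogy is loose, of course: real genome editing has off-target effects and repair-pathway quirks that a deterministic text substitution doesn't.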
To your point, it will be interesting to see whether transformer architectures, or the theory behind them, become widely enough recognized as a fundamental scientific advance to merit an analog (a Fields Medal, no way; a Turing Award, maybe) of the at least two Nobel Prizes awarded for CRISPR-Cas9.
hey, it's honestly nice thinking this out with you.
the point is that headlines are one thing, but they don't evaluate the impact of a technology very well, though they do evaluate hype.
i wonder: is AI getting so much enthusiasm because the ideas are more widely accessible and have been prominent in sci-fi for 50 years (at least), whereas hardcore genetics tools, and the recent discovery of how to perform surgery on DNA as code, are appreciated by a smaller audience?
I mean, look at the sophistication required to appreciate
and pretty clearly that's a hardcore hacker effort realized in biology, and it won 2 Nobel Prizes as assessed not by headline count but by the world of scientists, and that in under 10 years, which is basically amazing.
disclaimer: i come from a foot-in-both-worlds background, so I am mega enthused (like you) about the transformer results, as well as 'computing' in genomics.
i wonder if the two worlds will soon collaborate, by the way.
I think a lot of people genuinely believe that AI can/should be regulated.
People at the companies may or may not want to do this for selfish reasons, but regardless, have been basically forced to "ask" for regulation, or face that regulation being applied against them as an existential threat to the business.
Meta seems to have possibly gotten around this through having their model "leaked", but I am not sure what else can be expected at this point. I am hopeful that community efforts will be able to make progress quickly enough so as to make regulatory capture seem more difficult.
We're stuck in a tough spot: the power structure is mostly clueless when it comes to tech. The other edge of that sword, though, is that the ones capable of informing them have a financial motive to do so.
US companies generally outsource "illegal" research to other countries. There is a reason "biomedical research" occurs in Ukraine and not the US, and a reason a certain lab in China was funded by the US to do virus research. How would this be different? It makes no sense to regulate in just one country.
Why would the US want to push this work out into other countries? I'm a little unclear on what is going on here. What are the underlying motives? Is it about votes or profits?
We've been seeing something of a Cambrian explosion in AI, driven by hardware. Legislation of "AI" will not be enough. There is a chance that the low-key war against general-purpose computing starts to ratchet up into a big deal.
I agree that regulation at the software level seems unenforceable. It may even backfire by pushing model distribution onto a less regulated black market, or ceding the market to other countries without the regulation that are happy to let companies incorporate in their jurisdiction and distribute whatever sets of numbers they'd like.
I worry that the conversation will move to regulating hardware purchases, e.g. needing a license to purchase more than one GPU.
Personally, I don't see why GPU access shouldn't be covered by the Second Amendment. If the proposal is to regulate the hardware, then there's a clear argument that it's Second Amendment-protected: it's not possible to simultaneously argue that AI can be weaponized while also asserting it's not protected by the Second Amendment.
Ultimately, it's not the AI we should fear, but the people controlling it. And if the best defense against AI is more AI, then it's better if everyone can have an AI in their house, because it makes the power more diffuse, rather than concentrated within the control of a few economically dominant actors.
> It may even backfire by pushing model distribution onto a less regulated black market...
Attaching a note that this isn't really a "backfire"; that would be the point. There is going to be a lot of value in this space, and if only Meta/Google/friends can comply with the regulation legally, then they might be able to make a lot of money. The black market will be a bit of a release valve, but what they don't want is a nimble up-and-coming company that eats their lunch.
> Under the EU’s draft AI Act, open source developers would have to adhere to guidelines for risk management, data governance, technical documentation and transparency, as well as standards of accuracy and cybersecurity.
It takes hundreds of millions of dollars of compute on high-end, low-latency GPU hardware to build these models. These GPUs are manufactured in only one or two places on earth.
FTFY