Congress is racing to regulate AI. Silicon Valley is eager to teach them how (washingtonpost.com)
35 points by robg on June 17, 2023 | hide | past | favorite | 44 comments


"Congress will never understand AI. Powerful tech companies are eager to exploit that"

FTFY


Sinister hypothetical scenario:

Congress: Tell us about AI, and whether and how we should regulate it.

Big Tech, public hearing: AI will eat the world if not regulated. Crime will be abundant. Jobs will be lost.

Big Tech, private dinner: Our unilateral control over AI will boost our stock prices.

Congress: Only companies with regulatory checks and balances can run AI.

Also Congress: buys MSFT, GOOG, META.


Whatever laws we come up with, China won't comply. So this is like kicking ourselves in the kneecap.


Oh, no no. We'll be keeping Microsoft, Google, and Meta in power. They'll be just fine. They won't have to worry about small, nimble startups coming for their lunch.

AI makes a lot of products easy to deliver, and startups can tool it in ways to go after entirely new product areas and domains. Big-tech disassembly, à la Craigslist. Quickly. We can't have that until big tech gets a good chokehold via a regulatory moat.


The laws are to keep the poors and competitors away from the resources necessary to be real competition.


Whatever tech laws the EU comes up with, the US does not comply with. Especially when it comes to privacy. Now all of a sudden the shoe is on the other foot.

Also, you don't even know what laws you want yet, and are already pointing fingers.


There should be some law about using a spellchecker.


Eh, China will be fine with our laws for AI for internal usage; they don't want internal destabilization either. If your AI happens to become bothersome to Xi or spits out too many Pooh-flavored memes, then expect China to frown on it.

Now, for AIs used against other countries, China will be all about that, as long as they know their place.


I haven't seen the US complying with Chinese law.


exactly


If AI is too dangerous to allow normal people to have it, it is definitely too dangerous for governments and large corporations.


I mean, that's what regulation would prevent, right?


The law would let OpenAI become the Lockheed Martin of AI, while the rest of us will go to prison if we touch it.



As real as regulatory capture is, alternatively, do we actually want Congress to make laws based on their poor understanding of what AI is and where it's going?

If we get AI laws, they will either be misinformed or designed to entrench the current winners. I actually don't think we need AI laws. If some real problem does emerge in the future which warrants a new law, then we can make an informed law to deal with it then.


Some US Senators were already giving Meta a hard time about releasing LLaMA to researchers, saying that centralization is "safer" to control and regulate. I couldn't help but think of the "We have no moat" leak, and to think that those senators were well-informed about the business side of the emerging AI duopoly.


I don't think AI is different enough from the rest of software, or even creative works, to warrant dedicated AI experts guiding Congress.

It's best if Congress treats it as a company paying humans to do the same work, where the humans are assumed to be copying the training artifacts to the output.


> I don't think AI is different enough from the rest of software, or even creative works, to warrant dedicated AI experts guiding Congress.

Agree 100%. We don't need AI laws, because these issues cut across existing laws such as copyright, for example.


Hot take: I'd be open to laws around opt-in vs. opt-out for what is considered available as training data.


They learned from the web and the way later movers like Google upstaged early big web companies like Yahoo and AltaVista. When you have the resources to do so, go straight for regulatory capture. Outlaw competition before it can challenge you to lock in first mover advantage.

AI is the first real fundamental advance since the Internet went public. Looks like they’re trying to head it off and lock it down right away this time.


> AI is the first real fundamental advance since the Internet went public.

CRISPR-cas9 is a pretty strong competitor.


Not even close. CRISPR is a niche science nerd topic. Whereas AI is a household conversation topic these days.


The Kardashians are a household conversation topic, but that doesn't make them profound.

CRISPR is a fundamental revelation about the machinery of cells, and gives us an analog of Unix sed for DNA.
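The sed analogy can be made concrete with a toy sketch: treat the genome as a string, a guide sequence as the search pattern, and the edit as a substitution. This is purely illustrative (real CRISPR-Cas9 cuts DNA and relies on the cell's repair machinery); the function name and sequences below are made up for the example.

```python
def crispr_edit(genome: str, guide: str, replacement: str) -> str:
    """Replace the first site matching `guide` with `replacement`,
    roughly like `sed 's/guide/replacement/'` run on a DNA string."""
    site = genome.find(guide)
    if site == -1:
        return genome  # no matching target site: no edit happens
    return genome[:site] + replacement + genome[site + len(guide):]

genome = "ATGGTACCTTAGCGGA"
edited = crispr_edit(genome, "CCTTAG", "CCATAG")
print(edited)  # ATGGTACCATAGCGGA
```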

To your point, it will be interesting to see whether transformer architecture implementations, or the theory behind them, become widely enough recognized as a fundamental advance in science to merit the analog (Fields Medal, no way; Turing Award, maybe) of the Nobel Prize awarded for CRISPR-Cas9.


> The Kardashians are a household conversation topic, but that doesn't make them profound.

I would argue that AI has grabbed orders of magnitude more headlines in the past 6 months.

And AI is pretty damn profound, so I’m struggling to grasp the point this comment is making.


hey, it's honestly nice thinking this out with you.

the point is that headlines are one thing, but they don't evaluate the impact of a technology very well, though they do measure hype.

i question: is AI getting so much enthusiasm because the ideas are more widely accessible and have been prominent in sci-fi for 50 years (at least), whereas hardcore genetics tools, and the recent discovery of how to perform surgery on DNA as code, are appreciated by a smaller audience?

I mean, look at the sophistication required to appreciate

https://en.wikipedia.org/wiki/CRISPR_gene_editing

and pretty clearly that's a hardcore hacker effort realized in biology, and it won a Nobel Prize for its two discoverers, as assessed not by headline count but by the world of scientists, and that in under 10 years, which is basically amazing.

disclaimer: i come from a foot in both worlds background, so I am mega enthused (like you) about the transformers results, as well as 'computing' in genomics.

i wonder if the two worlds will soon collaborate, by the way.


I think a lot of people genuinely believe that AI can/should be regulated.

People at the companies may or may not want to do this for selfish reasons, but regardless, have been basically forced to "ask" for regulation, or face that regulation being applied against them as an existential threat to the business.

Meta seems to have possibly gotten around this through having their model "leaked", but I am not sure what else can be expected at this point. I am hopeful that community efforts will be able to make progress quickly enough so as to make regulatory capture seem more difficult.


We're stuck in a tough spot: the power structure is, for the most part, clueless when it comes to tech. The other edge of that sword, though, is that the ones capable of informing them have a financial motivation to do so.


US companies generally outsource "illegal" research to other countries. There is a reason "biomedical research" occurs in Ukraine and not the US. There is a reason a certain lab in China was funded by the US to do virus research. How would this be different? It makes no sense to regulate in just one country.

Why would the US want to push this work out into other countries? I'm a little unclear about what is going on here. What are the underlying motives? Is it about votes or profits?



I don’t see anything in the article talking about what kind of regulation is being proposed.


The moat-preserving kind.


The moat narrative is a red herring.


We can make up all kinds of scenarios and post them here, but that's not gonna change that this shit regulation's gonna come sooner or later.


The worst outcome is AI writing those 800-page laws that no congressperson will read.


We've been seeing something of a Cambrian explosion in AI that is driven by hardware. Legislation of "AI" will not be enough. There is a chance that the low-key war against general-purpose computing starts to ratchet up into a big deal.


Aka: the big tech companies are anxious to ensure regulatory capture, to prevent competition and screw over the citizenry.


How would a proposed regulation even work? Didn't Google publish an article about how they have no moat against open source models?


Well that’s easy. Kill open source models by requiring all sorts of expensive compliance from the model provider. Ta-da! Google has a moat.


I'm sure that would be as effective as laws against exporting open-source cryptography software.


I agree that regulation at the software level seems unenforceable. It may even backfire by pushing model distribution onto a less regulated black market, or by ceding the market to countries without such regulation that are happy to let companies incorporate in their jurisdictions and distribute whatever sets of numbers they'd like.

I worry that the conversation will move to regulating hardware purchases, e.g. needing a license to purchase more than one GPU.

Personally I don't see why GPU access shouldn't be covered by the second amendment. If the proposal is to regulate the hardware then there's a clear argument that it's 2nd amendment protected: it's not possible to simultaneously argue that AI can be weaponized while also asserting it's not protected by the 2nd amendment.

Ultimately, it's not the AI we should fear, but the people controlling it. And if the best defense against AI is more AI, then it's better if everyone can have an AI in their house, because it makes the power more diffuse, rather than concentrated within the control of a few economically dominant actors.


> It may even backfire by pushing model distribution onto a less regulated black market...

Attaching a note that this isn't really a "backfire"; that would be the point. There is going to be a lot of value in this space, and if only Meta/Google/friends can comply with the regulation legally, then they might be able to make a lot of money. The black market will be a bit of a release valve, but what they don't want is a nimble up-and-coming company that eats their lunch.


It's already happening:

> Under the EU’s draft AI Act, open source developers would have to adhere to guidelines for risk management, data governance, technical documentation and transparency, as well as standards of accuracy and cybersecurity.

https://techcrunch.com/2022/09/06/the-eus-ai-act-could-have-...


Delaying public usage by decades?


It takes hundreds of millions of dollars of compute on high-end, low-latency GPU hardware to build these models. These GPUs are manufactured in only one or two places on Earth.



