
I replied to LeCun's claims about their latest protein structure predictor and he immediately got defensive. The problem is that I'm an expert in that realm and he is not. My statements were factual (pointing out real limitations in their system, along with the lack of improvement over AlphaFold), and he responded by regurgitating the same misleading claims everybody in ML who doesn't understand biology makes. I've seen this pattern repeatedly.

It's too bad because you really do want leaders who listen to criticism carefully and don't immediately get defensive.



Same thing with making wildly optimistic claims about "obsoleting human radiologists in five years", made more than five years ago by another AI bigwig, Geoffrey Hinton. They are doubtless brilliant researchers in their field, but they seem to view AI as a sort of cheat code to skip the stage where you actually have to understand the first thing about the problem domain it is applied to, before getting to the "predictions about where the field is going".

Very similar to crypto evangelists boldly proclaiming the world of finance as obsolete. Rumours of you understanding how the financial system works were greatly exaggerated, my dudes.


The tribal thesis in the AI world seems to be that AI workers don't need subject matter expertise, as the AI will figure it out during training. In fact, subject matter expertise can be a negative because it's a distraction from making the AI good enough to figure it out on its own.

This assumption has proven to be very fragile, but I don't think the AI bigwigs have accepted that yet. Still flush from the success of things like AlphaZero, where this thesis was more true.


An old story:

“What are you doing?”, asked Minsky.

“I am training a randomly wired neural net to play Tic-Tac-Toe” Sussman replied.

“Why is the net wired randomly?”, asked Minsky.

“I do not want it to have any preconceptions of how to play”, Sussman said.

Minsky then shut his eyes.

“Why do you close your eyes?”, Sussman asked his teacher.

“So that the room will be empty.”

At that moment, Sussman was enlightened.


Maybe I'm naive - but I don't get it.

Closing your eyes doesn't make the room empty. And in the same way not programming preconceptions into the neural net doesn't make the preconceptions go away?

I realize explaining a joke or something like this takes away some of the charm (sorry), but would love to get the point :)


Randomly wiring a neural net doesn't remove preconceptions, it just means you don't know what they are. Similarly, closing your eyes doesn't make a room empty, it just means you can't see what's there. Minsky is pointing out that Sussman's underlying assumption, that random wiring removes preconceptions, is logically flawed.


Gotcha - I read it as Sussman didn't want to program his preconceptions into the network (for which randomness seemed suitable, which is why I was confused). Your explanation makes more sense.


There's an old series of - stories? jokes? called 'unix koans' [1] which always end with a master answering a question in a very unclear but profound-sounding way, then the line 'Upon hearing this, [someone] was enlightened.'

I never found them laugh-out-loud funny myself.

This is probably a reference to those.

[1] http://www.catb.org/~esr/writings/unix-koans/


A Koan (公案) is a concept from Zen Buddhism. Zen made its way into pop culture (or at least the popular counter-culture movement) in the US in the 1960s via several writers of the Beat Generation. Project MAC at MIT was founded in the early 60s (Gerald Sussman started there in '64, I think), so a number of faux koans were in circulation in the AI crowd by about 1970.


Unfortunately because it's delivered as a koan we'll never know whether he's talking about the fact that the random weights determine the nearest local minimum or the hyperparameters.


Also unfortunately, a "real koan" often relies on contrasting expectations and conditioning with direct experience, where direct experience is shown to produce an "impossible" result; the teaching is that the conditioned, subjective mind does not see fundamental aspects of reality, though it thinks it has the answers.

These technical mimics of that structure echo a time when a lot of people, relatively, were experimenting with disregarding personal subjectivity in favor of direct experience and deeper Truth. In the modern tech versions, that is rarely if ever part of the story?


I think you’ve exactly described the point.


I interpret that as Minsky being... an unpleasant person, to say the least.


Every time I read this I appreciate its sublimity.


The problem with preconceptions about your parameters is that you might be missing some crazy cool path to your goal, which you might find by randomly exploring your sample space. I remember seeing this same principle in MCMC samplers using uniform priors. Why is this so crazy?
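
To make that concrete, here is a minimal sketch (my own illustration, not the commenter's code) of a Metropolis sampler with a uniform prior: every value in the allowed range is treated as equally plausible up front, and the data alone shape the posterior. The model and numbers are made up.

```python
import math, random

def log_likelihood(theta, data):
    # Hypothetical model: each observation ~ Normal(theta, 1)
    return -0.5 * sum((x - theta) ** 2 for x in data)

def log_prior(theta, lo=-10.0, hi=10.0):
    # Uniform prior: no preconception about theta within [lo, hi]
    return 0.0 if lo <= theta <= hi else -math.inf

def metropolis(data, steps=10000, step_size=0.5):
    theta = random.uniform(-10, 10)   # random starting point
    samples = []
    for _ in range(steps):
        proposal = theta + random.gauss(0, step_size)
        log_accept = (log_likelihood(proposal, data) + log_prior(proposal)
                      - log_likelihood(theta, data) - log_prior(theta))
        if random.random() < math.exp(min(0.0, log_accept)):
            theta = proposal
        samples.append(theta)
    return samples

data = [2.1, 1.9, 2.3, 2.0, 1.8]
samples = metropolis(data)
print(sum(samples[2000:]) / len(samples[2000:]))   # posterior mean, roughly 2.0
```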


It's predicated on the assumption that a random discovery from a zero-comprehension state is more likely to get you to a goal than an evolution from a state that has at least some correctness.

More generally, it disingenuously disregards the fact that the definition of the problem brings with it an enormous set of preconceptions. Reductio ad absurdum, you should just start training a model on completely random data in search of some unexpected but useful outcome.

Obviously we don't do this; by setting a goal and a context we have already applied constraints, and so this really just devolves into a quantitative argument about the set of initial conditions.

(This is the entire point of the Minsky / Sussman koan.)


> from a zero-comprehension state is more likely to get you to a goal than an evolution from a state that has at least some correctness.

I get that starting from a point with "some correctness" makes sense if you want to use such information (e.g. a standard starting point). However, such information is a preconceived solution to the problem, which might not be that useful after all. The fact is that you indeed might not at all need such information to find an optimal solution to a given problem.

> by setting a goal and a context we have already applied constraints.

I might be missing your point here, since the goal and constraints must come from the real-world problem being solved, which is independent of the method used to solve it. Unless you're describing p-value hacking your way out, which is a broader problem.


With exploring, the starting state should only affect which local maximum you end up in. Therefore you need to make an argument that a random starting state is likely to end up at a higher local maximum than a non-random starting state.

There is always a starting state; using a random one only means you don't know what it is.
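
A toy sketch of the point (mine, with a made-up objective): simple hill-climbing from a fixed start versus a random start can land on different local maxima, and the random start hasn't removed the dependence on the starting state, it has only hidden it.

```python
import random

def f(x):
    # Made-up objective with two local maxima (near x = -0.7 and x = 2.2)
    return -(x ** 4) + 2 * x ** 3 + 3 * x ** 2

def hill_climb(x, lr=0.01, steps=5000):
    for _ in range(steps):
        grad = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6   # numerical gradient
        x += lr * grad                               # ascend
    return x

fixed = hill_climb(0.1)                        # deliberately chosen start
rand = hill_climb(random.uniform(-3.0, 3.0))   # "no preconceptions" start
print(f"fixed start : x = {fixed:.2f}, f = {f(fixed):.2f}")
print(f"random start: x = {rand:.2f}, f = {f(rand):.2f}")
```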


Exactly, but why do so many people seem to have a problem with this? Sounds like a political problem to me instead of a scientific one.


There are a lot of problems that arise from lack of domain expertise, but they can be overcome with a multidisciplinary team.

The biggest, most defeating problem for pure AI teams is that they don't understand the domain well enough to know whether their data sets are representative. Humans are great at salience assessments and can ignore tons of the examples and features they witness when drawing on their experience. This affects dataset curation. When a naive ML system trains on this data, it won't appreciate the often implicit curation decisions that were made, and will thus be miscalibrated for the real world.

A domain expert can offer a lot of benefits. They could know how to feature engineer in a way that is resilient to these saliency issues. They can immediately recognize when a system is making stupid decisions on out of sample data. And if the ML model allows for introspection, then the domain expert can assess whether the model's representations look sensible.

In scenarios where datasets actually do accurately resemble the "real world", it is possible for ML to transcend human experts. Linguistics is a pretty good example of this.


It makes sense to have a domain expert and an AI expert working together, but I'd offer two important modifications:

1) The AI expert is auxiliary here, and the domain expert is in the driver's seat. How can it be otherwise? You no more put the AI expert in charge than you'd put an electronic health record IT specialist in charge of the hospital's processes. The relationship needs to be outcome-focused, not technology-focused.

2) The end result is most likely to be a productivity tool which augments the abilities/accuracy/speed of human experts rather than replacing them. AGI being fiction without much science behind it, we aren't likely to actually be diagnosed by an AI radiologist in our lifetimes, nor will an AI scientist make an important scientific discovery. Ditch the hype and get to work on those productivity tools, because that's all you can do for the foreseeable future. That might seem like a disappointing reduction in ambition, but at least it's reality-based.


Unless of course the "domain experts" have fundamental disagreements, or have equally limited knowledge of what matters when extrapolating data beyond their own scope. E.g., in comp sci there might be multiple comparable ways to accomplish n, but which is best for reliably accomplishing an unknown or unforeseen n+1... depends.


> Humans are great at salience assessments, and can ignore tons of the examples and features they witness when using their experience

This is called the frame problem in AI.


> The tribal thesis in the AI world seems to be that AI workers don't need subject matter expertise

Not throwing any stones here, because I've been guilty of the same sort of arrogance in other contexts. But I think the same thing happened a ton during Bubble 1.0 and the software-is-eating-the-world thing. And it's hardly limited to tech: https://xkcd.com/793/

For me, at least, where this came from was ignorance and naivete. Three things cured me. One was getting deeper mastery of particular things, and experiencing a fair bit of frustration when dealing with people who didn't understand those things or respect my expertise. The second was truly recognizing there were plenty of equally smart people who'd spent just as long on other things. And the third was working in close, cross-functional team contexts with those people, where mutual listening and respect were vital to the team doing our best work.

So here's hoping that the AI bigwigs learn that one way or another.


Not only AlphaZero; didn't the whole field of computer vision (which LeCun specialises in) have its major breakthrough by letting the AI figure out the features (i.e. CNNs)?


Yes, but they needed 2 billion images of training data to get to the point where the AI usually draws the correct number of limbs...

Any radiology AI that needs millions of training sets is useless in practice.


> Any radiology AI that needs millions of training sets is useless in practice.

Why? I have no doubt that radiology AI might not be that useful (though radiologist friends of mine say AI is making an increasing impact on their field.) But this logic doesn't make sense. So what if an AI needs a million training examples or even a million training sets? Once your topology and weights are set, that net can be copied/used by others and you get a ready-to-go AI. There's an argument to be made that if training scale is what's needed to get to AGI, then maybe AGI is unrealizable, but that's not the same as saying a domain-specific AI is useless because it needs a large training set.


It helps that human scientific expertise in the topic of "recognizing objects" is limited.


I think all this work could be useful if only those people understood that the technology is not mature enough to remove people from the loop.

For the case of AI analysing x-ray photos, the obvious solution would be a system that can tag photos with information about what the AI thinks is going on there. And this information could be passed to the human.

This could save a ton of time and help reduce cases where radiologists missed obvious things.

My son once broke his arm. I brought him to the ER, they took the x-ray, but the two people who looked at it said there was nothing wrong with the arm. I asked for a copy of the image.

A week later the swelling had not subsided, so I took the image to another doctor and he pointed out an obvious fracture line.

There are many ways to deploy automation and I wonder why everybody tries to shoot for removing humans altogether when most of the time this is literally asking for problems.


I've definitely seen radiologists miss things (as a patient and a researcher), but I've also seen the behavior where, if something is labeled or indicated, the next person (or in this case the first person) might not give it a true second look and could easily default to agreeing with the AI instead of actually checking things. Overall I think this would be a net benefit, as image analysis can help with a lot of these activities; it just needs care to not inadvertently remove the humans you still need.


Right. Until we trust the AI 100% (at which point we get rid of humans), the AI's input shouldn't be given until after the human makes a call. However, once the AI gets above 80% trusted (or something like that; UX research has better numbers), humans who know the AI will look will just skip looking themselves.

The above is a common problem that UX research is interested in. I'm not sure how much it is solved, but it goes well beyond medical fields.


Sadly, I predict this will never fully appear in practice. Roughly speaking, information theory supports that: pure signal does not often occur in real systems. Some systems will favor signal quality at the expense of time, with fewer false positives and negatives, but the majority of cases in the real world will favor less-effort, less-time decisions that are expedient, or worse, favor obfuscation of the process to cover the sins of the humans benefiting/profiting directly from the show. Also, systems that just are not actually working well will be sold for quick profit using pressure marketing and forced contracts, at least I think so...


>For the case of AI analysing x-ray photos, the obvious solution would be a system that can tag photos with information about what AI thinks is going on there. And this information could be passed to the human.

The human will still have to look at the x-ray to see if the AI missed something. 95% accuracy is not good enough; those 5% of cases are what most of their training is for, and missing one can mean a lost human life. Maybe it can be used to speed up obvious diagnoses, but it cannot be used to filter and rule anything out. The amount of time a radiologist will spend looking at the x-ray will probably not be reduced, so I don't think there's money to be saved here.

A useful productivity tool could be to examine datasets after the radiologist found nothing, as a way to double-check their reading. This won't reduce costs but might marginally improve patient outcomes. Radiologists in first-world medical systems don't really miss a lot of stuff, though.

And of course for simple obvious non-life-affecting stuff like broken bones and dental x-rays, you don't need radiologists now either. Your son's x-ray was probably not looked at by a radiologist.


> Radiologists in first-world medical systems don't really miss a lot of stuff, though.

From "Discrepancy and Error in Radiology: Concepts, Causes and Consequences" (2011) (https://www.ums.ac.uk/umj081/081(1)003.pdf):

> In the 1970s, it was found that 71% of lung cancers detected on screening radiographs were visible in retrospect on previous films [4,6].

> The “average” observer has been found to miss 30% of visible lesions on barium enemas [4].

> A 1999 study found that 19% of lung cancers presenting as a nodular lesion on chest x-rays were missed [7].

> Another study identified major disagreement between 2 observers in interpreting x-rays of patients in an emergency department in 5-9% of cases, with an estimated incidence of errors per observer of 3-6% [8].

> A 1997 study using experienced radiologists reporting a collection of normal and abnormal x-rays found an overall 23% error rate when no clinical information was supplied, falling to 20% when clinical details were available [9].

> A recent report suggests a significant major discrepancy rate (13%) between specialist neuroradiology second opinion and primary general radiology opinion [10].

> A recent review found a “real-time” error rate among radiologists in their day-to-day practices averages 3-5%

> In patients subsequently diagnosed with lung or breast cancer with previous “normal” relevant radiologic studies, retrospective review of the chest radiographs (in the case of lung cancer) or mammogram (in breast cancer cases) identified the lung cancer in as many as 90% and the breast cancer in as many as 75% of cases [11].

> A Mayo Clinic study of autopsies published in 2000, which compared clinical diagnoses with post-mortem diagnoses, found that in 26% of cases, a major diagnosis was missed clinically [11].


>95% accuracy is not good enough, those 5% of cases are what most of their training is for, missing it can mean a lost human life.

Isn't this very context dependent? E.g., a delay in lung cancer diagnosis may be a very big deal, but much less so for something like prostate cancer.


Certainly. In fact, prostate cancer is often best left undiagnosed.

As the CDC says on its page for prostate cancer screening[0]:

>Screening finds prostate cancer in some men who would never have had symptoms from their cancer in their lifetime. Treatment of men who would not have had symptoms or died from prostate cancer can cause them to have complications from treatment, but not benefit from treatment. This is called overdiagnosis.

>Prostate cancer is diagnosed with a prostate biopsy. A biopsy is when a small piece of tissue is removed from the prostate and looked at under a microscope to see if there are cancer cells. Older men are more likely to have a complication after a prostate biopsy.

[0] https://www.cdc.gov/cancer/prostate/basic_info/benefits-harm...


> For the case of AI analysing x-ray photos, the obvious solution would be a system that can tag photos with information about what AI thinks is going on there. And this information could be passed to the human.

There was a paper a while ago about an effort at something like this. They got the AI going and noticed that the first thing it did was classify all the x-rays by race of the patient. Then they freaked out, gave up on their original project, and wrote their paper about how AI is inherently evil.


Remove the human and you can charge close to what it costs to employ the human. Aid the human and you’re looking at more like tens to hundreds of dollars per month.


Doctors have so much money they can spend to save human lives; their time is more expensive than $100/month.


The funny thing is, of all the areas where ML could help, radiological image classification is definitely the one where ML could shine (and I think it did). Humans doing radiological image classification are basically a network service now (i.e. they can do their job on the other side of the world, and their efforts are extremely carefully evaluated using later data such as disease progression).


Image classification could be employed (together with very, very good UI) in good productivity tools for radiologists. However, radiologists don't classify images for a living, they make diagnoses, so it's not going to get close to replacing them. I agree that radiology is the most "friendly" discipline for injecting ML-based productivity tools into the "pipeline", since radiologists essentially don't even need to be at the hospital, other than for interventional radiology.


Diagnosis is classification. They almost always work as a service rather than dealing with patients on a regular basis.


>Diagnosis is classification.

A radiologist's diagnosis is not image classification, it's reality classification maybe. (That's quite poetic).

Watching an educational Youtube video about endangered tiger habitats is not the same thing as segmenting possible embedded pictures of kitties and poachers or whatever, and classifying them as such. There's, like, a lot of additional context.


To flesh this out a little more:

A (good) radiologist is interpreting the images in light of the patient's history, symptoms, and other tests, with the goal of forming a diagnosis and treatment plan. This is rather different from taking an MxN array of pixels and trying to decide if it contains a tumor.

For example, about 10% of people have small gallstones. If an ultrasound incidentally detects some in a healthy, asymptomatic person, nothing happens. The exact same images, coming from a patient with a history of upper-right abdominal pain and jaundice, probably lead to a referral for surgery instead.


What you described is outside the scope of the problem being solved in radiological classification. That problem, like protein structure prediction, is an intentionally simplified process used to make it possible to fairly compare humans vs ML.

What you're describing is the general problem of informed diagnosis, which is also classification, but typically takes into account a great deal of qualitative information. Few if any ML people are working in this area because there is no golden data and it's nearly impossible to evaluate in a quantitative way.


Agreed--but that's the point: the demand is not actually for radiological classification, it's for accurate, actionable diagnoses. Heck, radiologists call what they do "interpretation."

I can certainly imagine that radiologists would love (say) a tool that automatically flags low-quality or mis-oriented images, but that's not at all where the hype is at.


Those things already exist. It was very enlightening- my uncle was a radiologist and I spent a few hours watching him do his job. The software they use is extremely sophisticated with lots of custom bells and whistles (and the monitors have insane contrast ratios). Most people don't see it but non-ML medical imaging is extremely mature and is developed in close contact with the users.


There is a personal bias I have observed when I have listened to people talk over the last ~5 years, for example:

- Startup founders without any domain experience in extremely risky ventures

- Crypto bros

- Donald Trump

I consider myself - and others view me as - a hyperrational person (possibly often to a fault), and I must admit that when I hear an outlandish claim like the ones spewed by the above, I am sometimes left in a strange emotional state... a stupor?

Like, I don't quite believe the claim because I'm defensively rational, but I feel a certain dizziness and confusion (thinking to myself, "could this actually be true?") until I come back to my senses. The more outlandish and impassioned the speech, the stronger the effect.

It's made me realise we're all built similarly, from the most rational to the most gullible.


> Like, I don't quite believe the claim because I'm defensively rational, but I feel a certain dizziness and confusion (thinking to myself, "could this actually be true?") until I come back to my senses. The more outlandish and impassioned the speech, the stronger the effect.

I always like to ask myself: "what are they trying to get me to do?". Whether it's a habit, voting pattern, product, etc. Then I ask myself: "is it useful? who does it benefit?"

Whether it's true or not, idrc. What's more important (at least to me) is what it does and who it benefits.

(also in this case to avoid any ethical conundrums, something is useful if it makes $, beneficial to X party if it makes them $)


This. In the language popularized by Kahneman et al., I tell myself: wow, he really got my system 1. And then system 2 needs to work on undoing the belief created, get back to sober doubt and weigh the probabilities.

But this is an interpretation after the fact, it feels exactly like you describe it. Stupor.

Edit: typo


But the odd time when system 2 is also satisfied, you genuinely learn something important and new. How lovely that feeling is.


Sounds like cognitive dissonance, I have had similarly humbling experiences.


I somewhat doubt most people actually have the "could this actually be true?" phase. It seems more like... just your rational mind reminding you that whatever you set your priors to, if you set them to exactly zero, Bayes' theorem breaks down?
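
For what it's worth, here is the arithmetic behind that (a trivial sketch with made-up numbers): with a prior of exactly zero, no evidence can ever raise the posterior above zero.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    # Bayes' theorem: P(H|E) = P(E|H) P(H) / P(E), with P(E) expanded
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

print(posterior(0.01, 0.9, 0.05))  # small but nonzero prior -> ~0.15
print(posterior(0.0,  0.9, 0.05))  # prior of exactly zero   -> 0.0, forever
```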


I don't think anyone believes themself to be irrational, even crazy people.


I do - I definitely see limits to my own rationality that have only become obvious after extraordinary reflection and additional data. I would presume that most people who are truly self-aware recognize that we are all fundamentally irrational.


> - Crypto bros

I'm in the field. Haven't experienced people who I could have described as bros. Can you point to one example?


I think HN is skewing heavily against AI and blockchain claims. I pick those to make the point below.

For what it's worth, I agree that blockchain itself is a first-generation technology and sucks relative to other things, like giant vacuum tube computers did. However, the concepts it enables (smart contracts) have as much promise as the idea of software programs running on personal computers back when most people wondered why you need them, since they do very little but play pong.

When I wrote the following article for CoinDesk in 2020, I didn't want to say "blockchain voting", I wanted to say "voting from your phone". Because there are far better decentralized byzantine-fault-tolerant systems, than blockchains. But that's what they ran with:

https://www.coindesk.com/in-defense-of-blockchain-voting

In it, I say:

For every technology we use today, there was a time it was laughably inadequate as a replacement for what came before.

And that's really the crux of the issue. It happens slowly, and then all at once. Yes we need to listen to guys like Moxie who are skeptical, but we need to also then go and have a discussion from different perspectives, not just one specific perspective. It has even become fashionable in many liberal circles to be against the type of "tech bro" typified by HN, including VCs and Web 2.0 tech bros. So before you downvote, realize that most of you would be on the receiving end of it in other echo chambers, due to this phenomenon of thinking there's only one best narrative.

People like Moxie are much more interesting and interested, because they say they'd love to be proven wrong. And I am also open to substantive discussion:

https://community.intercoin.app/t/web3-moxie-signal-telegram...

I imagine it's the same with AI claims about traditional fields. Where have we heard that before? "Yes it's cute and impressive but these guys don't really understand what the experts know about chess."


> For every technology we use today, there was a time it was laughably inadequate as a replacement for what came before.

That is just survivorship bias. How many times did people try out perpetual motion machines? The rest of the article tries to argue the converse: that if something is inadequate today, it will be the replacement in the future.

When do you consider something to be not worth spending time on?


I think perpetual motion is the wrong way to go.

Mark Twain lost all of his money on a wide variety of speculative investments, most of which I would call fairly reasonable.

Getting in on the ground floor does you no good if it's the wrong kind of ground floor, or if it's the right kind of ground floor but the one next door ends up going to the moon while yours falters, or even if it's the right exact ground floor but you go bankrupt investing too early and the person who buys it from you rides it to the moon.


I think it's more a comment on how we should be open to people contributing to these efforts (while still being mindful of wild claims). While the concept of perpetual motion machines is flawed, the effort put into reducing kinetic energy loss is very useful. Flywheels are being actively studied as a method to store excess energy. In the long-term view I don't think you can consider anything not worth spending time on; the result just might not be applicable to the original goal.


> How many times did people try out perpetual motion machines?

Or even turning lead into gold! A ton of famous scientist (i.e. Newton) were very busy with that idea but we end up only knowing them for other side projects/discoveries.


> A ton of famous scientist (i.e. Newton)

The abbreviation "i.e." stands for "id est" and means "that is". So what you have written means "A ton of famous scientist (that is, Newton)"...

Now while I'm sure Newton had gravitas (hah!), I doubt it amounted to a ton.

You want to use the abbreviation for "exempli gratia" which is "e.g." and means "for example".

It's a common mistake but easy to remember with eg-xample. :-)


My skepticism about e.g. AI and blockchain is not definitive or final, and the goal is not to end the conversation. To the contrary, for me this is a (possibly very deficient) style of reasoning about the world. You are excited about technology X, can I express my skepticism in a sufficiently annoying way to prod you into helping me understand some insight about technology X which disproves my hypothesis that technology X is a technology in search of a problem? For example, I don't see an important problem which is solved by blockchain in a way that's superior to other solutions, when taking into account their respective advantages and drawbacks. Just don't see it.

For example, I want to have a centralized authority which can override fraud or a mistake in a financial transaction. I want laws to apply, I want them to be written by elected humans.

I'm not even sold on the value of any kind of electronic voting in general elections, since trust in the process is so vital here that in my mind the horrible inefficiencies of pen and paper and a bunch of humans manually tallying up votes in a thousand school gyms until 11 pm are actually quite okay. I'll pay for that with my taxes, no problem. Now you add blockchain into the mix, and I don't know what problem it solves that does not have superior alternative solutions.

And so on. But I'm gonna stay open minded. Technology X might one day find the perfect problem to solve, or I might realize I was stupidly wrong about technology X for some time.


For now, I would simply ask that you go through the many applications here and tell me if you see smart contracts being very useful for collectively managing high-value decisions and capital after reading: https://intercoin.org/applications


Can you instead point out the one (strongest, best) application for which the case of using blockchain solves a big problem in a way that's better than existing alternatives, given all their respective advantages and drawbacks?

I'm lazy, you see. Conveniently forgot to mention that.


Sure. How about the one where:

1. Communities around the world issue their own currencies independent of the dollar (believe it or not, this is as important for financial stability as not just relying on Facebook’s server farms in California)

2. They give their members a UBI in the local currency that they can only spend locally

3. The amount of daily UBI to give out is determined by the vendors in the community voting on it (monetary policy)

4. Have each vendor tagged with “food”, “clothing” etc. and apply taxes to make negative externalities more costly and withdraw money from the economy (fiscal policy)

5. Calculate statistics on how money flows locally in the smart economy and have economists make recommendations about the fiscal and monetary policies, while the population continuously gets to adjust them up and down based on these recommendations

6. Hooking up all such communities to a decentralized exchange called Intercoin, where the central currency is only held by real KYCed communities and isn’t easily susceptible to pumps and dumps by speculators and banks like Goldman Sachs, and also encourages recurring and sustainable value exchanges between communities

7. Allowing tourists to buy the local currency, and allowing individuals around the world to donate to the community and see stats on how the money is being used

More info here for instance:

https://intercoin.org/communities.pdf

If this is too vague here is a concrete example:

https://community.intercoin.app/t/fund-for-refugees/2688


What problem does this one solve, and how does it solve it? To me, this just looks like how a normal currency works, "but on a computer". Is the (claimed) advantage that you can narrow the scope of a currency? I just don't see the benefits in doing so.


Well, that sounds exactly like what people said about all the other technologies that were "on a computer". Watch this exchange, for instance, between Bill Gates and David Letterman... about "this internet thing"

https://www.youtube.com/watch?v=tgODUgHeT5Y

Or the Today show wondering what the point is:

https://www.youtube.com/watch?v=UlJku_CSyNg

Technology empowers individuals and smaller communities. That's what it does throughout history. Personal computers. Personal printers. Now you can send email instead of relying on a centralized post office. VOIP disrupted the Ma Bell monopolies in our lifetimes; instead of $3-a-minute phone calls you can now do full videoconferencing for free. The Web instead of gatekeepers at radio, TV, magazines, newspapers, etc.

In all of those cases, you could question why "on a computer" matters. Who needs email when there are phone calls? Who needs the Web anyway when there is email? Who needs online dating sites when there are matchmakers?

Nathan Myhrvold at Microsoft told people at Excite that "search is not a business".

Economist Paul Krugman wrote that by 2005, it would become clear that the Internet's effect on the economy is no greater than the fax machine's.

Well, these smart contracts are programmable, and you program against one widespread platform, like JVM or in this case EVM. That's a huge benefit. You can have programmable money, programmable elections, without having to deal with thousands of APIs, or as you are saying "normal" physical currency. Governments in fact also want to phase out cash, and even bank credit, and create centrally controlled "CBDCs" so your "normal" money is also under attack by your governments. China and USA and Canada have already done it and they'd love to be able to freeze people's accounts, restrict them from getting on trains, etc. It may be preferable to incarcerating them later on for years, as we do in the USA.


I read the list of applications. All of them seem pointless, or at least inferior to what we already have. Obviously the people writing these lists are unclear on the basics and probably haven't even read the NIST blockchain technology overview.

https://www.nist.gov/publications/blockchain-technology-over...


Why does it seem pointless?

What they all have in common is cutting out an inefficient rentseeking middleman that people have been forced to trust. Yes that includes FTX, Binance, Coinbase and governments.

Here is an example… how would you do this internationally without crypto?

https://community.intercoin.app/t/fund-for-refugees


It's a stupid idea. There are plenty of good charities already helping refugees. You can just donate cash to them. Using cryptocurrency just complicates things without adding any value or solving any real problems.


Sorry, but no. You don’t seem to be very knowledgeable about what these charities actually get done and their efficiency.

The real problems are making global payments directly to the people on the ground without waste and having confidence in how they are spent.

This works with the existing infrastructure in every city — similarly to point-of-sale machines that VISA and Mastercard did a lot of work to set up over decades. Back then you would be asking why the world needs credit cards and payment systems when there were perfectly good cash based systems and charities on the ground.

Anyway, the vendors sell food, the person shows up and buys the food. We know how the money was spent. The people help people.

To try to show you by analogy... it is as if people said that there should be a decentralized and uncensorable network for uploading videos taken by people's own dashcams, phones, etc. of rockets hitting buildings, detention camps, etc. But you'd keep saying that the Associated Press and the current centralized media are perfectly adequate and report everything we need to know, and that if people wanted they could upload their videos to Telegram or some other ad-hoc solution that isn't designed like the news agencies. Why decentralize anything? Because the people DON'T get a good system otherwise.


Actually I'm more knowledgeable about this stuff than you are, and it's still a stupid idea. It fails to account for all the tax and legal compliance issues that vendors have to deal with.

And real vendors don't want cryptocurrency magic beans anyway. They want useful currency like dollars or euros that they can use to pay their own suppliers.


I doubt that. But hey, if you're so knowledgeable, then you'd realize that the systems do in fact support taxation and make auditing for compliance with any goals far easier.

Technology empowers individuals and smaller communities. That's what it does throughout history. Personal computers. Personal printers. VOIP instead of $3 a minute phone calls. The Web instead of gatekeepers at radio, TV, magazines, newspapers, etc.

In all of those cases, you would probably say "the real vendors don't want the Internet anyway". Who needs email when there are phone calls? Who needs the Web anyway when there is email? Who needs online dating sites when there are matchmakers?

Nathan Myhrvold at Microsoft told people at Excite that "search is not a business".

Economist Paul Krugman wrote that by 2005, it would become clear that the Internet's effect on the economy is no greater than the fax machine's.

You'd be in good company ... a lot of smug people have always said this newfangled stuff is totally useless because people are perfectly fine using the "useful" systems they've always used, not "magic beans" like this new programmable money.


I think the framing of your argument is a little bit misguided. Looking at history and possibilities doesn't really get you anywhere. Sometimes people get a tech massively wrong, sometimes they don't.

In order to have an actually good discussion, we need to look at the thing and go past the "well someone criticized the Internet too and now look at it".

So taking the idea of the blockchain. What makes the blockchain a "different thing"? The differential aspect is that it allows untrusted nodes to join in a distributed architecture. It's not the only database that exists and it's not the only distributed one either. So any claims of new features brought by blockchain should justify why they need the "untrusted distributed nodes" part. If they don't, we can assume that those new features don't really need the blockchain: either it's already been done, it's not a use case people are too interested in, or it's not viable due to other reasons (economic, technical apart from storage, political...). In the case of blockchain claims, most don't actually justify the need for the untrusted distribution. For example, smart contracts: it's just a fancy word for "computer program", only that it runs on a distributed trustless architecture. But is that really needed? My bank already runs computer programs that execute loan payments, to give an example of things people try to implement with smart contracts.

Compare that with personal computers or even AI. PCs allowed data manipulation, storage and calculations at a capacity that was not previously available. Of course the first computers wouldn't have enough power to do things that a wide array of people would find useful, but "low power" isn't a fundamental aspect of personal computers in the same way that "low bandwidth" isn't a fundamental aspect of blockchains.


Suppose I start a Web2 company, and end up building the next big social network. Or maybe I deploy Mastodon or Qbix and make a large community.

Now elections, roles, permissions and credit balances may hold significant total value.

Various jurisdictions now start to require you to hold surety bonds, get audits etc. Suppose you manage payments volume of $20 million a month for a teacher marketplace. One of your developers can just go into the database and change all the balances, salami-slicing money to themselves. Or someone can go and change all the votes.

How can the users trust elections, or that you won’t abscond with the money one day, or get hacked, like FTX and MtGox?

Web3 solves this with smart contracts. For the first time in history, we can guarantee (given enough nodes) that it is infeasible to take actions you are not authorized to take. The blockchain is readable by everyone - but more importantly, only authorized accounts can take individual actions that do a limited amount. It's truly decentralized.

The alternative is to build elaborate schemes where watchers watch the watchers — and the more value is controlled by the database (in terms of votes or balances) the more risk and liability everyone has. Why have it?

Have teachers be paid by students using web3 smart contracts and tokens. Your site becomes merely an interface which contains far more low-value things.

As for data, you can store it on IPFS with similar considerations. Read this:

https://community.intercoin.app/t/who-pays-for-storage-nfts-...

As you can see, my company and I have been giving it a LOT of thought, and not distracted by ponzi schemes. I am able to articulate exactly when you need Web3 and IPFS.


Do smart contracts resolve that problem? I don’t think they do completely.

Most people won’t have the knowledge or the time to verify the contracts. They will trust your word that they can’t be used to scam them. Smart contracts can still have failures too. And as long as the blockchain doesn’t control the real world, it won’t guarantee anything there (such as people making multiple wallets to manipulate the votes).

> The alternative is to build elaborate schemes where watchers watch the watchers — and the more value is controlled by the database (in terms of votes or balances) the more risk and liability everyone has. Why have it?

Why not the alternative of a regular bank account with public records? That also eliminates the risk that any mistake or manipulation stays there forever. It’s a tradeoff, not an absolute improvement.

In any case, it seems you have indeed thought about a real use case where the blockchain at least makes some sense. But that’s precisely my point, we need to be talking about actual use cases and not empty claims about potential without actually looking at why it’s useful.


1) End-users don't need to personally verify smart contracts or any other open source software. The key is to have each version have a hash, and on EVM that's easy — it's the address of the contract. Then, more and more auditors go through the code and sign off on the contract. In fact, this should be done for lots of open source ecosystems, e.g. Composer or npm, because then we might not have stuff like this: https://blog.sonatype.com/npm-project-used-by-millions-hijac...

or this in log4j: https://theconversation.com/amp/what-is-log4j-a-cybersecurit...

In fact it happens more and more:

http://cyware.com/news/analyzing-the-deadly-rise-in-npm-pack...

2) On EVM you can just audit a smart contract FACTORY, and then people can trust all the instances made from it. This is immensely powerful.

And then there is also regular use by people who put huge amounts of value into a protocol and it is never hacked. It is why for example people trust UniSwap liquidity pools not to rugpull them, and the same with many other protocols.

Now, to be fair, you don’t need a blockchain for that. You can use merkle trees and code signing. But the current Web sucks at it — you just have to trust the server not to suddenly send you bad code, or your “user agent” to execute JS that sends your password in the clear somewhere. And App Stores are black boxes where you just have to trust Signal’s claims or Telegram’s claims. I write about that here in much more detail:

https://community.intercoin.app/t/web3-moxie-signal-telegram...

and I give an extensive talk here as to why we need to reform the current Web:

https://www.youtube.com/watch?v=yKPKuH6YCTc

And I wrote an article here on why the current technology leads to extreme centralization and what effect that has on our society:

https://cointelegraph.com/news/how-a-web-that-lost-its-way-c...

There are real societal consequences on the largest scales, from these platform choices of tech stack.

3) Sure, in the real world, things may not match the data on the blockchain. That painting could be stolen despite what the NFT says. The house up for sale may be a fake. And someone could default on a loan that you thought would bring you revenues.

Blockchain doesn't guarantee any of those things. However, on the Aave marketplace, I am guaranteed a return, or my money back in the token I had lent. So my risk is now reduced to black swan events on the token market, so if I'm using mainstream tokens the chances of getting wiped out or losing my collateral are almost totally eliminated.

People who don’t want to use it don’t have to. But if you’re building a very big community with a lot of value at stake, would you rather put votes, roles, and balances into a central database and PHP app code, or a blockchain with smart contracts?

From personal experience I can tell you it should be the latter.

4) In your example it would be the bank that would opt to use blockchain and freely allow other independent entities not under its control to run nodes and secure it. It reduces the bank's risk and liability!

And yes thank you for recognizing that there may be a real class of use cases!


I feel like there is a tendency for technology to overpromise by offering solutions to the wrong problems. To take your example, what is the fundamental issue with elections? I would say that it is that the optimal number of people for deliberative decision making is probably around Dunbar's number, certainly not millions. When you have millions of people, it is purely a media game, so decision quality falls off a cliff. So I doubt general elections can ever yield better results than they currently do, regardless of the voting system and regardless of technology.

So my general skepticism regarding blockchain is that it presents technological solutions to social problems (so it won't work). AI is different since it's a bit all over the place. In principle, it aims to solve problems that are obviously worth solving, but as long as it will fall short of its promises, the partial solutions we do get are kind of a mixed bag: if we need to keep a human in the loop or at the wheel, it's suddenly a lot less attractive. And the path to the AI being good enough to be trusted is nebulous at best. But we'll see. As for PCs, their utility has always been obvious and the roadmap clear.


The fundamental issue is that elections are too expensive so we don’t do collective decision making that often, and usually only for our state or national governments.

People tend to elect representatives for long terms instead and then complain about them, rather than delegating their vote to experts or trying other systems like Ranked Choice Voting.

Many people complain about having to travel far to a polling booth, and about disenfranchisement, whereas they could vote from their phone. The elderly and minorities in rural areas often have bad access.

If elections were cheap, people could easily engage in collective decision making of various types and choose various ways to tally votes. None of that is possible today; we are like the people before computers, or before the industrial revolution - having a limited number of options, newspapers, etc.

We would also have more confidence in the results as we’d check using the Merkle tree that our vote was counted. It would be user friendly to do so.
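
For readers unfamiliar with the mechanism: below is a minimal sketch (my own, not Intercoin's actual code) of a Merkle inclusion proof, where a voter can check that their vote is part of the published root without trusting whoever runs the tally.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                    # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    # Sibling hashes needed to recompute the root starting from one leaf
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2))   # (sibling, am I the right child?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, is_right in proof:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

votes = [b"vote:alice:A", b"vote:bob:B", b"vote:carol:A", b"vote:dave:C"]
root = merkle_root(votes)                    # only this gets published
proof = merkle_proof(votes, 1)               # Bob requests his inclusion proof
print(verify(b"vote:bob:B", proof, root))    # True: Bob's vote is in the tally
```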

And we could also implement many of the results on-chain, such as how much UBI to give out in our own community’s currency, or how much to tax transactions.

This is just the tip of the iceberg. Just see https://intercoin.org/applications


The fundamental issue with elections is that the mob is easily swayed, and that most of us are unqualified to hold opinions about most things. What is best or true is not decided by majority vote. Never in a million years would I want mob rule. I don't know where this strange doctrine of mob wisdom came from, but it is dangerous and false. The vote is open to all adult citizens only because we need a hedge against corruption in leadership, and even then, it is a vote for leadership, not a system of referenda and direct democracy. Even this "hedge" can easily enable corruption and vote the worst tyrants into power.

And the reason we localize voting, or should do so, is because of the principle of subsidiarity.


Back in 2014 there were a lot of studies on the wisdom of crowds. You'd have to explain why crowds beat experts in many fields yet are worse in others. Most of the failures (FTX, MtGox, Softbank, invasions of Iraq and Ukraine, wars in general) are the result of centralization, not of regular people (who don't want to kill others en masse). Centralizing power and decision making in a few hands leads to a lot of consequences:

https://www.npr.org/sections/parallels/2014/04/02/297839429/...


That's not hard. Most cases where the wisdom of the crowd works fine are cases where the crowd does not have a personal or emotional stake in the outcome. Such gatherings attract mostly people that have an interest in that particular area, so the crowd self-selects for competence.

What is your expected outcome when you let the crowd decide how much taxes they pay, or how much to spend on road/water/electricity grid maintenance?


My point is that general elections are intrinsically mediocre at decision making. You can make them cheaper, more efficient, make sure everyone can vote, add a delegation system, and so on, but you'd just be polishing a turd.

It's pointless to improve voting technology if people don't understand the ramifications of what they vote for, and they will never know that unless they invest hundreds of hours of careful study into it. If everyone does this, there is no way it won't be insanely expensive.

IMHO there is a very simple solution to all of these issues that could have been implemented a century ago: pick representatives at random. Send them all to the capital, lodge them and pay them to think about the issues full time, talk to experts directly, interview candidates for Prime Minister and other executive positions directly, coordinate with each other, and so on. Give people the time and the information they require in order to cast the very best votes they are capable of.


I'd say the very simple solution is to have representatives decide. Then there's a separate debate about how the representatives are assigned. At random, based on voting, based on competitive self-appointment by means of heavy weaponry, etc.

Blockchain may or may not make it easier to bypass the concept of having representatives decide, but this assumes we'd want to in the first place, and here I am in full agreement with you: representatives are a feature, not a bug, so I don't want to bypass them, so I don't need blockchain.


So what would constitute him not "getting defensive"?

He's supposed to agree with you, or not express an opinion? Anything else short of this would be "defensive" right?

This whole idea that defending your positions in arguments is somehow a bad thing is a really odd modern development that I never understood.


TBH I think even a basic "I considered your point, but X and Y factors seemed to mitigate it enough for my standards, defined by P and Q. Let me know if I'm misunderstanding anything" would do a lot. It's important to always show you've considered that you're wrong about something.


I'm not sure what OP's particular point was, but Yann seemed to argue over and over again that testing Galactica with adversarial inputs is why "we can't have nice things", which to me seems not just defensive but kind of comical.

Any AI model needs to be designed with adversarial usage in mind. And I don't even think random people trying to abuse the thing for five minutes to get it to output false or vile info counts as a sophisticated attack.

Clearly before they published that demo Facebook had again put zero thought into what bad actors can do with this technology and the appropriate response to people testing that out is certainly not blaming them.


> Any AI model needs to be designed with adversarial usage in mind

Why? There's probably plenty of usage of ML where both the initial training set, its users and its outputs are internal to one company and hence well-controlled. Why should such a model be constructed with adversarial usage in mind, if such adversarial usage can be prevented by e.g. making it a fireable offense?


> He's supposed to agree with you, or not express an opinion?

Wow, not sure what to say if that's what you think are the only options. I didn't see the original response to the parent commenter, but this quote in the article, "It’s no longer possible to have some fun by casually misusing it. Happy?" doesn't bode well.

I get that in the post-Twitter world it can be hard to differentiate between valid criticism and toxic bad-faith arguments, but let's not pretend that it's impossible to acknowledge criticism in a way that doesn't immediately try to dismiss it, even if you may not agree in the end.


No, you can disagree with someone without acting defensive. When a person is acting defensive, they're trying to protect or justify themselves. People who are insecure or guilty tend to act defensive. You can have a disagreement and defend your positions without taking things personally.


The correct response to a criticism you disagree with is "thanks".


You're right. That response is sufficient for people who provide it in good faith. There are bad faith actors who aren't happy unless you actually respond in detail and convince them otherwise. They're more than happy to raise a ruckus about how "XYZ ignored my feedback and criticism".


Thanks.


[flagged]


From the HN Guidelines: " Be kind. Don't be snarky. Have curious conversation; don't cross-examine. Please don't fulminate. Please don't sneer, including at the rest of the community. Edit out swipes. "


I feel we are at crypto/blockchain levels of hype in ML, and basically the old saying of "if you are a hammer, everything looks like a nail" applies.

For someone who has dedicated their career to ML, they'll naturally try to solve everything in that framework. I observe this in every discipline that falls prey to its own success. If there's a problem, those in the industry will naturally try to solve it with ML, often completely ignoring practical considerations.

Is the engine in your car underperforming? Let's apply ML. Has your kid bruised their knee while skating? Apply ML to their skating patterns.

The one saving grace of ML is that there are genuinely useful applications among the morass.


Without even opening the link I half expected it to be about LeCun, and I wasn't wrong.

He and Grady Booch recently had a back-and-forth on the same subject on Twitter, where to me it seemed like he couldn't answer Booch's very basic questions. It's interesting to see another person with a similar opinion.


> It's too bad because you really do want leaders who listen to criticism carefully and don't immediately get defensive.

For sure. If this is how he treats outside experts, I can't imagine what it's like to work for him. Or rather, I can imagine it, and I think it does a lot to explain the release-and-panicked-rollback pattern.


ML people are the ultimate generalists. They claim to make tools which are domain agnostic, but they can't really validate that for themselves because they have no domain knowledge about anything.


Could you share the critical feedback you gave? I am interested as someone who works with biological systems and is curious about how ML can or cannot help.


I told him that the increased speed but lower accuracy of their protein structure predictor was not useful, because the only thing that matters in PSP is absolute prediction quality, and that speeding up that step wouldn't have any impact on pharmaceutical development, which is one of his claims (closed-loop protein design).


I'm not an expert in this matter, but this seems wrong.

Sure, you want quality here, but there's always going to be a human in the loop for this kind of work.

Any workflow with a human in the loop has this speed vs accuracy tradeoff.

While I’m not saying that speed trumps accuracy here, I don’t think you can dismiss without evidence that the tradeoff exists and speed might have benefits.


It's Amdahl's Law.

Lab work and clinical trials are incredibly slow. A single experiment testing a single candidate might take weeks (in cell lines), months (rodents) or even years (humans/non-human primates). You're going to do a bunch of them and they often require expensive reagents and/or tedious work.

Consequently, shortening the wait for a predicted structure by a few hours (or days) won't really move the needle. This is especially true if it makes your experiment, already probably a long shot, less likely to succeed.
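A rough back-of-the-envelope sketch of that point, in Python (all the timing numbers below are assumptions made up purely to show the shape of the argument, not measured figures):

    # Amdahl's Law sketch with assumed, illustrative timings.
    # Structure prediction is a tiny slice of a discovery campaign,
    # so even a huge speedup there barely changes the total.
    hours_per_week = 24 * 7
    prediction_hours = 8                 # assumed: one prediction run
    wet_lab_hours = 12 * hours_per_week  # assumed: ~3 months of lab work
    speedup = 60                         # assumed: 60x faster predictor

    total_before = prediction_hours + wet_lab_hours
    total_after = prediction_hours / speedup + wet_lab_hours
    print(f"overall speedup: {total_before / total_after:.3f}x")
    # ~1.004x with these numbers -- essentially noise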


GP is saying that the slow part of pharmaceutical development (synthesis, trials, etc.) takes so many orders of magnitude longer than the software part (candidate generation) that any speed improvement there is moot. In fact, having lower-quality software-generated candidates merely leads to wasted time later.


I think it makes sense when you realize that the product (Galactica) and all the messaging around it are just PR - they're communicating to shareholders of a company in deep decline trying to say 'look at the new stuff we're doing, the potential for new business here'.

You interrupting the messaging ruins it, so you get some deniable boilerplate response. It's not personal.


But we gave the keys to the economy to some vain children who have never had to do real work to make a name for themselves. Straight from uni, as librarians' assistants to the elders, straight to running the world!

Society is still led by vague mental imagery and promises of forever human prosperity. The engineering is right but no one asks if rockets to Mars are the right output to preserve consciousness. We literally just picked it up because the elders started there and later came to control where the capital goes.

We’re shorting so many other viable experiments to empower same old retailers and rockets to nowhere.


I'm delighted you called out these problems when you came across them, and sorry that he didn't have the grace or maturity to take it on board without getting defensive.

Like many thin-skinned hype merchants with a seven-figure salary to protect, they're going to try to block criticism in case it hits them in the pocket. A simple skin-in-the-game reflex that will only hurt any chances of improvement.


Interesting. What would be one of those factual statements given your expertise in the area that the ML folks don't understand?


"Improved protein structure prediction will speed up drug discovery"


It won't? AI-powered drug screening has definitely been overhyped, but in the longer term highly accurate protein structure modelling should let us understand protein-protein interactions and provide new opportunities for intervention.


That has been a long-stated claim in the field that has repeatedly not been shown to be useful in any sort of "engineering" or "medical treatment" sense.


Hrmm, I think I was expecting a different type of disagreement when you said he was wrong on facts.

I'm sure you know your stuff, and that you have a lot of experience with protein structures that haven't helped with drug discovery or engineering, but it sounds like this is indeed a disagreement about predictions rather than about facts.

It very well could be the case that speeding up certain problems by multiple orders of magnitude really does help with drug discovery, and that isn't inconsistent with the fact that solving those problems hasn't turned out to be useful in this area so far.


If you can discover a drug faster and that drug is as useful as dirt, does it matter?

This isn't my field, but I could grab a bunch of random jars off a shelf and pour them into capsules. No matter how fast I can do this, it won't improve medical outcomes for patients.


You just described how modern high-content screening works, which has been one of the most useful techniques for finding drug leads. Since lead-finding is a bottleneck in the drug discovery process, it has been highly effective because it can measure things that are not currently computationally accessible.


I don't follow.

Wasn't your claim that the AI process hasn't been shown to generate a useful drug [1]?

[1]: https://news.ycombinator.com/item?id=33720915


is a bottleneck != is THE bottleneck


It's the first bottleneck. Generally, if you can't pass the first bottleneck, all the remaining bottlenecks don't matter.


Don’t forget that we already have highly accurate protein structure modelling. AlphaFold adds to that but it’s not like it’s something radically new. Proteins involved in diseases we care about have been extensively studied.


Except a black box that simply spits out a resulting folded protein doesn't actually improve our understanding of anything. Are we just going to fuzz the black box with different protein combos hoping to find something useful? In that case, aren't we just more likely to find some stupid error in the ML predictor?


Why this won't happen is the obvious follow-up question.

When is “improved structure prediction” useful or important?

Often a process is simplified or distilled down to a sound bite for the general population. Then we simply repeat it without understanding the details.


Improved structure prediction is mainly useful in hypothesis generation when doing hypothesis-driven science (i.e., you want to confirm that a specific part of a protein plays a functional role in a disease). It's also a nice way to think creatively about your protein of interest.

The problem is those distilled sound bites get learned by the next generation, who then try to apply them. At least I will give AlphaFold/DM credit for correcting their language: originally they claimed AF solved protein folding, but really, it's just a structure predictor, which is an entirely different area. Unfortunately, people basically taught computer scientists that the Anfinsen dogma was truth. I fell for this for many years.


https://en.wikipedia.org/wiki/Anfinsen%27s_dogma

> It states that, at least for a small globular protein in its standard physiological environment, the native structure is determined only by the protein's amino acid sequence.

Seems like "no true scotsman". If you present a counter example, they'll go "but this is only true for "small", the one you gave me isn't small.


Let's say you have a pool of smart first year grad students you want to inspire to work hard on problems for you for the next 7 years.

Do you say "we're going to give you a problem that is unsolved but likely has a general solution, and you have a chance of making progress, publishing, and moving on to a postdoc" or do you say "THis is an impossibly hard problem and you will only make a marginal improvement on the state of the art because the problem space is so complex and large"?

You say the first because it gets the students interested and working on the problem, only to learn many years later that the simplified model presented was so simplified it wasn't helpful. I fell for that and spent years working on drug discovery, structure prediction, etc., only to realize: while what Anfinsen said was true, it only applies to about 1% of protein space. It's not so much a "no true Scotsman" as "some Scotsmen wear kilts, and others have beards, but neither of those is sufficient to classify an example as Scots".


The core caveat being the yield of “improved … prediction” in the future tense, I assume?

Given that improved structures have sped up drug discovery, I can see where the mistake is made (X has improved Y, therefore X will improve Y).


It's tough because I think he has a really difficult job in many regards; Meta catches so much unjustified flak (in addition to justified flak) that being a figurehead for a project like this has to be exhausting.

Being constantly sniped at probably puts you in a default-unreceptive state, which makes it hard to take on valid feedback, like yours sounds to have been.


> I replied to LeCun's claims about their latest protein structure predictor and he immediately got defensive

Ideally scientists would be interested in the truth and engineers would be interested in making the system better.


At some level he must know (AI) winter is coming along with the recession, which is why he is so defensive, as if a barrage of words will stave off the inevitable.


Is this on Twitter? Please link if so.


I feel that if you make a claim like this, you should link to the full text of your exchange.


I had to do some searching but it looks like he deleted his replies to my comments and that caused my own subsequent comments to be deleted? https://www.linkedin.com/posts/yann-lecun_esm-metagenomic-at...



