Hacker News — ballooney's comments

I don’t like this site’s obsession with reducing everything to market opportunities, but… it’s extremely well documented that land mines, white truffles, cancer, diabetes, chemical weapons, etc. can all be ‘sniffed’ by animals, and it’s a mechanism that is almost always ‘better’ (cheaper, quicker, more deployable in the field) than human-engineered solutions. Surely there’s some venture capital opportunity here for better sensors that would unarguably improve our lot more than AI, at least per dollar invested?


There has certainly been work on it, but I'm not sure what the status is. Of course, it could be very useful.

From Google, 2019,

https://research.google/blog/learning-to-smell-using-deep-le...


Sounds like the obsession with reinventing trains and trees. Surely training a rat is cheaper than a portable real-time NMR device, right?


Rats are sentient beings. If we have a choice, it’s not ethical to risk their lives to meet our own goals.


Before focusing on rats, who are too light to set off mines and live long pampered lives, I would focus on the 73 million pigs and 87 million cows in factory farms [0].

[0]: https://www.sentienceinstitute.org/us-factory-farming-estima...


What he means, and what you're interpreting a bit too literally, is that this [heatshield] is one subsystem where the risks are not as well understood or quantified as those of, say, the propulsion system, for which we have a lot more experience and flight heritage.


Yes, of course there are risky systems in there, and calling attention to one of them is fine. What I object to is framing it as a "safe/not safe" issue - as if without the tests the author proposed it were "not safe" and with them, by implication, it would become "safe". That's not like replacing old tires on your car with new tires: there are a lot of things that can go wrong, many of them are "unsafe", and it's always a complex equation which cannot (at least at the current level of technology) be solved by doing more tests or anything else that would make it "safe". The "safe" framing is the one I object to.


Bayes' rule has existed for nearly 300 years; there is no excuse for ‘only look[ing] at this data’, and that is NEVER a reasonable thing to do.
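The point about priors can be sketched numerically. The test characteristics and base rate below are made-up illustrative numbers, not from the original discussion:

```python
# Toy Bayes' rule update: a test with 99% sensitivity and 95% specificity
# applied to a condition with a 1% base rate. "Only looking at this data"
# (a single positive result) suggests near-certainty; folding in the
# prior shows the positive is still more likely false than true.
prior = 0.01              # P(condition)
sensitivity = 0.99        # P(positive | condition)
false_positive = 0.05     # P(positive | no condition)

evidence = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / evidence
print(round(posterior, 3))  # ≈ 0.167
```

Even a very accurate test leaves the posterior at roughly 17% here, which is exactly the kind of correction that ignoring the prior misses.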


You’d need a load of additional propellant to insert yourself into the same orbit as the ISS on your return, which would have an exponential effect on the amount of propellant needed in the first place to get all this lot out to the moon. It would be a different vehicle.
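The "exponential effect" is the Tsiolkovsky rocket equation: propellant mass grows exponentially with total delta-v. A minimal sketch, with purely illustrative assumptions (a 100 t dry mass, a 450 s specific impulse, and a hypothetical extra ~1 km/s to match the ISS orbit on return — none of these are mission figures):

```python
import math

def propellant_mass(dry_mass_kg, delta_v_ms, isp_s, g0=9.80665):
    """Propellant needed to give dry_mass_kg a total delta_v_ms,
    from the Tsiolkovsky rocket equation m_p = m_dry * (e^(dv/(Isp*g0)) - 1)."""
    mass_ratio = math.exp(delta_v_ms / (isp_s * g0))
    return dry_mass_kg * (mass_ratio - 1)

baseline = propellant_mass(100_000, 5_000, 450)      # assumed baseline budget
with_rdv = propellant_mass(100_000, 6_000, 450)      # +1 km/s for ISS rendezvous
print(f"{baseline:,.0f} kg vs {with_rdv:,.0f} kg "
      f"({with_rdv / baseline:.0%} of baseline)")
```

Because delta-v sits inside the exponent, adding ~20% more delta-v here costs well over 30% more propellant, and the penalty compounds through every earlier stage that has to lift that propellant.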


In the last 60 days I have written over 600,000 lines of production code

No you haven't.
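A back-of-the-envelope check of the claim, using only the figures quoted above:

```python
# Sustained rate implied by "600,000 lines of production code in 60 days".
lines = 600_000
days = 60
per_day = lines / days                 # 10,000 lines per day, no days off
per_minute_8h = per_day / (8 * 60)     # rate over an 8-hour workday
print(per_day, round(per_minute_8h, 1))  # 10000.0 20.8
```

That is roughly 21 lines of finished production code every minute, eight hours a day, for two months straight.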


Not only has he not, writing over 600,000 lines of production code is not something to be proud of. At least not without explaining their purpose and why they were needed.

This is a major software engineering lesson that Garry's LLM-addled brain has apparently forgotten: measuring progress in LoC is no longer done because it's a bad metric!


About the same as Bezos invested in the Melania documentary, watched by about six people.


The Melania documentary is an important artifact that historians will be talking about for decades, although not in the way those involved anticipated.


Leni Riefenstahl taught me everything I know about lighting and camera angles!


Wikipedia says it "had the highest opening for a non-concert documentary since the $10.7 million opening for Chimpanzee (2012)".

It did well by documentary standards, poorly compared to its budget, and the stories about empty theaters are mostly from areas with very weak Trump support. Those stories spread mainly because they make us feel good.


Everyone in silicon valley would do well to remember why the web was built (by other people, elsewhere).


What are your favourite active irc channels for technical hobbies?


For context, the Cosmos Institute is an AI lobbying business. [It flatters itself otherwise, but that's not atypical of the form.]


Hopelessly over-idealistic premise. Sama and pg have never been anything other than opportunistic muck. This will be my last ever comment on HN.


I feel this so hard, I think this may be my last time using the site as well. They don't care about advancement, they only care about money.


Like everything, it's projection. Those who loudly scream against something are almost always the ones engaging in it.

Google screamed against service revenue and advertising while building the world's largest advertising empire. Facebook screamed against misinformation and surveillance while enabling it on a global scale. Netflix screamed against the overpriced cable TV industry while turning streaming into modern overpriced cable television. Uber screamed against the entrenched taxi industry harming workers and passengers while creating an unregulated monster that harmed workers and passengers.

Altman and OpenAI are no different in this regard, loudly screaming against AI harming humanity while doing everything in their capacity to create AI tools that will knowingly harm humanity while enriching themselves.

If people trust the performance instead of the actions and their outcomes, then we can't convince them otherwise.


Oh, I'm not saying they ever believed anything more than their self-centered views, but in a world that leaned more liberal there was value in trying to frame their work in those terms. Now there's no need to pretend.


And to those who say "at least now they're honest", I say "WHY?!" Unconditionally being "good" would be better than disguising selfishness as good, but that's not really a thing. Having to maintain the pretense of doing good puts significant boundaries on what you can get away with, and increases the consequences when people uncover some shit.

Condoning "honest liars" enables a whole other level of open and unrestricted criminality.


inb4 deleted

