Hacker News | scottlawson's comments

Reading this article reminded me of a couple of quotes:

"The question of whether machines can think is about as relevant as the question of whether submarines can swim" - Dijkstra

And paraphrasing another "it would be silly to say that airplanes can't fly because they don't flap their wings like birds"


The thesis that in the past it was safe to share ideas and projects because the execution was hard, and that now things have changed because of AI, is an interesting idea, but I wonder if it is really true.

It certainly seems true that for small projects and relatively narrow-scoped things, AI can replicate them easily. I'm thinking specifically about blog posts where people share their first steps and simple programs as they learn something new, like "here is how I set up a flask website" or "here is how I trained a neural network on MNIST".

But if AI is empowering people to take on more complex projects, perhaps it takes the same amount of time to replicate the execution of a more advanced project?

In other words, maybe in the past, it would take me 10 hours to do a "small" project, which today I could do in 1 hour with the assistance of AI.

And now, with the assistance of AI, I can go much farther in 10 hours and deliver a more complex project. But that means that someone else trying to replicate this execution is still going to need around 10 hours to replicate it.

Basically, I'm agreeing that AI reduces the barrier to replicating the execution of another person's project, but at the same time, that we can make more complex projects that are harder to replicate. So a basic SaaS CRUD app is trivial now, but a multi-disciplinary, domain-specific app that integrates multiple systems is still going to be hard to replicate.


The problem for me is that I'm competing with the AI results that Google trained on my work. I'm losing the majority of my traffic to it, so at some point I'll have to give up because the work no longer supports me and no longer has an audience.

Same here. Knowledge is being commodified.

> Knowledge is being commodified.

Already was well before AI, the difference now is that a few big AI providers risk becoming the ultimate rent-seekers that will increasingly capture all of the value of that commodified knowledge whether the original knowledge generators want that or not. There is no opt out, everything will be vacuumed up into the machine mind.

This will almost certainly lead to vastly increased amounts of wealth inequality (on top of the already unsustainable levels we have today) and possibly a very messy societal disintegration (this is theoretically avoidable, but I am not convinced it is practically avoidable given our current socioeconomic/political realities).

Bright future ahead!


Industrial-scale plagiarism. A form of copyright-laundering only available to big actors.

OG feudalism involved owning knights and horses and armor and grain production; techno-feudalism involves owning all ideas

It isn't just about AI. Some R&D domains started disappearing from literature and the public internet a decade before the first LLMs. The incentives to go dark emerged even when the adversary was other humans. AI is just accelerating a trend that was already there. Some areas of frontier computer science research have largely been dark for decades.

The strategy is to quietly do several years of iterated hardcore R&D. The cumulative advances are such a step change when seen by would-be fast-followers that it obscures the insights that allowed individual advances to occur. As an exaggerated case, imagine if the public history of powered flight skipped from the Wright Brothers to the Boeing 737.

In practice, this strategy has a major failure mode that people overlook. The sharp discontinuity in capability means that almost nothing that exists in the market is prepared to integrate with it. This is a large impediment to adoption even if the technology is objectively incredible and the market will inevitably get on board.

In short, it looks a lot like being too early to market. This is surmountable with clever execution but with this strategy you've traded one problem for a different one.



Interesting, any examples of companies that followed this model?

You get a time advantage from this strategy, but your talent will be poached and your competitors will be able to catch up fairly quickly.

I used to think this but it only seems to be true for a shallow tech advantage, which isn’t this scenario. A sufficiently deep stack of compounded tech is robust against even aggressive talent poaching. The knowledge is embedded in the network, not the random individual.

We see this in jet engines, silicon fab, et al.


With these very deep tech stacks, does it really matter if you publish or not? Execution is still very hard for manufacturing these items, and will be for a while.

We’re very very far from prompting to a silicon fab


I mean, even North Korea has figured out the nuclear bomb, the original greatest secret deep stack of compounded tech. It seems like anyone on this earth can figure out anything if they are hell-bent on it. Engineers seem to be more fungible than people anticipate, I guess, and no one really comes up with unprecedented, unique ideas. The whole research process incentivizes incremental work on known concepts to justify receiving funding at all, since funding is in high demand and short supply.

North Korea had the advantage of some seventy years of technological advancement since the first nuclear bomb.

> And now, with the assistance of AI, I can go much farther in 10 hours and deliver a more complex project. But that means that someone else trying to replicate this execution is still going to need around 10 hours to replicate it.

The blog post does touch on this. The key difference, I believe, is that compute scales in a way "meat-heads" don't: if the other person has 100x the capital to throw at it, they could do the same 10-hour thing in 10 minutes.

Basically, what I got from it was that innovation has never been truly scalable enough to create the "dark forest", since hiring more and more engineers saturates quickly. But if/when innovation does become scalable (or crosses some scalability threshold) via AI, that could trigger a "dark forest" scenario.


Sure, but the forest point stands: whatever you can hide from the forest slows it down and gives you some moat, even if only briefly?

There’s a deeply flawed hidden assumption here, which is that the individual in question is the only possible source for the relevant information that the AI can harvest. In the real world that’s absurdly rare; original thought is rare because we’re in the mix with billions of others.

Scientists who hold back publishing breakthroughs have not guaranteed that they will be the sole discoverer, just that someone else will inevitably be credited when they reach the same conclusions.


The untold billions don't matter -- the AI can sift through those. Social media already exists to do that, and LLMs often have the luxury of the chaff being separated from the wheat ahead of time.

Science is not inevitable, and there is no telling whether people will reach the same conclusions in a reasonable time frame.


The legacy of Voyager 1 is crazy: this spacecraft launched decades before I was born, and yet I see it regularly talked about even today. Seeing posts about Voyager 1 leaving the solar system led to me learning about the heliosphere. Hearing about the Pioneer anomaly https://en.wikipedia.org/wiki/Pioneer_anomaly led me down a rabbit hole of learning about thermal radiation and radiation pressure (granted, this is not Voyager). Then I learned about how it is powered by radioisotopes. It's kind of cool how many things I've learned from these "ancient" spacecraft.

I love articles like this that give the context and history to important but not often talked about molecules. I enjoyed this as much as the "chemicals I will never work with" series.


you replied to an LLM generated comment. if you look at the posting history you can confirm it


conduction directly into the ground


The page mentions improvements, but I can't tell from the main README what the advantage is over the stock firmware, which I found to be quite nice for my personal use.



  https://github.com/flipperdevices/flipperzero-firmware/commits/dev/
(Edit: added two spaces so the full URL shows, because of how HN parses it)


This is just the official repo for the stock firmware. Last update 3 weeks ago for 1.0.1.


To put this in perspective, it supports up to 3kg in the demonstration. That's a fraction of what I can support with one finger.


If they made it more stable, you could use it to move your nitroglycerin around, in 3kg batches


Ok, but it's cargo AND vacuum cleaner. Can your finger do that?


More like a downward pointing leaf blower, driving the dirt from high traffic areas into every corner of the house.


My apartment pays for people to come by and blow dirt around every so often, so maybe this could let delivery people pick up extra cash on the side!


And can your finger AI? Doubt it.



Now you got me curious - how many kg can you support with a single finger, and what's your exercise regime?


If there's a handle you can hook into, anyone can carry ~20kg with a single finger (index or middle finger). For me, the limiting factor would be the handle biting into my finger too hard, but I could likely go up to 40kg.


20kg is about the weight of a bag of water softener salt. No way am I carrying that from one finger for more than about 10 feet exactly because of the limiting factor you mention.


Professional climbers are known for being able to hang off of two fingers.

It usually takes years of training to be able to do this.


It’s one thing to be able to hang off a finger hooked over a tiny ledge; it’s much easier to hook a handle and lift a load vertically. Most adult humans could lift 10kg with a couple of fingers.


Except for a fair comparison with this particular drone, you need a figure for how much you can carry on top of your finger, pointing upwards, while keeping it balanced.


No I don't, because I don't need to carry things like that. I need to compare this drone with a sturdy cardboard box that I can carry things in.


I deadlift about 60kg with two fingers (both middle), and I can easily hang off two fingers (one on each hand), so about 37kg per finger. Some people can do one-finger pullups.

As for the regime: just lift heavy shit every day and climb once or twice a week.


If the issue is panics, why not create a wrapper function with recover and present the same interface that you want?
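Something like this minimal Go sketch, for example — `riskyParse` here is a hypothetical stand-in for whatever function panics, since the original isn't shown:

```go
package main

import "fmt"

// riskyParse is a hypothetical stand-in for the function in question;
// it panics on bad input instead of returning an error.
func riskyParse(s string) int {
	if s == "" {
		panic("empty input")
	}
	return len(s)
}

// safeParse presents the interface you actually want: a deferred
// recover converts any panic from riskyParse into an ordinary error.
func safeParse(s string) (n int, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("riskyParse panicked: %v", r)
		}
	}()
	return riskyParse(s), nil
}

func main() {
	n, err := safeParse("hello")
	fmt.Println(n, err) // 5 <nil>

	_, err = safeParse("")
	fmt.Println(err) // riskyParse panicked: empty input
}
```

Callers then handle a normal error value instead of needing their own defer/recover at every call site.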


Actually, a tutor can be someone who plans and guides your learning process, as you described. But there is nothing to stop you, as the student, from inverting it: designing and planning your own learning goals, and then going to a tutor for specific advice.

I do this for foreign languages. I plan out everything and my own syllabus and goals, and hire a tutor for speaking practice. I give instructions and context to the tutor so they can best help me.

