I think that's a fear I have about AI for programming (and I use these tools). Say we have a generation of people who use AI tools to code, and no one really thinks hard about solving problems in niche spaces. We can build commercial products quickly and easily, but no one writes code for difficult problem spaces, so no one builds up expertise in important subdomains for a generation. Then what will AI be trained on in, say, 20-30 years? Old code? Its own AI-generated code from vibe-coded projects? How will AI do new things well if it was trained on what people wrote previously and no one writes novel code themselves? AI seems pretty dependent on having a corpus of human-written code, so, for example, I'm not sure it will be able to learn how to write highly optimized code for some future ISA.
> Then what will AI be trained on in, say, 20-30 years? Old code? Its own AI-generated code from vibe-coded projects?
I've seen variations of this question since the first few weeks and months after the release of ChatGPT, and I haven't seen an answer from leading figures in the AI coding space. What's the general answer or point of view on this?
Is it hard to imagine that things will just stay the same for 20-30 years or longer? Here is an example of the B programming language from 1969, over 50 years ago:
printn(n, b) {
    extrn putchar;
    auto a;

    if (a = n/b)         /* assignment, not test for equality */
        printn(a, b);    /* recursive */
    putchar(n%b + '0');
}
You'd think we'd have a much better way of expressing the details of software 50 years later. But here we are, still using ASCII text, separated by curly braces.
I observed this myself at least 10 years ago. I was reflecting on what I had done in the approximately 30 years I had been programming at that time, and how little had fundamentally changed. We still programmed by sitting at a keyboard, entering text on a screen, running a compiler, etc. Some languages and methodologies had their moments in the sun and then faded, the internet made sharing code and accessing documentation and examples much easier, but the experience of programming had changed little since the 1980s.
I suspect a more general and much cleverer learning algorithm will emerge by then, one that needs less training data to reach a competent problem-solving state, even with dirty data: something able to discriminate between novel information and junk. Until then, I think there will be a quality decline after a few more years.
How will it emerge? In the past we've been told that the A(G)I will write itself, rapidly iterating itself into a superintelligence that handily solves all our current and future problems, but it's beginning to look like a chicken-and-egg scenario.
Living systems were able to brute-force their way to the human brain, but it took billions of years and access to parallel processes that make the entire collective history of human computation seem like a mote next to a star.
What novel spark do you see accelerating this process to such a hyperbolic extreme?
I would imagine a trajectory similar to AlphaGo's: it starts out trying to replicate humans and at a certain point pivots to entirely self-play. I think the main hurdle with LLMs is that there isn't a strong reward target to go after. The current target seems to be simply replicating humans, but to go beyond that they will need a different target.
I agree in general, but defining an appropriate target seems intractable at the moment. Perhaps it is something the AIs will have to define for themselves.
I think real intelligences are working with myriad such targets, but an adversarial environment seems essential for developing intelligence along this axis.
I do think if there's a path to AGI from current efforts it will be through game play, but that could just be the impressionable kid who watched WarGames in the '80s speaking through me.
It took a billion years to get to the tool-making stage, then less than a thousandth of that time to make CPUs, then a thousandth of that time again to make LLMs. We are at a parabolic extreme.
This is begging the question. What evidence is there that this is all the same "stuff" driving towards some future apex? What does it mean to "get to" the tool making state outside of a Civ-style video game?
>So let's say we have a generation of people who use AI tools to code and no one really thinks hard about solving problems in niche spaces.
I don't think we need to wait a generation, either. It was probably part of their personality already, but a group of developers at my job seems to have just given up on thinking hard about, or thinking through, difficult problems. It's insane to witness.
Exactly. Prose, code, visual arts, etc. AI material drowns out human material. AI tools disincentivize understanding and skill development and novelty ("outside the training distribution"). Intellectual property is no longer protected: what you publish becomes de facto anonymous common property.
Long-term, this will do enormous damage to society and our species.
The solution is that you declare war and attack the enemy with a stream of slop training data ("poison"). You inject vast quantities of high-quality poison (inexpensive to generate but expensive to detect) into the intakes of the enemy engine.
We create poisoned git repos on every hosting platform. Every day we feed two gigabytes of poison to web crawlers via dozens of proxy sites. Our goal is a terabyte per day by the end of this year. We fill the corners of social media with poison snippets.
This will happen regardless. LLMs are already ingesting their own output. At the point where AI output becomes the majority of internet content, interesting things will happen. Presumably the AI companies will put lots of effort into finding good training data, and ironically that will probably be easier for code than anything else, since there are compilers and linters to lean on.
I've thought about this and wondered if this current moment is actually peak AI usefulness: the SNR is high, but once training data becomes polluted with its own slop, things could start getting worse, not better.
The lesson that I am taking away from AI companies (and their billionaire investors and founders), is that property theft is perfectly fine. Which is a _goofy_ position to have, if you are a billionaire, or even a millionaire. Like, if property theft is perfectly acceptable, and if they own most of the property (intellectual or otherwise), then there can only be _upside_ for less fortunate people like us.
The implicit motto of this class of hyper-wealthy people is: "it's not yours if you cannot keep it". Well, game on.
(There are 56.5e6 millionaires, and 3e3 billionaires -- making them 0.7% of the global population. They are outnumbered 141.6 to 1. And they seem to reside and physically congregate in a handful of places around the world. They probably wouldn't even notice that their property is being stolen, and even if they did, a simple cycle of theft and recovery would probably drive them into debt).
There's no evidence; it's just speculation. Microsoft has contracts with the same exact orgs, and so does AWS. Anyone with a little bit of common sense would know that. Palantir's CEO and Peter Thiel are not particularly well liked, so presumably people are speculating without any evidence at all. Could there be an issue? Yes, absolutely, but not just with Palantir; let's not let facts get in the way of a narrative.

In any event, I think the question of data being shared with the government could be a problem even if the software were made in-house and then open-sourced by the hospital (which is itself a ridiculous expectation, but this is HN), because the hospital itself could provide the data to the government. At this point someone might say "no, that won't happen because hospitals are nice and Palantir is evil" or "there are laws," but I am not sure why Palantir would be exempt, unless anyone has proof or anything besides a vibes-based argument; but then we're back to square one.
TikTok said in a statement that glitches on the app were due to a power outage at a US data center. As a result, a spokesperson for TikTok US Joint Venture told CNN, it’s taking longer for videos to be uploaded and recommended to other users. The tech issues were “unrelated to last week’s news,” TikTok said.
There was a major storm over the weekend. I think the issues have been resolved. Is it still the case that anti-ICE videos can't be uploaded? Seems easy to test.
It seems to me that the hard part to test would be whether videos are allowed to circulate the same way they would if they were about a different subject. Upload status seems like a red herring.
Much like how even relatively innocuous comments on many subreddits will just be shadow-deleted.
If someone demonstrates they are liars, there is a reasonable default reaction. Most people can ignore what they say, because liars made the conscious decision not to be credible.
It is an incredible time-saving productivity hack to disregard what habitual liars say.
This is a reaction to reputation, which is sometimes reasonable. But reasonable people also confirm their suspicions with evidence regardless of the situation.
Go ahead and save your time, but remember your reputation is at risk as well, and I would consider you unreasonable.
Sadly, there are not enough minutes in a day to verify all information thrown at me. So taking shortcuts feels necessary to me. Sure, this should be contingent on new information and developments.
> the ratio of productivity between the mean and the max engineer? It's quite possible that this grows *a lot*
I have a professor who has researched auto-generated code for decades. About six months ago he told me he didn't think AI would make humans obsolete; like other incremental tools over the years, it would just make good coders even better than other coders. He also said it would probably come with its share of disappointments and never be fully autonomous. Some of what he said was a critique of AI, and some of it was just pointing out that it's very difficult to have perfect code/specs.
Adding on to this, I think it's bizarre how you need a phone to navigate life now, and corporations just assume you have one: using QR codes to gain entry to things, for example. It's weird to think about how we all carry around this expensive computer and think nothing of it. It's like how we laugh about people in the Middle Ages carrying a personal knife for eating because hosts wouldn't supply one. The knives even came in fancier, more expensive versions for the rich, kind of like the Android/iPhone divide. I wonder if historians will talk about these phones in the future.
> Adding on to this I think it's bizarre how you need to have a phone to navigate life now and corporations just assume you have one.
I have a VoIP phone line from 2004. I was told yesterday that it was showing up as "Spam" on someone's phone. Sigh.
Also, for 2FA, some services allow phone calls. So I put in the VoIP line and not my cell phone. At some point, any given service switches to text-only for 2FA - but they don't notify me in advance and I'm locked out for good.
Even worse, some 2FA that allow phone calls just will not call my VoIP line. No warnings, etc. But if I put my mobile number it calls.
And QR codes for menus? I try not to eat at such establishments. Paper is cheap. I don't need a fancy menu. If you change your prices, just print new ones.
I don't think it's wrong to go back that far. I think SV is what it is because of those companies, but also the schools, some local charm and quirks, etc., and the same reasoning applies there. The tech companies begot more tech companies, basically. Before Meta and Alphabet it was Microsoft and Yahoo, before MS it was Sun and Netscape, before that Oracle maybe, and the list keeps going back. Add in the hacker culture that existed in the Bay Area for a long time. It's a fair thing to point out.
Immigration to SV is probably a result of SV success not the other way around. Likewise, why would immigrants even come here if there was nothing for them before they arrived? I think the adulation of immigration is historical revisionism. Sure, immigrants now contribute but they did not build SV.
> Sure, immigrants now contribute but they did not build SV.
"If you build it, they will come."
In the power-curve growth of SV fortunes, "home grown" second-, third-, fourth-generation and earlier immigrants certainly built the groundwork, drawing on education from schools founded on Oxbridge and other offshore inspirations; absolutely, as you say. All the same, more recent first-generation immigrants played a big part in inflating it sky high.
With no additional immigrants drawn to SV it's not hard to imagine SV stalling out at 1980s Microsoft levels, impressive but far short of where it is today.
I think in a discussion about the effect of immigration on the current state of an area, in this case Silicon Valley, you can totally reference its history if you are making a claim about a chain of events. If instead, you skip over 50 years of history which includes multiple generations of how the industry worked and multiple generations of immigration policy, to start talking about
> The highly selective immigration policy that prevailed from 1924-1965 is likely a key reason why so many Silicon Valley companies were founded by immigrants
then you are making a narrative that has nothing to do with the point, and I am unwilling to accept your framing.
I go on HN to read thoughtful non-partisan commentary but the general mood seems to be "everything is bad" in certain threads even if that contradicts a previous popular HN consensus.
I looked into founding a company in this space and steered straight back out of it because yes, by far and away the VAST majority of demand in the market of study tools for high/middle schoolers is cheating. Below that, parents are involved and there's a market there (but a bad one, because of double sales where you have to sell through the parent to the child even though those two actors have misaligned incentives).
That is interesting and kind of what I suspected anecdotally. I think it's unfortunate for people who aren't aware of all this. That is what I will say.
A cheat sheet could be a piece of paper you're allowed to bring to an exam. To make a proper cheat sheet you have to understand the material you're working with anyway so it usually doesn't help you.
Is there a way for someone on an H-1B to start a company in a roundabout way, by doing something like placing company shares into a trust and holding an unpaid board seat? Is that pushing luck? Not for me but for a friend I had plans to go into business with, but we're facing a chicken-and-egg problem until she gets a green card or changes her visa status.
Why don't they just start a company in the country where they are from or why don't you start a company with someone who is a citizen or has a green card?
The entire premise of your question is misaligned with the intent of the H-1B visa. Yes, everyone abuses its intent, but that isn't justification for more people to find more ways to abuse it. The abuse of that visa (and other visas) is why folks just want it abolished outright. I guess the purpose of a system is what it does, but it was sold to the American electorate as a way for companies to get access to talent they simply cannot find domestically.
Trying to use the H1-B to hire a very specific person instead of any person with the skillset needed for the role would be in contradiction with the labor market test (LMT) needed for PERM status.
An H-1B holder can only work for the employer on the I-129 petition. There are some forms of passive income allowed, but placing shares in a trust and holding an unpaid board seat just seems like an attempt to cheat the process, because ultimately the goal is for her to work for this startup. Doing what you're proposing puts a target on her back: anyone who is anti-H-1B can report her to USCIS and get her deported.
Moving home, working remotely and then applying for an L-1 seems like the correct approach here for what you're trying to do.
I am not sure if your questions were rhetorical, but I don't think you want them answered. I am happy to explain the situation, though. She did not take a visa to circumvent the law; she has been here for years, and we came up with this idea a few months ago. Working remotely isn't applicable here. So everything you said is true in a legal sense, but it's unlikely any of it happens, and now there is a very large fee for new applications, so moving back and reapplying is very high risk. The reality of startups is that at first they may not make enough money to support an H-1B, so we're not sure how to make enough money to get her one without already having done the work. I don't think this violates the intention of the H-1B; it just creates a situation where starting a startup is very murky while working at a large firm is fine. I am not sure the intention of the law was to bias in favor of the H-1B only benefiting large corporations.
There are ways for someone in H-1B status to start a company, and not in a roundabout way. The approach will depend in part on whether she will leave her current employer and get an H-1B through her startup, stay with her current employer and get a concurrent part-time H-1B through her startup, or just stay with her current employer and somehow work on her startup.
> or just stay with her current employer and somehow work on her startup.
The first two options make sense, but this latter option sounds like a risk. As I understand it, she can't earn any active income from this startup unless she has an I-129 for it. A share grant counts as income.
I mean, yeah you can work on a side project in your spare time that could become a business, but the moment employment and active income enters the picture that becomes something else.
It would be interesting to test that maybe by looking at the disability rate before and after the honor code was changed recently. If there was an increase in disabilities, it might be because other cheating options on exams were limited.
For those wondering, the honor code was changed to make all exams proctored, reportedly because of a number of academic dishonesty incidents.