
Reminds me of a recent discussion we had among Stack Overflow moderators:

> “Think about it,” he continued. “Who discovers the edge cases the docs don’t mention? Who answers the questions that haven’t been asked before? It can’t be people trained only to repeat canonical answers. Somewhere, it has to stop. Somewhere, someone has to think.”

> “Yes,” said the Moderator.

> He leaned back. For a moment, restlessness flickered in his eyes.

> “So why wasn’t I told this at the start?”

> “If we told everyone,” said the Moderator gently, “we’d destroy the system. Most contributors must believe the goal is to fix their CRUD apps. They need closure. They need certainty. They need to get to be a Registered Something—Frontend, Backend, DevOps, Full stack. Only someone who suffered through the abuse of another moderator closing their novel question as a duplicate can be trusted to put in enough effort to make an actual contribution.”


What does “destroy the system” mean here?

The metaphor doesn't match very well here, because Stack Overflow isn't selling new tapes at a premium but giving them away for free, and reading a Stack Overflow answer is harder than asking an LLM.

Could be that the AI companies feeding on Stack Overflow are the ones selling tapes at a premium, and if they told you it's only supervised learning from a lot of human experts, it would destroy the nice bubble they have going on around AGI.

Could also be that you have to do the actual theory / practice / correction work for your basal ganglia to "know" something without thinking about it (i.e. to learn), contrary to the novel, where the knowledge is inserted directly into your brain. If everyone uses AI to lazily skip the "practice" phase, then there's no one left to make the AI evolve. And the world is not a Go board where the AI can play against itself indefinitely.


If you have to ask, you aren't ready to know the answer. There are some things you have to figure out on your own. This is one of them.

Use it to train an "AI"? :)

Probably not the OP's intent, though. I suspect there are a lot of ways to destroy the system.


A half conversation is a lot more disruptive because your brain tries to fill in the missing half of the information.

This comment chain is talking about people using speakerphone, though, meaning they hear both sides of the conversation.

In theory yes, but in practice they usually have the speaker volume far higher than their own voice, so we still only hear one side clearly.

I think the high distractibility is a trifecta of volume, the non-naturalness of the sound (compression etc., making it feel out of place in the space), and this point.


Plus, how many shells can one individual open in a day? I'm doing that once per day on a good day, maybe twenty if I have a lot of unplanned work on subprojects that needs to be done concurrently with my main task.

Sometimes hundreds per day. Tabs come and go here. People have different workflows.

Yesterday, just over 50, according to entries from `login` in the system log.
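(Counting these is OS-dependent; a rough sketch for a macOS-style unified log is below, and on Linux you'd grep the auth log or journal instead. The time window and predicate are just illustrative, and one login can emit several log lines, so treat the count as approximate.)

```sh
# Approximate count of login/shell sessions started in the last day (macOS).
log show --last 1d --predicate 'process == "login"' --style syslog | wc -l
```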

I do launch multiple interpreters just to get a fancy colorful cowsay on each launch, which involves the fortune program, lolcat (via Ruby) and cowsay itself (via Perl). I probably should optimise that into a single C binary for better startup times! :)
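For the curious, that kind of greeting is just a pipeline in the shell rc file. A minimal sketch, assuming fortune, cowsay and lolcat are installed (not the exact setup described above):

```sh
# ~/.zshrc or ~/.bashrc -- only greet interactive shells, so scripts stay quiet.
case $- in
  *i*) fortune | cowsay | lolcat ;;
esac
```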


The problem with long startups is that they break the flow. I live in the CLI. I open and close terminal windows all day long, sometimes just for 2-3 quick commands to check something. 100 new interactive shells a day is my guess. I already know the commands to run, my fingers are ready to type, they already pressed the keys to spawn a new shell, and now they have to stop for 500 ms. Repeat these 100 interruptions every day and you get death by 1000 spoons.

I don't use oh my zsh, but on one laptop zsh took 600ms to start. I narrowed it down to a strange "bug": adding even a single empty file to the custom fpath caused the slowdown. It bugged me so bad that I decided to try fish again, but this time for real. And this time it stuck with me. I like its architecture (defining functions to change its look-and-feel - great idea!) and of course its 32 ms startup time.
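For anyone who wants to reproduce numbers like these, a rough way to measure (exact figures will vary by machine and config):

```sh
# Time a full interactive startup that exits immediately.
time zsh -i -c exit
time fish -i -c exit
# For zsh, the bundled profiler shows which part of the config is slow:
# put `zmodload zsh/zprof` at the top of ~/.zshrc and `zprof` at the bottom.
```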


Like another commenter, I'm probably at 100s per day.

One reason is that I use i3 workspaces, with each workspace dedicated to a different task, so I don't want to reuse a terminal that carries another context.

One issue with keeping a single shell is that the scrollback fills up with irrelevant stuff. If you run commands that take a while and print a lot of output, you want it to be as clean as possible. If I want the beginning of a command's output, I can scroll to the top without ending up in some unrelated output.

I also take quick notes in vim everywhere, often next to long-running commands, using tmux.


Shadowsocks was the winner of the state-of-the-art review I had to do at work. It addresses the "long-term statistical analysis will often reveal a VPN connection regardless of obfuscation and masking (and this approach can be cheaper to support than DPI by a stat)" comment.


Someone should create a VPN protocol that pretends to be a command-and-control server exfiltrating US corporate secrets from hacked servers. The traffic pattern should be similar, and god forbid Xi blocks the real exfiltrations.


There's a script to update from Python 2 to Python 3, Python is now the most used language in the world, and they learned their lessons from the Python 2 to Python 3 migration. A Python 3 script is literally the most likely candidate to still be working / maintainable by someone else in 20 years.
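(Presumably the script meant here is 2to3, which shipped with CPython for years, though it has since been deprecated upstream. Usage is roughly:)

```sh
# Preview the proposed changes as a diff, without modifying anything.
2to3 old_script.py
# Apply the fixes in place; a .bak backup is written next to each file.
2to3 -w old_script.py
```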


They are actively removing modules from the base install.


Meanwhile, as a maintainer, I've been reviewing more than a dozen false-positive slop CVEs against my library, and not a single one found an actual issue. This article is probably going to make my situation worse.


Maybe, but the author is an experienced vulnerability analyst. Obviously if you get a lot of people who have no experience with this you may get a lot of sloppy, false reports.

But this poster actually understands the AI output and is able to find real issues (in this case, use-after-free). From the article:

> Before I get into the technical details, the main takeaway from this post is this: with o3 LLMs have made a leap forward in their ability to reason about code, and if you work in vulnerability research you should start paying close attention. If you’re an expert-level vulnerability researcher or exploit developer the machines aren’t about to replace you. In fact, it is quite the opposite: they are now at a stage where they can make you significantly more efficient and effective.


Not even that. The author already knew the bug was there, and fed the LLM just the files related to the bug, with an explanation of how the methods worked and where to search, and even then, only 1 out of 100 times did it find the bug.


There are two bugs in the article: one the author previously knew about and was trying to rediscover as an exploration, and a second one the author did not know about and stumbled into. The second bug is novel, and is what makes the blog post interesting.


Probably not. o3 is not free to use.


Who says you need to use a top model to produce cybersecurity slop? Did this person use o3?

https://hackerone.com/reports/3125832


> There is probably more JS written than any other language by orders of magnitude.

And the quantity of JS code available/discoverable when scraping the web is larger by an order of magnitude than that of any other language.


> I fear that LLMs have already fostered the first batch of developers who cannot function without it.

Playing the contrarian here, but I'm from a batch of developers that can't function without a compiler, and without an IDE and static analysis I'm at 10% of what I can normally do.


That's really curious: I've never felt that much empowered by an IDE or static analysis.

Sure, there's a huge jump from a line editor like `ed` to a screen editor like `vi` or `emacs`, but from there on it was diminishing returns really (a good debugger was usually the next biggest benefit). I've also had the "pleasure" of having to use `echo`, `cat` and `sed` to edit complex code in a restricted, embedded environment, and while it made iterations slower, it was not that much slower than if I had a full IDE at my disposal.

In general, if I am in a good mood (and thus not annoyed at having to do so many things "manually"), I am probably only 20% slower at coding things up than with my fully configured IDE, which translates to less than a 5% slowdown in actually delivering the thing I am working on.


I think there’s a factor of speed there, not a factor of insight or knowledge. If all you have is ‘ed’ and a printer, then I think most of your time will be spent with the printout. ‘vi’ eliminates the printout and the tediousness of going back and forth.

Same with more advanced editors and IDEs. They help with tediousness, which can hinder insight, but they do not provide insight if you do not have the foundation.


Apples and oranges (or stochastic vs deterministic)


Look inside a compiler, you'll find some AI.


You won't find an LLM.

What would you consider AI in it?


I've seen this comparison a few times already, but IMHO it's totally wrong.

A compiler translates _what you have already implemented_ into another computer-runnable language. There is an actual grammar that defines the rules. It does not generate new business logic or assumptions. You have already done the work and made all the decisions that needed critical thought; it's just being translated _instruction by instruction_. (BTW, you should check how compilers work, it's fun.)

Using an LLM is more akin to copying from Stackoverflow than using a compiler/transpiler.

In the same way, I see org charts that put developers above AI managers, which are above AI developers. This is just smoke. You can't have LLMs generating thousands of lines of code independently. Unless you want a dumpster fire very quickly...


Yeah, OK. I was viewing AI as "a tool to help you code better", not as "you literally can't do anything without it generating everything for you". I could do some assembly if I really had to, but it would not be efficient at all. I wonder if there are actually "developers" who are only prompting an LLM and not understanding anything in the output? Must be generating dumpster fires, as you said.



Code coverage tools let you mark the defensive code with an exclusion pragma, which will appear reasonable to most reviewers?


I've hired through LinkedIn recently and had to triage that stream of shit. 50% of candidates have no qualifications at all; then 25% are somewhat qualified for something, but not for the job at hand; then, among those qualified, there are some who left identifiable ChatGPT output in their own CV ("this job summary is short and insightful and will increase retention rate by 55%"); then, of those who actually took some care with their CV and/or cover letter, you realize their 'personal website' is an empty shell with fake projects and links to a 2s YouTube video telling you the demo is coming soon... you get the idea.

If you match the job well and want to get it, and you actually accomplished something at some point, you should try for it even if LinkedIn shows you a high applicant count.

