Plus, how many shells can one individual open in a day? I'm doing that once per day on a good day, maybe twenty times if I have a lot of unplanned work on subprojects that needs to happen concurrently with my main task.
Sometimes hundreds per day. Tabs come and go here. People have different workflows.
Yesterday, just shy of 50, according to `login` entries in the system log.
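If you want to reproduce that kind of count, here's a rough sketch assuming macOS, where Terminal.app spawns login(1) per tab (adjust the query to whatever your system's logger is; each login can emit several log lines, so this overcounts a bit):

```sh
# count login(1) activity over the last day (macOS unified log)
log show --last 1d --predicate 'process == "login"' | wc -l
```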
I do launch multiple interpreters just to get a fancy colorful cowsay on each launch, which involves the fortune program, lolcat (via Ruby) and cowsay itself (via Perl). I should probably optimise that into a single C binary for better startup times! :)
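For reference, a minimal sketch of what that greeting presumably looks like, assuming all three tools are on PATH:

```sh
# fortune text, wrapped in a cow, rainbow-coloured
fortune | cowsay | lolcat

# measure the startup tax it adds:
time (fortune | cowsay | lolcat > /dev/null)
```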
The problem with long startups is that they break the flow. I live in the CLI. I open and close terminal windows all day long, sometimes just for 2-3 quick commands to check something. 100 new interactive shells a day is my guess. I already know the commands to run, my fingers are ready to type, they've already pressed the keys to spawn a new shell, and now they have to stop for 500 ms. Repeat those 100 interruptions every day and you get death by a thousand spoons.
I don't use oh-my-zsh, but on one laptop zsh took 600 ms to start. I narrowed it down to a strange "bug": adding even a single empty file to the custom fpath caused the slowdown. It bugged me so badly that I decided to try fish again, but this time for real. And this time it stuck with me. I like its architecture (defining functions to change its look and feel is a great idea!) and, of course, its 32 ms startup time.
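If anyone wants to chase this down on their own machine, the standard tooling makes it quick (timings will obviously vary):

```sh
# compare interactive startup times
time zsh -i -c exit
time fish -c exit

# zsh ships a profiler: put `zmodload zsh/zprof` at the top of
# ~/.zshrc and call `zprof` at the bottom to see where the time goes
```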
Like another commenter, I'm probably at 100s per day.
One reason is that I use i3 workspaces, each one dedicated to a different task, so I don't want to reuse a terminal that carries another context.
One issue with keeping a single shell is that the scrollback fills up with irrelevant stuff. If you run commands that take a while and print a lot of output, you want it to be as clean as possible: if I want the beginning of a command's output, I can scroll to the top without ending up in some unrelated output.
I also take quick notes in vim everywhere, often next to long-running commands, using tmux.
shadowsocks was the winner of a state-of-the-art survey I had to do at work. It addresses the "long-term statistical analysis will often reveal a VPN connection regardless of obfuscation and masking (and this approach can be cheaper to support than DPI by a stat)" comment.
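Client-side it's just a local SOCKS proxy; a hypothetical shadowsocks-libev invocation, with every value a placeholder:

```sh
# local SOCKS5 proxy on :1080, tunnelled to a shadowsocks server
ss-local -s example.com -p 8388 -k 'changeme' \
         -m chacha20-ietf-poly1305 -l 1080
```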
Someone should create a VPN protocol that pretends to be a command-and-control server exfiltrating US corporate secrets from hacked servers. The traffic pattern should be similar, and god forbid Xi blocks the real exfiltrations.
There's a script to update from Python 2 to Python 3, it's now the most-used language in the world, and they learned their lessons from the Python 2 to Python 3 migration. A Python 3 script is literally the most likely candidate to still be working/maintainable by someone else in 20 years.
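That script is presumably 2to3, which shipped with CPython for years (note it was removed from the standard distribution in 3.13, so on recent Pythons you'd reach for third-party tooling):

```sh
# convert a file in place: -w writes the changes, -n skips .bak backups
2to3 -w -n legacy_script.py
```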
Meanwhile, as a maintainer, I've been reviewing more than a dozen false-positive slop CVEs against my library, and not a single one found an actual issue. This article is probably going to make my situation worse.
Maybe, but the author is an experienced vulnerability analyst. Obviously, if you get a lot of people who have no experience with this, you may get a lot of sloppy, false reports.
But this poster actually understands the AI output and is able to find real issues (in this case, a use-after-free). From the article:
> Before I get into the technical details, the main takeaway from this post is this: with o3 LLMs have made a leap forward in their ability to reason about code, and if you work in vulnerability research you should start paying close attention. If you’re an expert-level vulnerability researcher or exploit developer the machines aren’t about to replace you. In fact, it is quite the opposite: they are now at a stage where they can make you significantly more efficient and effective.
Not even that. The author already knew the bug was there, and fed the LLM just the files related to the bug, along with an explanation of how the methods worked and where to search, and even then it found the bug in only 1 out of 100 runs.
There are two bugs in the article: one the author previously knew about and was trying to rediscover as an exploration, and a second one the author did not know about and stumbled into. The second bug is novel, and it is what makes the blog post interesting.
> I fear that LLMs have already fostered the first batch of developers who cannot function without it.
Playing the contrarian here, but I'm from a batch of developers that can't function without a compiler, and without an IDE and static analysis I'm at 10% of what I can normally do.
That's really curious: I've never felt all that empowered by an IDE or static analysis.
Sure, there's a huge jump from a line editor like `ed` to a screen editor like `vi` or `emacs`, but from there on it was diminishing returns, really (a good debugger was usually the next biggest benefit). I've also had the "pleasure" of having to use `echo`, `cat` and `sed` to edit complex code in a restricted, embedded environment, and while it made iterations slower, it was not that much slower than if I had a full IDE at my disposal.
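For anyone who hasn't had that particular pleasure, the loop looks something like this (file names invented for illustration):

```sh
cat -n main.c                              # read the code, with line numbers
sed -i 's/TIMEOUT 30/TIMEOUT 60/' main.c   # targeted in-place change
echo 'CFLAGS += -DDEBUG' >> Makefile       # append a line
```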
In general, if I am in a good mood (and thus not annoyed at having to do so many things "manually"), I am probably only 20% slower at coding things up than with my fully configured IDE, which translates to less than a 5% slowdown in actually delivering the thing I am working on.
I think there's a factor of speed there, not a factor of insight or knowledge. If all you have is `ed` and a printer, then I think most of your time will be spent with the printout. `vi` eliminates the printout and the tediousness of going back and forth.
Same with more advanced editors and IDEs. They help with the tediousness, which can hinder insight, but they do not supply insight if you do not have the foundation.
I've seen this comparison a few times already, but IMHO it's totally wrong.
A compiler translates _what you have already implemented_ into another machine-runnable language. There is an actual grammar that defines the rules. It does not generate new business logic or assumptions. You have already done the work and made all the decisions that needed critical thought; it's just being translated _instruction by instruction_. (BTW, you should check out how compilers work, it's fun.)
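You can watch that deterministic, grammar-driven translation yourself with any C compiler (the `-S` flag is standard; the exact assembly depends on your platform):

```sh
echo 'int add(int a, int b) { return a + b; }' > add.c
cc -S add.c    # stop after compilation and emit add.s
cat add.s      # same input, same rules, same output every time
```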
Using an LLM is more akin to copying from Stackoverflow than using a compiler/transpiler.
In the same way, I see org charts that put developers above AI managers, which are above AI developers. This is just smoke. You can't have LLMs generating thousands of lines of code independently. Unless you want a dumpster fire very quickly...
Yeah, OK. I was viewing AI as "a tool to help you code better", not as "you literally can't do anything without it generating everything for you". I could do some assembly if I really had to, but it would not be efficient at all. I wonder if there are actually "developers" who only prompt an LLM and don't understand anything in the output? They must be generating dumpster fires, as you said.
I've been hiring via LinkedIn recently and had to triage that stream of shit. 50% of candidates have no qualifications at all; then 25% are somewhat qualified for something, but not for the job at hand; then, among the qualified ones, some have left actual identifiable ChatGPT output in their own CV ("this job summary is short and insightful and will increase retention rate by 55%"); then, of those who actually took some care with their CV and/or cover letter, you realize their 'personal website' is an empty shell with fake projects and links to 2-second YouTube videos telling you the demo is coming soon... you get the idea.
If you match the job well, actually accomplished something at some point, and want the job, you should go for it even if LinkedIn showed you a high applicant count.
> Well, nobody will be able to reproduce their work (unless other people also publish fraudulent work from there)
In theory, yes; in practice, the original results identifying amyloid beta protein as the main cause of Alzheimer's were faked, and it wasn't caught for 16 years. A member of my family took meds based on it and died in the meantime.
> Many jurisdictions are downright hostile and obstructionist for any renewable investment.
Right, and as long as energy prices are not guaranteed, no one is going to invest massively. No one wants to be the one having to eat an $8,000/MWh price like in Texas in the winter. The 3-month mean volatility of energy prices in Europe is 350% (with peaks of more than 1500% in 2009 and 2021). For comparison, Bitcoin's volatility over the last 10 years is 54%.
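For what it's worth, I'm reading those figures as annualized standard deviation of log returns, the usual convention:

$$\sigma_{\text{ann}} = \sqrt{N}\cdot\mathrm{stdev}(r_i), \qquad r_i = \ln(P_i / P_{i-1})$$

where N is the number of trading periods per year.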
Could you elaborate on your point a bit? I think you are making a good point but I am not fully grokking it. Are you looking at the investment at the utility level or the individual level? Wouldn't high variability provide an "insurance justification" for individuals to push solar/battery or similar? Who eats the $8,000/MWh currently that wouldn't be able to justify renewables on just that basis?