Hacker News | deathanatos's comments

You're not wrong, but at my place, our main repository does not permit cloning into a directory with spaces in it.

Three factors conspire to make a bug:

  1. Someone decides to use a space
  2. We use Python
  3. macOS
Say you clone into a directory with a space in it. We use Python, so our scripts are scripts in the Unix sense. (Python here is replaceable with any scripting language that uses a shebang, so long as the rest of what follows holds.) Some of our Python dependencies install executables; those necessarily start with a shebang:

  #!/usr/bin/env python3
Note that space.

Since we use Python virtualenvs, that shebang becomes:

  #!/home/bob/src/repo/.venv/bin/python3
But … now what if the dir has a space?

  #!/home/bob/src/repo with a space/.venv/bin/python3
Those look like arguments, now, to a shebang. Shebangs have no escaping mechanism.

As I discovered when I ran into this, the Python tooling actually checks for this! It will instead emit a polyglot:

  #!/bin/bash

  # <what follows in a bash/python polyglot>
  # the bash will find the right Python interpreter, and then re-exec this
  # script using that interpreter. The Python will skip the bash portion,
  # b/c of cleverness in the polyglot.
Which is really quite clever, IMO. But … it hits (3.). It execs bash, and worse, it is macOS's bash, and macOS's bash will corrupt^W remove for your safety! certain environment variables from the environment.
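A minimal sketch of the trick (the same idea as pip's wrapper, not its exact text): to sh, '' + 'exec' concatenates to the word exec, and the quoted interpreter path makes the spaces safe; to Python, everything after the shebang parses as one triple-quoted string, so execution falls through. You can reproduce it like this:

```python
import os
import subprocess
import sys
import tempfile

# Sketch of the sh/Python polyglot; the directory name and file names
# here are invented for the demo, and sys.executable stands in for the
# venv's interpreter.
POLYGLOT = """#!/bin/sh
'''exec' "{interp}" "$0" "$@"
' '''
print("ok")
"""

with tempfile.TemporaryDirectory() as tmp:
    # deliberately put a space in the directory name
    repo = os.path.join(tmp, "repo with a space")
    os.mkdir(repo)
    script = os.path.join(repo, "tool")
    with open(script, "w") as f:
        f.write(POLYGLOT.format(interp=sys.executable))
    os.chmod(script, 0o755)
    # /bin/sh runs line 2, which re-execs the script under Python;
    # Python then skips the sh lines as a string literal
    result = subprocess.run([script], capture_output=True, text=True)

print(result.stdout.strip())
```

The key is that sh does its word-splitting *after* quote processing, so the spaced path survives, while the kernel's shebang parsing (which has no quoting) only ever sees the short `#!/bin/sh`.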

Took me forever to figure out what was going on. So yeah … spaces in paths. Can't recommend them. Stuff breaks, and it breaks in weird and hard to debug ways.


If all of your scripts run in the same venv (for a given user), can you inject that into the PATH and rely on env just finding the right interpreter?

I suppose it would also need env to be able to handle paths that have spaces in them.


What a headache!

My practical view is to avoid spaces in directories and filenames, but to write scripts that handle them just fine (using Bash - I'm guilty of using it when saner people would be using a proper language).

My ideological view is that Unix/POSIX filenames are allowed to contain any byte except NUL and the slash, so tools should respect that and handle such files/dirs correctly.

I suppose for your usage, it'd be better to put the virtualenv directory into your path and then use #!/usr/bin/env python
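That works because PATH is colon-separated, so an entry containing spaces is unambiguous even though a shebang line is not. A quick sketch (faking the venv's bin dir with a symlink to the current interpreter; all paths here are invented for the demo):

```python
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    # a stand-in "venv" bin directory, with a space in its path
    bindir = os.path.join(tmp, "repo with a space", ".venv", "bin")
    os.makedirs(bindir)
    os.symlink(sys.executable, os.path.join(bindir, "python3"))

    # a script that relies on env, not on a hardcoded venv path
    script = os.path.join(tmp, "tool")
    with open(script, "w") as f:
        f.write("#!/usr/bin/env python3\nprint('ok')\n")
    os.chmod(script, 0o755)

    # prepend the spaced bin dir to PATH; env resolves python3 via it
    env = dict(os.environ,
               PATH=bindir + os.pathsep + os.environ.get("PATH", ""))
    result = subprocess.run([script], capture_output=True, text=True,
                            env=env)

print(result.stdout.strip())
```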


For the BSDs and Linux, shebangs are interpreted by the kernel directly, not by the shell. /usr/bin/env and /bin/sh exist on effectively every Unix (POSIX doesn't actually pin down those paths, but in practice they're universal), so your solution is the correct one. Anything else is fragile.

These are part of the rituals of learning how a system works, in the same way interns get tripped up at first when they discover ^S will hang an xterm, until ^Q frees it. If you're aware of the history of it, it makes perfect sense. Unix has a personality, and in this case the kernel needs to decide what executable to run before any shell is involved, so it deliberately avoids the complexity of quoting rules.
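You can watch the kernel do that splitting. A small repro (assuming the Linux/BSD rule of taking everything up to the first whitespace as the interpreter path; directory names are invented):

```python
import os
import subprocess
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    spaced = os.path.join(tmp, "dir with space")
    os.mkdir(spaced)
    # a perfectly good interpreter, in a path with a space
    interp = os.path.join(spaced, "sh")
    os.symlink("/bin/sh", interp)

    script = os.path.join(tmp, "demo")
    with open(script, "w") as f:
        f.write("#!%s\necho ok\n" % interp)
    os.chmod(script, 0o755)

    try:
        subprocess.run([script], capture_output=True, text=True)
        outcome = "ran"
    except OSError:
        # the kernel looked for ".../dir" (the token before the first
        # space) and found nothing
        outcome = "ENOENT: kernel split the shebang at the space"

print(outcome)
```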

I'd give this a try, works with any language:

  #!/usr/bin/env -S "/path/with spaces/my interpreter" --flag1 --flag2
If my env didn't have -S support, I might consider a separate launch script like:

  #!/bin/sh
  exec "/path/with spaces/my interpreter" "$0" "$@"
But most decent languages seem to have some way around the issue.
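A quick check that the -S trick above actually handles a spaced path (this assumes an env with -S support, e.g. GNU coreutils >= 8.30 or macOS/BSD env; all paths are invented for the demo):

```python
import os
import subprocess
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    spaced = os.path.join(tmp, "path with spaces")
    os.mkdir(spaced)
    # the "interpreter" is just a symlink to /bin/sh, in a spaced dir
    interp = os.path.join(spaced, "my interpreter")
    os.symlink("/bin/sh", interp)

    script = os.path.join(tmp, "demo")
    with open(script, "w") as f:
        # env -S splits its single argument itself, honoring quotes,
        # which the kernel's shebang parsing never does
        f.write('#!/usr/bin/env -S "%s"\necho ok\n' % interp)
    os.chmod(script, 0o755)

    r = subprocess.run([script], capture_output=True, text=True)

print(r.stdout.strip())
```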

Python

  #!/bin/sh
  """:"
  exec "/path/with spaces/my interpreter" "$0" "$@"
  ":"""
  # Python starts here
  print("ok")
Ruby

  #!/bin/sh
  exec "/path/with spaces/ruby" -x "$0" "$@"
  #!ruby
  puts "ok"
Node.js

  #!/bin/sh
  /* 2>/dev/null
  exec "/path/with spaces/node" "$0" "$@"
  */
  console.log("ok");
Perl

  #!/bin/sh
  exec "/path/with spaces/perl" -x "$0" "$@"
  #!perl
  print "ok\n";
Common Lisp (SBCL) / Scheme (e.g. Guile)

  #!/bin/sh
  #|
  exec "/path/with spaces/sbcl" --script "$0" "$@"
  |#
  (format t "ok~%")
C

  #!/bin/sh
  #if 0
  exec "/path/with spaces/tcc" -run "$0" "$@"
  #endif
  
  #include <stdio.h>
  
  int main(int argc, char **argv)
  {
      puts("ok");
      return 0;
  }
Racket

  #!/bin/sh
  #|
  exec "/path/with spaces/racket" "$0" "$@"
  |#
  #lang racket
  (displayln "ok")
Haskell

  #!/bin/sh
  #if 0
  exec "/path/with spaces/runghc" -cpp "$0" "$@"
  #endif
  
  main :: IO ()
  main = putStrLn "ok"
OCaml (needs bash process substitution)

  #!/usr/bin/env bash
  exec "/path/with spaces/ocaml" -no-version /dev/fd/3 "$@" 3< <(tail -n +3 "$0")
  print_endline "ok";;

This is exactly the use case I just tried, and not only does one get a non-transparent checkerboard, the squares are inconsistent & jumbled.

Example: https://gemini.google.com/share/36d66cad1764


The output from Nano Banana, even when it is ostensibly drawing "single color" shaded areas, is so jittery that it can be challenging to threshold it.

Nano Banana Pro is better at that type of thing, so I suspect Nano Banana 2 will be sufficient.

As the old saying goes, "emacs is an operating system lacking only a decent text editor".

Not so. Evil mode is a great text editor.

Even beyond/disregarding Project 2025, tariffs were a well-known part of the GOP platform in 2024; they were even brought up and discussed at the presidential debate. The Harris platform called them a tax at the time, to make quite clear to voters who, in the end, would bear the cost, and the Trump platform equivocated on who would pay to distract from the fact that Harris was right.

Even if you knew nothing of Project 2025 (somehow), you were warned.


On top of that, you have news outlets and educated people not being clear about what tariffs are. See, from the article:

He has long argued tariffs boost American manufacturing - but many in the business community, as well as Trump's political adversaries, say the costs are passed on to consumers

It’s reported as if someone still needs to figure out who pays the tariffs in the end. I’m aware that tariffs are a lever to potentially move buying behavior and give incentives to move production locally. But in this instance, and in how it was implemented, it’s clear who is paying for it.


“ Even beyond/disregarding Project 2025, tariffs were a well-known part of the GOP platform in 2024;”

The tariff stuff is just a variation of the republican dream to replace income tax with a sales tax. Big tax cut for higher incomes while raising taxes for lower incomes.


I have a pair of men's jeans. If I lay them flat, while buttoned, and measure the width of the cloth with a cloth measure, I get 16.5", so roughly a 33" circumference. They're a 32"x34" size pair, … so that's basically spot on.

Note that, at least AIUI, the measurement printed on the pair is the wearer's waist measurement, so we thus expect the measured circumference of the top of the jeans to be slightly wider, since men's jeans are not intended to sit at the waist.


The outer measurement will also be larger than the inner. If your waist measures precisely 32", a 32" outer-measurement waistband (sans some stretch) will be too snug.

While I don't disagree, the fabric is a few millimeters thick. I think there's more error in my measurement with a cloth measure, with the jeans laid on the bed, than in the inner vs. outer circumference.

Do we know that? I've written "dead" code. Its point was to communicate structure or intent, but it was also still dead. This pattern, in one form or another, crops up a lot IME (in multiple languages, even, with varying abilities to optimize it):

  if condition that is "always" false:
    abort with message detailing the circumstances
That `if` is "dead", in the sense that the condition is always false. But "dead" sometimes is just a proof — or, if I'm not rigorous enough, an assumption — in my head. If the compiler can prove the same proof I have in my head, then the dead code is eliminated. If it can't, well, presumably it is left in the binary, either to never be executed, or to be executed in the case that the proof in my head is wrong.
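Concretely, a hypothetical instance of the pattern (Python, names invented): the guard is "dead" under the invariant the caller establishes, but it stays in the code as a proof-check rather than trusting my head.

```python
def mean_rate(successes, attempts):
    # Invariant from the calling pipeline: attempts is positive.
    # This condition is "always" false -- unless the proof in my
    # head is wrong, in which case we want a loud diagnostic, not
    # a silent division error further down.
    if attempts <= 0:
        raise AssertionError(f"impossible: attempts={attempts!r}")
    return successes / attempts

print(mean_rate(3, 4))
```

A compiler (or JIT) that can prove the invariant may drop the branch; one that can't leaves it in, never to run unless the assumption breaks.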

What about assertions that are meant to detect bad hardware? I'd think that's not too uncommon, particularly in shops building their own hardware. Noise on the bus, improper termination, ESD, a dirty clock signal, etc. -- there are a million reasons why a bit might flip. I wouldn't want the compiler to optimize "obviously wrong" code out any more than empty loops.

I think if you're in a language that's doing constant-propagation optimizations, you work around that in one of two ways:

1. you drop down to assembly.

2. you use functions that are purpose built to be sequence points the optimizer won't optimize through. E.g., in Rust, for the case you mention, `read_volatile`.

In either case, this gives the human the same benefit the code is giving the optimizer: an explicit indication that this code that might appear to be doing nothing isn't.


Some conditions depend strictly on inputs and the compiler can't reason much about them, and the developers can't be sure about what their users will do. So that pattern is common. It's a sibling of assertions.

There are even languages with a mandatory else branch.


> undefined behavior makes debugging intractable:

By their own admission, the compiler warns about the UB. "-Wanal"¹, as some call it, makes it an error. Under UBSan the program aborts with:

  code.cpp:4:6: runtime error: execution reached the end of a value-returning function without returning a value
… "intractable"?

¹a humorous name for -Wextra -Wall -Werror


Not everybody has full control over their environment.

The -Werror flag is not used religiously even for building, e.g., the Linux kernel, and -Wextra can introduce a lot of extraneous garbage.

This will often make it easier (though still difficult) to winnow the program down to a smaller example, as that person did, rather than to enable everything and spend weeks debugging stuff that isn't the actual problem.


Yes, this is the funny thing. People do not want to spend time using a stricter language as already supported by C compilers via compiler flags, because "it is a waste of time", while others argue that we need to switch to much stricter languages. Both positions cannot be true at the same time.

> Nobody at this point disagrees we’re going to achieve AGI this century.

Nobody. Nobody disagrees, there is zero disagreement, there is no war in Ba Sing Se.

> 100% of today’s SWE tasks are done by the models.

Thank God, maybe I can go lie in the sun then instead of having to solve everyone's problems with ancient tech that I wonder why humanity is even still using.

Oh, no? I'm still untying corporate Gordian knots?

> There is no reason why a developer at a large enterprise should not be adopting Claude Code as quickly as an individual developer or developer at a startup.

My company tried this, then quickly stopped: $$$


Can you please make your substantive points without snark? We're trying for a quite different kind of discussion here. This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.

You may not owe AGI enthusiasts better, but you owe this community better if you're participating in it.


(Since I cannot edit the post.)

These posts are so tiring. The statement is an outright and blatant lie, because it's grift. The grifter wants to silence dissent by rendering it "non-existent", so that the grift can take the position of being a foregone conclusion. There is no dissent. The statement is outrageous, given the obvious amount of dissent in the comments, and the positive reaction of my fellow commenters to it. "AI built a browser from scratch." It did not. "AI built a compiler." It can't compile hello world. "AGI is coming & nobody disagrees." But the truth takes its time getting its shoes on while a lie already spread across the world.

It's doubly tiring since I (and, I suspect, many here) are having AI stuffed down our gullets by our respective management chains. Any honest evaluation of AI comes to the result that it's nowhere near capable, routinely misses the mark, and probably takes more time to verify its answers than it saves. But I suspect many people are just skipping the verification step.

& it's disappointing to see low-quality articles like this make it, time and again, and it feels like thoughtful discussion no longer moves minds these days.

I'll try to express this without the snark going forward, though.


> Any honest evaluation of AI comes to the result that it's nowhere near capable, routinely misses the mark, and probably takes more time to verify its answer than it does to use.

Before the era of code reviews as a best practice, I was surprised to learn that many developers I worked with knowingly checked in partially or completely broken features and code. That's what AI is, when I've used it on large, complex applications so far.


Agreed. While there has been a lot of progress with AI, the full-throated bleating of its superlatives is tiresome and often verges on outright falsehoods.

That said, Dario is orders of magnitude better than other AI tech ceos who are outright bullshitters/liars. He generally makes/raises good points which are worth thinking over.


> Nobody disagrees, there is zero disagreement, there is no war in Ba Sing Se.

This captures my chief irk over these sorts of "interviews" and AI boosterism quite nicely.

Assume they're being 100% honest that they genuinely believe nobody disagrees with their statement. That leaves one of two possible outcomes:

1) They have not ingested data from beyond their narrow echo chamber that could challenge their perceptions, revealing an irresponsible, nay, negligent amount of ignorance for people in positions of authority or power

OR

2) They do not see their opponents as people.

Like, that's it. They're either ignorant or they view their opposition as subhuman. There is no gray area here, and it's why I get riled up when they're allowed to speak unchallenged at length like this. Genuinely good ideas don't need this much defense, and genuinely useful technologies don't need to be forced down throats.


option 3: reject the premise that they're being 100% honest

this third option seems like the most reasonable one here? the way you worded this makes it seem like there are only those two options, to reach your absurd conclusion

> like thats it

> There is no gray area here

re-examine your assumptions


...did you just skip the first part where I literally preface my argument with this line?

> Assume they're being 100% honest that they genuinely believe nobody disagrees with their statement.

That's the core assumption. It's meant to give them the complete benefit of the doubt, and to show that doing so means they're either ignorant or they genuinely don't see their opponents as people.

Obviously they're being dishonest little shits, but calling that out point-blank is hostile and results in blind dismissal of the toxicity of their position. Asking someone to complete the thought experiment ("They're behaving honestly, therefore...") is the entire exercise.


Yeah, I read that part. That's the part that's wrong.


To be honest, I kind of think it's a combination of both #2 and #3.

They know they're lying. But they also believe, and they want everyone else to believe, that anyone who disagrees with them is subhuman, inconsequential.


I think you're being uncharitable towards option 2. When a physicist says "nobody disagrees that perpetual motion machines are impossible", are they saying that Jimbo who thinks he's built one in his garage is subhuman? Of course not. What they mean is that all experts who've seriously considered the issue agree, and they see so little substance in non-expert objections that it's not worth engaging.


> 2) They do not see their opponents as people.

> Like, that's it. They're either ignorant or they view their opposition as subhuman.

I'm going to go a bit off topic, but tech people often just inhale sci-fi, and I think we ought to reckon the problems with that, especially when tech people get into position of power.

Take Dune, for instance. Everyone knows Vladimir Harkonnen is a bad guy, but even the good-guy Atreides seem to spend their time fighting and assassinating, Paul's jihad kills 60 billion people, and Leto II is a totalitarian tyrant. It's all elite power-and-dominance shit; not even the protagonists are good people when you think about it. Regular people merit barely a mention, and are just fodder.

Often the people are cardboard, and it's the (fantasy) tech and the "world building" that are the focus.

It doesn't seem like it'd be good influence on someone's worldview, especially when not balanced sufficiently by other influences.


> They do not see their opponents as people.

You hit the nail on the head.

They go out of their way to call you an "AI bot" if you say something that contradicts their delusional world view.


> My company tried this, then quickly stopped: $$$

How much were devs spending to become a sticking point?

I'm asking because I thought it'd be extremely expensive. When it rolled out at the company I work for, we got dashboards tracking expenses averaged per dev in each org layer; the most expensive usage is about US$350/month/dev, and the average hovers around US$30-50.

It's much cheaper than I expected.


Nobody among people remotely worth listening to, anyway. There are always people deeply wrong about things, but "more than 70 years away" is at this point a pretty insane position, unless you have a great reason, like expecting Taiwan to get bombed tomorrow and slow down progress.


Probabilities have increased, but it's still not a certainty. It may turn out that stumbling across LLMs as a mimicry of human intelligence was a fluke and the confluence of remaining discoveries and advancements required to produce real AGI won't fall into place for many, many years to come, especially if some major event (catastrophic world war, systematic environmental collapse, etc) occurs and brings the engine of technological progress to a crawl for 3-5 decades.


"100% of AI researchers think we will have AGI this century" isnt the same as "100% of AI researchers think theres a 100% chance that we will have AGI this century"


I think the only people that don't think we're going to see AGI within the next 70 years are people that believe consciousness involves "magic". That is, some sort of mystical or quantum component that is, by definition, out of our reach.

The rest of us believe that the human brain is pretty much just a meat computer that differs from lower life forms mostly quantitatively. If that's the case, then there really isn't much reason to believe we can't do exactly what nature did and just keep scaling shit up until it's smart.


I don't think there's "magic" exactly, but I do believe that there's a high chance that the missing elements will be found in places that are non-intuitive and beyond the scope of current research focus.

The reason is because this has generally been how major discoveries have worked. Science and technology as a whole advances more rapidly when both R&D funding is higher across the board and funding profiles are less spiky and more even. Diminishing returns accumulate pretty quickly with intense focus.


Sufficiently advanced science is no different than magic. Religion could be directionally correct, if off on the specifics.

I think there’s a good bit of hubris in assuming we even have the capacity to understand everything. Not to say we can’t achieve AGI, but we’re listening to a salesman tell us what the future holds.


I'm not sure why you would characterize the possibility that consciousness relies on quantum mechanics to be "magic". Quantum mechanics are very real.


While I fully agree with your sentiment, it's striking Dario said "this century". Assuming he lives to 80, he likely won't even be alive for about half of his prediction window. It's such a remarkably meaningless comment.


He was hawking the doubling of the human lifespan to some boomers a few months ago. The current AI hype is just religion in new clothes, mainly for people who see themselves as too smart to believe in God and heaven, so they believe in the AI and project everything onto it.


> We pay humans upwards of $50 trillion in wages because they’re useful, even though in principle it would be much easier to integrate AIs into the economy than it is to hire humans


Can someone explain to me what AGI means? What is the concrete technical definition? How do we know it is achieved?


Microsoft and OpenAI had to define it in their agreement, and settled on “AI systems that can generate at least $100 billion in profits”. Which tells you where those folks are coming from.


Dario defines it as:

'By powerful AI [he dislikes the baggage of AGI, but means the same], I have in mind an AI model—likely similar to today’s LLMs in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently—with the following properties:

In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields – biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.

In addition to just being a “smart thing you talk to”, it has all the “interfaces” available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world.

It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary.

It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use.

The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10x-100x human speed. It may however be limited by the response time of the physical world or of software it interacts with.

Each of these million copies can act independently on unrelated tasks, or if needed can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.

We could summarize this as a “country of geniuses in a datacenter”.'

https://darioamodei.com/essay/machines-of-loving-grace


A human working virtually has far fewer interfaces than one working non-virtually; will that matter if the AGI's interfaces are limited in the same way?


What a sick person Dario is

I think we still have trouble defining the 'I' part of AGI, and the rest is predicated on that definition being objective and concrete first.


It's Nirvana or Heaven for the AI cult.

It's a constantly shifting goalpost. Really, it's just a big lie that says AI will do whatever you can imagine it would.


It's just God for them, basically; they project the solutions to their fears onto it. E.g., fear of death: in religion we have heaven, etc. In AI they believe it will multiply their lifespan with some magic.


>It's Nirvana or Heaven for the AI cult

Nah, that would be ASI, artificial super intelligence.


> 100% of today’s SWE tasks are done by the models.

Meanwhile, Claude Code is implemented using a React-like framework and has 6000 open issues, many of which are utterly trivial to fix.


He does not specify whether tasks are done correctly. Merge your change request full of AI slop and close the task in Jira as done. Voila! Velocity increased to the moon! 6000 or 7000 open issues, who cares?


I’m honestly trying to understand the state of the art and unfortunately the industry is so grifty it’s hard to tell…

Can I ask what happened with your Claude Code rollout?


A quick Google suggests in 2016 it was $1.17B, and earned $21M, or 1.79%.

(via https://www.amminvest.com/starbucks-sbux-float/ )


I mean, I’d love to have a free $21M a year, but if you’re already Starbucks then somehow it feels like pocket change compared to your actual earnings, and I’d question whether it was worth the effort.


But they also have over a billion in cash on hand. I imagine at that scale, and with the customers being private individuals, the amount is pretty stable, and Starbucks can just do whatever with it, since it's extremely unlikely that customers will demand all their money back at once.


Sorry, can they (customers) demand it back?

I mean, once Starbucks has it, the customers get it back via product (which has a margin built in), or just leave it there forever (free money!)

I have a firm "no vouchers" rule because of this: the vouchers in my part of the world inexplicably "expire" if not used within a certain amount of time, cannot be redeemed for cash, and will not be honoured if the business goes belly up.


According to the laws here, they have to. That doesn't mean they won't make it difficult. And the money needs to be in a separate account and business entity (to avoid it being drawn into a bankruptcy). Not that this has ever stopped businesses from abusing it anyway. I suspect this voucher option isn't available in the Dutch app because of this, but I didn't bother to check.


That was actually a bad year, as that "free" $21 million represented a loss of about $30 million. $1.17 billion on Jan 1st 2016 is equivalent to $1.22 billion a year later due to inflation. So they would have had to generate $50 million just to break even in actual buying power terms.


Is that based on not investing that $21 million or am I misreading something about how that money was or wasn't used?


No, they're saying that inflation that year was 4.3% ($1.22B / $1.17B¹) If you're only making 1.8% in investing it, you're not beating inflation, and your money, though nominally growing (number is going up), is decreasing in its real value (amount of stuff the money gets is going down).

(¹I think this is too high; BLS thinks inflation over 2016 was 2.5%. But their core point still stands: interest earned was below inflation.)
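Spelling the arithmetic out as a quick sanity check (using the parent's 4.3% figure, which, per the footnote, may be too high):

```python
float_balance = 1.17e9                    # gift-card float, start of 2016
interest = 21e6                           # interest earned on it
inflation = 1.22e9 / 1.17e9 - 1           # parent's implied rate, ~4.3%

nominal_yield = interest / float_balance            # ~1.79%
end_nominal = float_balance + interest              # ~$1.191B
end_breakeven = float_balance * (1 + inflation)     # $1.22B to break even
shortfall = end_breakeven - end_nominal             # the "~$30M" real loss

print(round(nominal_yield * 100, 2), round(shortfall / 1e6))
```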


I used the CPI

