Funny that Jocelyn Bell Burnell herself argued that she shouldn't have received the Nobel prize (I think she should have):
> It has been suggested that I should have had a part in the Nobel Prize awarded to Tony Hewish for the discovery of pulsars. There are several comments that I would like to make on this: First, demarcation disputes between supervisor and student are always difficult, probably impossible to resolve. Secondly, it is the supervisor who has the final responsibility for the success or failure of the project. We hear of cases where a supervisor blames his student for a failure, but we know that it is largely the fault of the supervisor. It seems only fair to me that he should benefit from the successes, too. Thirdly, I believe it would demean Nobel Prizes if they were awarded to research students, except in very exceptional cases, and I do not believe this is one of them. Finally, I am not myself upset about it – after all, I am in good company, am I not!
There's an interview show on RTE presented by Tommy Tiernan. I don't care for him much, but I happened to be watching the episode where JJB was interviewed. The thing that makes the show interesting (modulo Tommy, the prick) is that he doesn't know who the next guest will be, and often doesn't know why they're notable even when he is told their name, live. So, he has to ask them questions. "Why are you notable?". This I love, because it's a recognition that there are many people who do, and have done, really interesting, worthwhile things without necessarily being known to more than a small segment of the population. [I feel this way about footballers and opera singers. I know a few really big names, but it's mostly a clouded mountain top to me]
So, JJB is interviewed. Approximately like this:
TT: So, who are you? Why are you interesting?
JJB: I discovered pulsars.
TT: ... what's a pulsar?
JJB: <explanation>
... drifts into talk about Nobel prizes. JJB continues, as you say, to be a class act. Then onto spirituality [not my bag].
I wish I could find the whole interview for you. It was gold. Although the subject matter of the segment I linked isn't that interesting to me, the format, and spirit (sorry) of open and honest enquiry is really good IMO. I wish we had more TV like this.
It wasn't so much pointing it out as a typo as making myself clear.
When suggesting that a word is not what the writer meant, and it was also not the word that the writer wrote, it seemed wise to clarify exactly what I was talking about.
It depends whether you're asking it to solve a maze because you just need something that can solve mazes, or whether you're trying to learn something about the model's abilities in different domains. If it can't solve a maze by inspection and has to write a program to solve it instead, that tells you something about its visual reasoning abilities, and that can help you predict how it will perform on other visual reasoning tasks that aren't easy to solve with code.
Again, think about how the models work. They generate text sequentially. Think about how you solve the maze in your mind. Do you draw a line directly to the finish? No, it would be impossible to know what the path was until you had done it. But at that point you have now backtracked several times. So, what could a model _possibly_ be able to do for this puzzle which is "fair game" as a valid solution, other than magically know an answer by pulling it out of thin air?
> So, what could a model _possibly_ be able to do for this puzzle which is "fair game" as a valid solution, other than magically know an answer by pulling it out of thin air?
Represent the maze as a sequence of movements which either continue or end up being forced to backtrack.
Basically it would represent the maze as a graph and do a depth-first search, keeping track of which nodes it has visited in its reasoning tokens.
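For concreteness, here's a minimal sketch of that kind of search written out as ordinary code. The grid format, names, and example maze are made up for illustration; the point is just that a step-by-step search with a visited set and backtracking is exactly the sort of thing reasoning tokens can spell out.

```python
# Sketch: treat each open cell as a graph node and do a depth-first search.
# Backtracking happens when a dead-end path is popped off the stack; the
# visited set is the "remembering where you've been" part.

def solve_maze(grid, start, goal):
    """grid: list of strings; '#' is a wall, anything else is an open cell."""
    rows, cols = len(grid), len(grid[0])
    visited = set()
    stack = [(start, [start])]              # (current cell, path so far)

    while stack:
        (r, c), path = stack.pop()
        if (r, c) == goal:
            return path                     # first path found, not necessarily shortest
        if (r, c) in visited:
            continue
        visited.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                stack.append(((nr, nc), path + [(nr, nc)]))
    return None                             # no route from start to goal


maze = [
    "#########",
    "#S..#...#",
    "##.##.#.#",
    "#.....#G#",
    "#########",
]
print(solve_maze(maze, start=(1, 1), goal=(3, 7)))
```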
And my question to you is “why is that substantially different from writing the correct algorithm to do it”? I'm arguing it's a myopic view of what we are going to call “intelligence”. And it ignores that human thought works the same way, using abstractions to move to the next level of reasoning.
In my opinion, being able to write the code to do the thing is effectively the same exact thing as doing the thing, in terms of judging whether it's “able to do” that thing. It's functionally equivalent for evaluating what the “state of the art” is, and honestly it's naive about what these models even are. If the model hid the tool calling in the background instead, and only showed you its answer, would we say it's more intelligent? Because that's essentially how a lot of these things work already. Because again, the actual “model” is just a text autocomplete engine and it generates from left to right.
> In my opinion, being able to write the code to do the thing is effectively the same exact thing as doing the thing
That's great, but it's demonstrably false.
I can write code that calculates the average letter frequency across any Wikipedia article. I can't do that in my head without tools because of the rule of seven[1].
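For contrast, the code version of that task is a handful of lines. This is only a sketch: the MediaWiki "extracts" endpoint, parameters, and article title are from memory and used purely for illustration.

```python
# Sketch: relative frequency of each letter in one Wikipedia article,
# fetched as plain text via the MediaWiki API (endpoint/params assumed).
from collections import Counter

import requests  # third-party; pip install requests


def letter_frequencies(title):
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "prop": "extracts",
            "explaintext": 1,
            "format": "json",
            "titles": title,
        },
        timeout=30,
    )
    pages = resp.json()["query"]["pages"]
    text = next(iter(pages.values())).get("extract", "").lower()
    letters = [ch for ch in text if ch.isalpha()]
    counts = Counter(letters)
    return {ch: n / len(letters) for ch, n in counts.most_common()}


print(letter_frequencies("Pulsar"))
```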
Tool use is absolutely an intelligence amplifier but it isn't the same thing.
> Because again, the actual “model” is just a text autocomplete engine and it generates from left to right.
This is technically true, but somewhat misleading. Humans speak "left to right" too. Specifically, LLMs do have some spatial reasoning ability (which is what you'd expect with RL training: otherwise they'd just predict the most popular token): https://snorkel.ai/blog/introducing-snorkelspatial/
> I can write code that calculates the average letter frequency across any Wikipedia article. I can't do that in my head without tools because of the rule of seven
That is precisely the point I am trying to make. It's an arbitrary goalpost to say that knowing how to write the code doesn't mean it's intelligent, and that only doing it in a "chain of thought" would be.
First, the thrust of your argument is that you already knew that it would be impossible for a model like Gemini 3 Pro to solve a maze without code, so there's nothing interesting to learn from trying it. But the rest of us did not know this.
> Again, think about how the models work. They generate text sequentially.
You have some misconceptions about how these models work. Yes, transformer LLMs generate output tokens sequentially, but it's weird you mention this because it has no relevance to anything. They see and process input tokens in parallel, and then process across layers. You can prove, mathematically, that it is possible for a transformer-based LLM to perform any maze-solving algorithm natively (given sufficient model size and the right weights). It's absolutely possible for a transformer model to solve mazes without writing code. It could have a solution before it even outputs a single token.
Beyond that, Gemini 3 Pro is a reasoning model. It writes out pages of hidden tokens before outputting any text that you see. The response you actually see could have been the final results after it backtracked 17 times in its reasoning scratchpad.
You could actually add mazes and paths through them to the training corpus, or make a model just for solving mazes. I wonder how effective it would be; I'm sure someone has tried it. I doubt it would generalize enough to give the AI new visual reasoning capabilities beyond just solving mazes.
My guess is the part of its neural network that parses the image into a higher level internal representation really is seeing the dog as having four legs, and intelligence and reasoning in the rest of the network isn't going to undo that. It's like asking people whether "the dress" is blue/black or white/gold: people will just insist on what they see, even if what they're seeing is wrong.
> Further good news: the change in the file size will result in minimal changes to load times - seconds at most. “Wait a minute,” I hear you ask - “didn’t you just tell us all that you duplicate data because the loading times on HDDs could be 10 times worse?”. I am pleased to say that our worst case projections did not come to pass. These loading time projections were based on industry data - comparing the loading times between SSD and HDD users where data duplication was and was not used. In the worst cases, a 5x difference was reported between instances that used duplication and those that did not. We were being very conservative and doubled that projection again to account for unknown unknowns.
> Now things are different. We have real measurements specific to our game instead of industry data. We now know that the true number of players actively playing HD2 on a mechanical HDD was around 11% during the last week (seems our estimates were not so bad after all). We now know that, contrary to most games, the majority of the loading time in HELLDIVERS 2 is due to level-generation rather than asset loading. This level generation happens in parallel with loading assets from the disk and so is the main determining factor of the loading time. We now know that this is true even for users with mechanical HDDs.
They measured first, accepted the minimal impact, and then changed their game.
Yes, but I think maybe people in this thread are painting it unfairly? Another way to frame it is that they used industry best practices and their intuition to develop the game, then revisited their decisions to see if they still made sense. When they didn't, they updated the game. It's normal for any product to be imperfect on initial release. It's part of actually getting to market.
To be clear, I don't think it's a huge sin. It's the kind of mistake all of us make from time to time. And it got corrected, so all's well that ends well.
FWIW, the PC install size was reasonable at launch. It just crept up slowly over time.
But this means that, before, they blindly trusted some stats without actually testing how their game performed with and without it?
Maybe they didn't test it with their game because their game didn't exist yet, because this was a decision made fairly early in the development process. In hindsight, yeah... it was the wrong call.
I'm just a little baffled by people harping on this decision and deciding that the developers must be stupid or lazy.
I mean, seriously, I do not understand. Like, what do you get out of that? Would that make you happy or satisfied somehow?
Go figure: people are downvoting me, but I never once said the developers must be stupid or lazy. This is a very common kind of mistake developers make: premature optimization without considering the actual bottlenecks, and without testing whether theoretical optimizations actually make any difference. I know I'm guilty of this!
I never called anyone lazy or stupid, I just wondered whether they blindly trusted some stats without actually testing them.
> FWIW, the PC install size was reasonable at launch. It just crept up slowly over time
Wouldn't this mean their optimization mattered even less back then?
One of those absolutely true statements that can obscure a bigger reality.
It's certainly true that a lot of optimization can and should be done after a software project is largely complete. You can see where the hotspots are, optimize the most common SQL queries, whatever. This is especially true for CRUD apps where you're not even really making fundamental architecture decisions at all, because those have already been made by your framework of choice.
Other sorts of projects (like games or "big data" processing) can be a different beast. You do have to make some of those big, architecture-level performance decisions up front.
Remember, for a game... you are trying to process player inputs, do physics, and render a complex graphical scene in 16.7 milliseconds or less. You need to make some big decisions early on; performance can't entirely just be sprinkled on at the end. Some of those decisions don't pan out.
> FWIW, the PC install size was reasonable at launch. It just crept up slowly over time
> Wouldn't this mean their optimization mattered even less back then?
I don't see a reason to think this. What are you thinking?
> One of those absolutely true statements that can obscure a bigger reality.
To be clear, I'm not misquoting Knuth, if that's what you mean. I'm arguing that in this case, specifically, this optimization was premature, as evidenced by the fact that it didn't really have an impact (they explain that other processes running in parallel dominated the load times) and that it caused trouble down the line.
> Some of those decisions don't pan out.
Indeed, some premature optimizations will and some won't. I'm not arguing otherwise! In this case, it was a bad call. It happens to all of us.
> I don't see a reason to think this. What are you thinking?
You're right, I got this backwards. While the time savings would have been minimal, the data duplication wasn't that big so the cost (for something that didn't pan out) wasn't that bad either.
Not sure why you linked that particular article, as it does not mention anywhere whether viruses are alive (though it implies they're alive with the sentence "Vaccines may consist of either live or killed viruses").
They are infectious agents, but many life forms are infectious agents.
That article (and the more general article on viruses) both pointedly avoid referring to viruses as organisms, "any living thing that functions as an individual".
Which specifically addresses edge cases including viruses, which "are not typically considered to be organisms, because they are incapable of autonomous reproduction, growth, metabolism, or homeostasis".
The terms "live" and "killed" have historical origins, but would better be read as "active" or "deactivated", and the immediately succeeding sentence clarifies this: "Live vaccines contain weakened forms of the virus, but these vaccines can be dangerous when given to people with weak immunity."
And yes, there are infectious agents which also happen to be organisms, such as bacteria, amoebas, fungi, etc. Tuberculosis (Mycobacterium tuberculosis), many stomach ulcers (Helicobacter pylori), botulism (Clostridium botulinum), and E. coli poisoning (Escherichia coli) are all infectious diseases caused by bacteria. Giardiasis is a GI infection caused by the protozoan Giardia. There are numerous fungal infections (some urinary tract infections, athlete's foot, jock itch, nail infections).
Further down the non-life infectious agent chain are the prion diseases, the transmissible spongiform encephalopathies ("mad cow" disease in cattle and Creutzfeldt–Jakob disease in humans, amongst others). The agents here are literally misfolded proteins, which lack not only metabolism but also any genetic material (DNA or RNA), yet still propagate.
You misunderstood me. I wasn't claiming viruses are or aren't alive. I was pointing out you chose a citation that doesn't contain support for your claim. There are plenty of sources that would back you up, but that link doesn't.
> That article (and the more general article on viruses) both pointedly avoid referring to viruses as organisms
As if you expect people to carefully read the whole article, notice it doesn't mention anywhere whether viruses are alive, and conclude that by not mentioning this it supports your claim. By the same logic, it pointedly avoids saying viruses aren't alive.
> Scientific opinions differ on whether viruses are a form of life or organic structures that interact with living organisms. They have been described as "organisms at the edge of life", since they resemble organisms in that they possess genes, evolve by natural selection, and reproduce by creating multiple copies of themselves through self-assembly. Although they have genes, they do not have a cellular structure, which is often seen as the basic unit of life. Viruses do not have their own metabolism and require a host cell to make new products. They therefore cannot naturally reproduce outside a host cell—although some bacteria such as rickettsia and chlamydia are considered living organisms despite the same limitation. Accepted forms of life use cell division to reproduce, whereas viruses spontaneously assemble within cells. They differ from autonomous growth of crystals as they inherit genetic mutations while being subject to natural selection. Virus self-assembly within host cells has implications for the study of the origin of life, as it lends further credence to the hypothesis that life could have started as self-assembling organic molecules. The virocell model first proposed by Patrick Forterre considers the infected cell to be the "living form" of viruses and that virus particles (virions) are analogous to spores. Although the living versus non-living debate continues, the virocell model has gained some acceptance.
You're being rigid about your preferred definition of life, but for what purpose? What is gained by categorizing this as strictly non-living?
Wikipedia on the definition of life:
> Since there is no consensus for a definition of life, most current definitions in biology are descriptive. Life is considered a characteristic of something that preserves, furthers or reinforces its existence in the given environment. This implies all or most of the following traits: [list of seven common traits of life]
They don't make more money from showing you shorts once you've paid to remove the ads.
The default reason some feature doesn't exist is simply because no one bothered to make it. Maybe they don't think there's a big demand from their users to disable shorts completely.
I would wager some VP at YouTube in charge of shorts has their performance evaluations tied to how many hours of shorts are watched. So that's one incentive. Another is customer retention. Make current paying users addicted to shorts, and maybe they'll be more likely to keep paying.
I think you're basically right, but the comment I replied to was saying they'll somehow get more of that specific user's money. While the shorts may improve retention in aggregate, this particular paying customer doesn't want them.
It's possible that particular user, despite saying they don't want the shorts, will keep paying for YouTube longer because they actually end up enjoying them. It's also possible that they genuinely don't like them and are less likely to keep paying because of them. People are different. What keeps some customers engaged can turn off others.
They use your data to target ads at you elsewhere on the internet, improve their analytics platforms and give it to oppressive regimes. It also often ends up at shady data brokers.
They make all sorts of money doing that, but they get upset when people say Google is “selling” the data.
> The default reason some feature doesn't exist is simply because no one bothered to make it. Maybe they don't think there's a big demand from their users to disable shorts completely.
My guess is they know exactly what users are doing with the app and website, and know that people use shorts more often than we think.
This is one of their prime products, and they're Google, the biggest surveillance company on the planet. Of course they know how users interact with their service.
> The main ones are that most people don't want to be mass murderers and actually doing it would be the fast ticket to Epic Retaliation.
The main thing preventing random nutcases from making nuclear weapons is they don't have access to the required materials. Restricting the instructions is unnecessary.
It would be a very different story if someone discovered a new type of WMD that anyone could make in a few days from commonly available materials, if only they knew the secret recipe.
> It would be a very different story if someone discovered a new type of WMD that anyone could make in a few days from commonly available materials, if only they knew the secret recipe.
It would need even more to be public. Suppose it was easy to make a biological weapon. You wouldn't be able to effectively censor it anyway and trying to would leave you sitting on an apocalypse bomb waiting for it to leak to someone nefarious or get independently rediscovered before anyone else is allowed to discuss it. What you need is for knowledge of how it works to be public so that everyone can join in the effort to quickly devise countermeasures before some nutcase destroys the world.
Moreover, if something is already public enough to be in the AI training data then it's already public.
Your plan is to release the secret recipe that anyone can use to make a WMD in a few days to absolutely everyone and hope someone comes up with a countermeasure before some nutcase or terrorist decides to try out the new WMD?
The odds of us inventing and deploying countermeasures to a new bomb or chemical weapon or biological agent in a few days are minuscule. You're gambling with terrible odds to uphold a principle in a hypothetical scenario where it's totally impractical. What happened to responsible disclosure, where you fix the vulnerability before disclosing it to the public?
> What happened to responsible disclosure, where you fix the vulnerability before disclosing it to the public?
The premise of censorship is that you're trying to prevent someone from telling other people something. If the only person who knows how to do it is some scientist who is now going to try to come up with a countermeasure before announcing it, there is no need for a law prohibiting them from doing something they've chosen not to do. And even then it's still not clear that this is the right thing to do, because what if their efforts alone aren't enough to come up with a countermeasure before someone bad rediscovers it? If they decide they need help, the law should prohibit them from telling anyone?
Which brings us back to AI. If the scientist now goes to the AI for help, should it refuse because it's about a biological weapon? What happens if that delays the development of a countermeasure until it's too late?
Meanwhile if this is someone else and they ask the AI about it, it's only going to be in the training data if it's already public or can be deduced from public information, and when that's the case you're already in a race against the clock and you need everyone in on finding a solution. This is why we don't try to censor vulnerabilities that are already out there.
> You're gambling with terrible odds to uphold a principle in a hypothetical scenario where it's totally impractical.
There are some principles that should always be upheld because the exceptions are so rare or ridiculous or purely hypothetical that it's better to eat them than to let exceptions exist at all. The answer has to be "yes, we're going to do it then too" or people get into the business of actually building the censorship apparatus and then everybody wants to use it for everything, when it shouldn't exist to begin with.
> The premise of censorship is that you're trying to prevent someone from telling other people something...
So you're not against individuals self-censoring for public safety, but you're against companies censoring their AIs for public safety. Are you only against AIs censoring information that's already publicly available, or are you against AIs censoring themselves when they know dangerous non-public information? Say the AI was the only thing to know the secret recipe for this WMD. Would this be like the scientist choosing not to tell everyone, or should the AI be designed to tell anyone who asks how to make a WMD?
> There are some principles that should always be upheld because the exceptions are so rare or ridiculous or purely hypothetical...
We're using hypotheticals to clarify the view you're trying to express, not because we think they will happen. And it seems you're expressing a view that prohibiting AI censorship should be an absolute rule, even in the hypothetical case where not censoring AI has a 95% chance of wiping out humanity.
This argument seems confused, because you're trying to assert that prohibiting censorship is okay because these dangerous scenarios will never happen, but also that censorship should still be prohibited if such a scenario did happen. If you truly believe the latter, the first assertion is not actually a factor, since you're against censorship even if a dangerous scenario like the one above did happen. And if you truly believe the former, you should be able to say you're against censorship in what you consider to be plausible scenarios, but would be in favor if, hypothetically, there were a great enough danger. Then the discussion would be about whether there are realistic scenarios where lack of censorship is dangerous.
> Are you only against AIs censoring information that's already publicly available, or are you against AIs censoring themselves when they know dangerous non-public information? Say the AI was the only thing to know the secret recipe for this WMD. Would this be like the scientist choosing not to tell everyone, or should the AI be designed to tell anyone who asks how to make a WMD?
This is kind of what I mean by ridiculous hypotheticals. So you have this un-counterable yet trivial to produce WMD -- something that has never existed in all recorded history -- and an AI is the only thing that has it. This is a movie plot.
Even then, are you sure the answer should be "never tell anyone"? This is a computer running code to process data. It has no means to know who you are or what your intentions are. You could be the scientist who needs the formula to devise an antidote because the thing has already been released.
"A computer can never be held accountable, therefore a computer must never make a management decision."
It's not the machine's job to choose for you. It's frequently in error and it's not supposed to be in charge.
> This argument seems confused, because you're trying to assert that prohibiting censorship is okay because these dangerous scenarios will never happen, but also that censorship should still be prohibited if such a scenario did happen.
The problem comes from stipulating that something with a negligible probability has a high probability.
Suppose I say we should make mass transit free; no fares for anyone. You bring me the hypothetical that Hitler is on his way to acquire plutonium and he doesn't have bus fare, so the only thing preventing him from getting there is the bus driver turning him away for having nothing in his pockets. Then you ask if I still think we shouldn't charge fares to anyone.
And the answer is still yes, because you still have to make the decision ahead of time when the plausibility of that is still negligible. It's theoretically possible that any given choice could result in Armageddon via the butterfly effect. If you stipulate that that's what happens then obviously that's not what anybody wants, but it's also a thing that only happens in the implausible hypothetical. And if you're in a hypothetical then you can also hypothesize your way out of it. What if it's a sting and the allies are waiting for him at the plutonium factory, and he needs to get on the bus or you're depriving them of their only chance to kill Hitler?
Unless you stipulate that the tragedy is unavoidable given the decision, which is just assuming the conclusion.
> The problem comes from stipulating that something with a negligible probability has a high probability.
We are not doing so, and I don't know how I could have been more clear that we are not saying this hypothetical will happen. Would it help if the hypothetical was that the AI knows a magic spell that blows up the Earth?
It's a simple question. Would you think AI censorship is acceptable if the information actually were dangerous? Don't tell me why the hypothetical is impossible because that's entirely missing the point. I don't know what your position is, and so I don't know what you're arguing for. I don't know if you consider freedom of information to be a terminal virtue, or if you think it's good only when the consequences are good. Telling me the hypothetical won't happen doesn't clarify anything; I already know that.
You can have the view that we only want freedom of information when it causes net good, and that it always causes net good. Or maybe you have the view that freedom of information is always virtuous and we shouldn't consider the consequences. Or maybe something else. Until you clarify your view, I don't know if/what we disagree about.
Hypotheticals like that are uninteresting because there are only two ways it can go. The first is that you can find a way out of it, and then you say, do we need the magic spell for anything? Is knowing about it useful to preventing it from being used? Then people need to know.
The second is that you're stipulating the information being available is going to destroy the world with high probability and no possible means of mitigating it. Then anything else gets drowned out by the end of the world, but only because you're stipulating the outcome.
Which you can't do in real life, not just because the real probability of the hypothetical is so low but because there isn't anyone who can be trusted not to fudge the numbers when they want to censor something. Should it be censored if there is an absolute certainty it will destroy the world? There isn't much room to move in that one. Should it be censored because somebody claims it's really bad? Nope, because it's way more likely that they're full of crap than that it's actually going to destroy the world.
Not quite a nuke (just try obtaining enough uranium ore) but there are some fairly dangerous things a determined nutcase can make without drawing suspicion.
Example determined nutcases include Aum Shinrikyo, who tried anthrax, botox, and nukes before succeeding with sarin gas (thank IG Farben!), among other things.
TBH if someone discovers how to easily make garage WMDs we're fucked either way. That shit will leak and it will go into mass production by states and individuals. Especially in countries with tight gun control, (organized) crime will get a massive overnight buff.
Likely it'll leak or be rediscovered eventually. But not every trade secret gets leaked. Most responsibly disclosed software vulnerabilities aren't exploited (to our knowledge) before a fix is released. If the discovery isn't obvious, you have decent odds of keeping it secret for a while.
My point was just that nukes are a bad example of information that needs to be restricted to prevent harm.