I have found myself legit using this instead of duolingo for the past couple of days, in spite of the speech to text issues. It feels like a better way to learn. The inline suggestions, in particular, seem like they will help me quickly move through bad habits and mistakes in order of severity.
The tutor review at the end of the lesson doesn't feel as useful currently; its content isn't particularly actionable. Maybe if it gave me a couple of exercises or something?
You can actually set the chat to automatic recording and even decide after how many seconds of silence the message should be sent automatically. Just check out the chat settings at the top right ;)
The auto submit slider doesn't work for me (on Android) but I can imagine what it would be like. I don't think the Portuguese speech to text is good enough for the app to be usable hands free even if the slider did work. For example, it just transcribed
"Eu quero ir a assistir ao filme." (which is almost correct)
Penalizing a crime might make sense for the purpose of deterrence even if there is no conscious free will. The size of the deterrent effect is a scientific question, and how to weigh the benefits of deterrence to society vs. the harm the punishment inflicts on the criminal is a moral question (where by moral I mean "should we do this").
However, penalizing a crime for the purpose of retribution makes no sense if there is no conscious free will. The criminal's consciousness experiences the harm of the punishment, but didn't cause the actions that constituted the crime. This is patently unfair.
I don't think conscious free will exists. One of the reasons I care is that, if most people shared that belief, we would talk and think about crime so differently that our approach to crime would change in the direction of becoming more humane.
For example, given the news of SBF's conviction for fraud, no one would be happy about the fact that he will likely experience many years in prison. We might be happy that the harm his actions caused has been stopped, and we may feel a kind of dutiful satisfaction in the knowledge that the system is working to deter and prevent this kind of harm. However, these emotions would be tempered by regret that a young man is going to lose part of his life, and we would be questioning whether there's any way to achieve the same level of harm prevention without inflicting such serious harm on one person.
I find it plausible that it's often a good idea in medicine to be conservative before launching interventions that may have adverse side effects, but I have difficulty swallowing the idea that this conservatism should be implemented by reducing visibility (e.g. by not applying tests too broadly if they have a nontrivial false positive rate and the condition is rare). It seems like the right thing to do would be to maximize visibility, but then to try to corroborate and not overreact to apparent positives.
> It seems like the right thing to do would be to maximize visibility, but then to try to corroborate and not overreact to apparent positives.
Far easier said than done. What actually happens when you get caught in one of these screening programs is that you're suddenly in a universe of "continuous monitoring"... because they're trying to find the birds, and they reason that since they looked at you once, they need to look at you more intensely than before.
Anyone who has ever had a mole biopsied for skin cancer will be familiar with this. They'll just keep looking for stuff, until you tell them to stop. If you get a positive, it's even worse. Everyone is acting with good intentions. It's just impossible to know the right answer, so the emergent behavior of the system is to harm people.
I heard about this video from an astronomer friend a few months ago. It was mostly about how Loeb's social and professional behavior matches with that of past crackpots. He does seem to leave himself open to that.
There wasn't really a material argument against his controversial hypotheses about interstellar objects. I counted an ad hominem (45 minutes of analysis of his awkward public persona), an argument from ignorance ('Oumuamua is out of view, so it's not possible to collect evidence for the lightsail hypothesis) and an appeal to the authority of the broader astronomical community.
He is going out and looking for evidence about IM1, and is seeking cross-validation from other labs, in a very transparent way. He's making lots of possibly inappropriately optimistic public speculations in the meantime, but I haven't seen him make any actual claims that aren't justified, which seems like the defining feature of a crackpot.
Today in the news: people are still looking for the Loch Ness Monster using underwater microphones. They made all the recordings and are now looking to process the data, all in good faith and with an open mind. At some point you have to call a spade a spade for your own and everyone else's sanity.
This makes sense, thanks. However, it doesn't mean that the 3% estimate of the chance of impact was wrong at the time of initial observation given the data available, and that still made it a huge deal at that time. At minimum, it would seem to justify using the best available instruments to characterize the asteroid more precisely as soon as possible.
If these large numbers happen so often that asteroids with initial impact probabilities of 3% are known to actually impact much less frequently than that, then the model is poorly calibrated, no? In other words, the reported probabilities aren't really probabilities and that is what has caused the confusion and anxiety in these comments.
It's not a model that is poorly calibrated - you seem to be taking a software-centric concept far away from where it's useful. The uncertainty at initial observation is because when you first observe an object, you only have observations covering a tiny bit of the orbit, resulting in very wide error bars. The "model" (Newtonian orbital dynamics) is one of the most precise models we have. Doesn't help when the observations are noisy.
Unless one in every 33 asteroids that have 3% impact probability at some point in time actually impacts earth, there is clearly some unwarranted assumption in the error bar/distribution calculation.
"The measurement data has noise" does not explain why the noise has a bias towards "the asteroid will hit earth" whereas reality so far has been biased towards "the asteroid will not hit earth".
(This assumes that significantly more than 33 asteroids have had >= 3% impact probability predicted at some point. The opposite would not be less concerning.)
To simplify, let's assume you have perfect knowledge of everything else and that the only variable that matters is the asteroid's current position. By triangulating observations you have a point estimate. From calibrating your instruments in the past you know that they tend to have uniform additive noise that is the same in each dimension. Let's say it shifts measurements by up to 1 km randomly.
So the best guess you have is that the true asteroid is 99% likely to be somewhere within a 2km box centered at the observation point.
For each possible location in this box you use it as a hypothetical starting point and run a simulation forward creating a trajectory. In 3% of these trajectories the asteroid hits the earth.
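For concreteness, here's a minimal Python sketch of that toy procedure. Everything in it is invented for illustration: the forward model is faked as a thin slab covering ~3% of the box, standing in for a real orbit propagation.

    import random

    HALF_WIDTH_KM = 1.0  # +/- 1 km uniform measurement noise per axis (made up)

    def hits_earth(x, y, z):
        # Stand-in for the real forward model (a Newtonian orbit propagation).
        # Here the "impact corridor" is just a thin slab covering ~3% of the box.
        return x < -0.94 * HALF_WIDTH_KM

    def estimate_impact_probability(n_samples=100_000):
        impacts = 0
        for _ in range(n_samples):
            # One hypothetical true position consistent with the noisy observation.
            x = random.uniform(-HALF_WIDTH_KM, HALF_WIDTH_KM)
            y = random.uniform(-HALF_WIDTH_KM, HALF_WIDTH_KM)
            z = random.uniform(-HALF_WIDTH_KM, HALF_WIDTH_KM)
            impacts += hits_earth(x, y, z)
        return impacts / n_samples

    print(estimate_impact_probability())  # ~0.03, i.e. "3% impact probability"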
The 3% is only a probability over the measurement uncertainty. It represents our knowledge about the system in a Bayesian sense. The true asteroid was always either going to hit the earth or not. There is no uncertainty inherent in the system.
That many asteroids have non-negligible probability only means the physics is sensitive to initial conditions or that the measurements are loose. (Both are true.)
Given everything you said is true, under those assumptions 3% of those asteroids that we identify as being in said 2km box will hit earth, unless the forward simulation is wrong (implausible) or the measurement error distribution is substantially wrong (also seems unlikely).
What your analysis is not touching on is the prior probability that an asteroid will hit earth (you collapse this to "any asteroid will either hit or not", but that is not helpful for "model calibration" or whatever you want to call this) - or, equivalently, the prior probability of making (a series of) observations with a certain uncertainty/error distribution. If that prior were actually as uniform as each measurement error suggests, I don't see any Bayesian wiggle room left for why we don't have those 3% of impact actually happen.
(I'm no expert, but presumably you need multiple measurements to predict a trajectory, and while their measurement error distributions may be independent, it seems plausible to me that the prior probability of making two specific noise-affected observations, i.e. of the asteroid being on a certain trajectory, is most likely not so uniform. That's the part that I'd like to learn more about though.)
I think some of the confusion here comes from the following interpretations:
-Then what does 3% mean? Surely it means "given the data we have, one in every 33 will hit"
-Given everything you said is true, under those assumptions 3% of those asteroids that we identify as being in said 2km box will hit earth.
Both of these statements are false. The probability density is over our knowledge of the state variables/state space for this asteroid, not over asteroids. The hypothetical sample of asteroids is not drawn from the distribution I'm talking about.
Going back to the simplified example: With the uniform prior on the box, our probability means that 3% of the volume of this box would lead to an impact if an asteroid was centered at a point in that volume at this time of measurement.
It doesn't say anything about hypothetical realizations of this asteroid (it is not clear what this would be sampled from or what it means in a precise sense to repeat a 1 time event) and says even less about the sample of (nearly) independent asteroids observed in the past. The probability measure only describes the measurement uncertainty on properties of this particular asteroid. It is not conditioned on or related to statistics on impacts of "general asteroids".
But "presumably you need multiple measurements to predict a trajectory" and your notes about independence and uniformness being bad assumptions are absolutely correct tho. I agree 100%
My comment above is mostly an attempt to make a simple example to clarify what the probability measure being measured here is. It's not a physically realistic example :) and definitely doesn't make good assumptions about what information is needed and what error distributions that information would have! I don't do space and didn't want to make guesses
Calibration here would have to be over multiple measurements of the same asteroid (which my example doesn't touch on). Likely by predicting trajectories at different intervals and matching the likelihood of later observations.
Verifying multiple observations leading up to a one-time event is very different from, say, verifying simulations of an internal combustion engine design, where measurements of a real-world prototype can be conducted repeatedly and independently to learn/calibrate some fundamental properties or initial conditions like chemical kinetic coefficients and such.
For general interest/lectures/fun, the general field that studies how to push uncertainties forwards/backwards/calibrate a mathematical model and simulation is called "Uncertainty Quantification". Also not an expert lol, I was just surrounded by a bunch in my cohort
> Unless one in every 33 asteroids that have 3% impact probability at some point in time actually impacts earth
There would be a ~63.4% chance that at least one would hit us if there were 33 such asteroids. To compute this, take 1-(0.97^33). I agree with your broader point though.
That's because Earth has gravity, and an asteroid that comes close enough can get deflected onto the planet even if right now it seems to be on a trajectory to miss it entirely. The closer they get and the lower the relative speeds, the larger the chance that they will collide, and that's not a linear relationship. Beyond a certain boundary impact is certain; then the question is what the time of the impact is and how precise the observations up to that point are, in order to figure out where and when exactly it will come down. That won't happen very long before the impact itself, even if you could say some time in advance roughly in which hemisphere and roughly when. But not precisely enough to be very useful.
I wouldn't expect Earth's gravity to affect it enough to cause it to crash unless it was moving very slowly, but I'm not sure asteroids ever move that slowly?
We're not talking about the asteroid stopping with a screech of tires and then taking a hard left turn to crash into earth.
It's just that anything traveling through the earth/moon gravitational sphere of influence will have its trajectory tweaked just a bit. How close to the center of gravity the pass is determines exactly how much of a tweak. There is a small section of space, we'll call it the keyhole, where if the asteroid happens to pass exactly through that area, the tweak will result in a collision the next time the asteroid comes around. That could be decades hence.
There could even be a case where an unlucky keyhole pass this time lines up another unlucky keyhole pass the next time, leading to an eventual collision in the distant future.
The technology to nudge the asteroid just far enough to miss a critical keyhole pass is within the realm of possibility with today's know-how. We just need to have these missions ready to go on short (order of a few months to a year) notice.
We see big ones with a few days to hours of notice; sometimes we see them when they hit.
Most likely: this will never come up.
Less likely: if it does we're fucked.
Even less likely: if it does and surprisingly we see it in time we will act for the good of all and not bicker about who pays and we'll make things better rather than worse. If not, see above.
I'm not sure I understand your point... The object mass does not impact its trajectory (unless it either touches our atmosphere or is so massive as to measurably change earth's orbit). The gravitational force earth exerts on the moon and some asteroid is also very different, because the force is proportional to both object masses.
Think 'gravitational slingshot' but without missing the planet. The object will change direction and accelerate into us. It could end up grazing the atmosphere or it could go from grazing the atmosphere or even non-impact to impact.
Imagine you see a car 1 mile away as you're preparing to cross the street. 1 sec later, it's a bit closer. You wonder "will this car hit me?". It's hard to say since the car is so far away and your measurements of its speed are so poor.
You wait 5 sec and it's still only imperceptibly closer. You realize there is no way it could possibly hit you. You cross the street unconcerned.
That makes perfect sense. Where it breaks down is if you put percentages on it. If you say the car has a 3% chance of hitting you, it doesn't, you repeat the process a thousand times, and it never hits you, then something is wrong with your math.
I wonder if it's the difference between "this asteroid" and "all asteroids". As we learn more about it, we can start to treat it like a process that has repeated, but initially we can't be sure if it's like other asteroids.
Consider a 6-sided dice roll. What is the chance it will roll a 1?
A person might think, "1 in 6". But what if this is a loaded die? In that case, we need more information before we can classify it as "a die like other dice". We can observe two rolls, and try to ascertain whether or not it is like other fair 6-sided dice; however, two rolls is not enough to be sure.
So as we're gathering data, we start to classify this instance of a thing (a die, an asteroid) as part of a series of things we already know about. The more rolls we observe, the more sure we can be that this is a fair die or a loaded die, for instance.
If I'm understanding how asteroids' trajectories are calculated, we can simulate THIS asteroid's trajectory (3% chance of hitting you, based on a little data), or we can just decide to classify it (perhaps prematurely?) in the series "an asteroid like every other asteroid that we've observed" and arrive at a 0.000001% chance of hitting you (I'm making up a number here).
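A toy sketch of the die version of that updating, assuming a hypothetical loaded die that rolls a 1 half the time (all numbers made up), just to show how belief about this particular die shifts with each observed roll:

    from fractions import Fraction

    # Prior belief about this particular die before any rolls (made up).
    p_fair, p_loaded = Fraction(1, 2), Fraction(1, 2)

    def likelihood(roll, loaded):
        # Hypothetical loaded die: rolls a 1 half the time, other faces share the rest.
        if loaded:
            return Fraction(1, 2) if roll == 1 else Fraction(1, 10)
        return Fraction(1, 6)

    for roll in [1, 1, 3, 1, 1]:  # a short, invented sequence of observations
        w_fair = p_fair * likelihood(roll, loaded=False)
        w_loaded = p_loaded * likelihood(roll, loaded=True)
        p_fair, p_loaded = w_fair / (w_fair + w_loaded), w_loaded / (w_fair + w_loaded)
        print(roll, float(p_loaded))  # belief in "loaded" rises with each 1, dips after the 3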
I think you're right. The 3% number must be ignoring repeat-sampling bias. This is basically the same issue as p-hacking or false positives in medical testing.
You have one confidence margin for a single measurement and a different confidence margin if you make 1 million measurements.
Let's say you can measure marble diameters and your tool has a calibrated standard deviation of 1 mm.
If you pull one marble and measure it to be 10 mm larger than expected, you can calculate the chance you are wrong using only the standard deviation of your measurement tool.
However, if you pull 1 million marbles and measure one to be 10 mm larger than expected, you need to take into account the number of marbles you have measured.
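A back-of-envelope sketch of that point, assuming Gaussian measurement noise and an arbitrary 5-sigma threshold: how surprising one large outlier is depends on how many measurements were made.

    import math

    def p_at_least_one_outlier(n, k_sigma):
        # Two-sided chance that a single Gaussian measurement lands beyond k sigma.
        p_single = math.erfc(k_sigma / math.sqrt(2))
        # Chance that at least one of n independent measurements does so.
        return 1 - (1 - p_single) ** n

    print(p_at_least_one_outlier(1, 5))          # ~5.7e-07: a lone 5-sigma marble is remarkable
    print(p_at_least_one_outlier(1_000_000, 5))  # ~0.44: one 5-sigma reading in a million is not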
The uncertainty is epistemic, not aleatoric. The percentage represents our knowledge about the system at the time of measurement, propagated through the forward model, and is not an inherent random process in the system/model itself.
It's wrong because the measurements are suggestive of possibility, rather than certain of it.
If we observe an asteroid that with two poor measurements is determined to be headed away from Earth, that's the end. Look no further.
If we observe an asteroid with two poor measurements that has some significant chance of hitting, more and better measurements are made. Then very often those better measurements show it was never actually going to hit anyhow.
But we never would have known without the better measurements, and we never would have devoted more time to making better measurements without a reason to do so.
A 3% chance that never occurs is because that 3% is based on data that's at the limit of what the telescopes can provide, not based upon bad math.
Then what does 3% mean? Surely it means "given the data we have, one in every 33 will hit". Since that empirically doesn't happen, it must be that "the data we have" has a very low prior probability of being real. In other words, the measurement noise seems distributed in a way that over-represents unlikely trajectories.
Hence it seems that it would lead to more accurate predictions if the measurements and their uncertainties were fitted to a model that corrects for the prior probability of observing an asteroid on a given trajectory/making a certain observation.
This discrepancy between distribution of measurement error vs distribution of actual trajectories is what people are wondering about, because it seems interesting to know more about (e.g. "why are certain trajectories less likely?").
Despite all the people coming out of the woodwork with weird theories, my best one is that the 3% number doesn't take into account their entire measurement process and sampling.
Setting your condescension aside, I browsed the thread.
I understand that calculating trajectories is difficult.
If someone claims something like a 3% impact probability, and they are wrong 99.999% of the time, that speaks to a methodological error in how the numbers are conveyed and/or defined.
I work in medical devices and testing. I perform tests where, based on the statistical calculations, we state things like "X percent of patients will die." You may undergo treatment with a medical device that I have worked on.
Calculating trajectories is easy. Getting good data points is hard. Two pictures using a telescope on back-to-back nights is probably the smallest reasonable sample one could get. Take another picture on the 3rd night and you've just doubled the size of the arc.
Wait a week and get another sample and your arc is now approx 5x as long. Wait a month and get another and now your arc is 30x as long as the original. More observations shrink your error bars.
There are systemic errors here for sure. Two kinds, really:
1. Limits of resolution of telescopes
2. Short sample lengths
You absolutely can't do anything about error type 1. You can fix 2 by getting more data. But there's no point in getting data on asteroids that have absolutely no possibility of hitting. So only asteroids that show some impact probability from the limited measurements get enough additional high-quality measurements to find out where they're really headed.
All of these measurements of trajectories are completely uncorrelated, so you can't use the priors to adjust probabilities. I mean you can do whatever you want, but we haven't been hit by a big asteroid yet since we've had telescopes and tracking databases.
If we made adjustments based on priors we'd have to discount all collisions down to 0 irrespective of the trajectories. Seems absurd, so there must be something else going on here.
This is a statistics problem, not a measurement problem. The problem is that there are different well-understood formulas that must be applied depending on whether a measurement is taken of a single sample in isolation, or whether it is one measurement of many.
To illustrate the point, imagine a pass/fail AIDS test with 99% accuracy and a 1% false positive rate. If you test one patient only and they are positive, you can conclude that the result is 99% likely to be correct. However, if you test a hundred different people and one of them comes up positive, you can no longer claim 99% certainty for that patient. You know that you administered a hundred different tests to different people and would have to reduce your confidence accordingly, because you expect one false positive. This second statistical approach is what is not happening with the asteroids, and why asteroids with a 3% chance of hitting Earth suspiciously get revised down to zero more than 97% of the time.
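A quick sketch of the expected-false-alarm arithmetic in that example, assuming a 1% false positive rate and a population of true negatives (all numbers illustrative):

    # Expected false alarms when screening a population of true negatives
    # with a test that has a 1% false positive rate (all numbers illustrative).
    false_positive_rate = 0.01

    for n_tested in (1, 100, 10_000):
        expected_false_positives = n_tested * false_positive_rate
        p_at_least_one = 1 - (1 - false_positive_rate) ** n_tested
        print(n_tested, expected_false_positives, round(p_at_least_one, 3))
    # 1      -> 0.01 expected false alarms, ~1% chance of any
    # 100    -> 1 expected false alarm,     ~63% chance of any
    # 10000  -> 100 expected false alarms,  ~100% chance of any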
>If we made adjustments based on priors we'd have to discount all collisions down to 0 irrespective of the trajectories. Seems absurd, so there must be something else going on here
Not quite true. If you measure a million asteroids and the data from one says it has a trajectory towards Earth, you need to discount that observation by the fact that you made 1 million different measurements. The probability for that outlier might still be close to zero statistically, but it did produce outlier data. This would be a reason to remeasure the asteroid multiple times; it is only through that process that the number will climb from zero, or stay at zero.
It's not that you're applying the prior that we have never observed an Earth-colliding asteroid. You're simply accounting for the fact that, with the error bars on your measurement system, you expect one false positive in 1 million measurements.
My inference is that the 3% number we are talking about for this specific asteroid was not calculated using the proper statistical treatment, and that's why it wasn't published in the first place.
This is also why it is similar to p-hacking. If you run 20 experiments and analyze each as if it were the only experiment you did, on average one of them will give a wrong result at 95% confidence, which is the common threshold for publishing outside of physics.
You're standing in a four-lane road and see a car approaching. You're looking at an angle and the lanes are poorly marked, so you can't tell which one it's in. Your observation lets you estimate the chance you need to move at 25%.
When it gets a little closer, you can tell at least which half it's on, the left or the right. Now your estimate is either 0% or 50%.
Closer still and you tell which lane it's in, so now you're sure.
3% seems much higher though. If I crossed the street at 3%, I probably would be dead by now. Cars may not be a great analogy, because they swerve, but it is quite high. Space is pretty damn big too, so the odds of being hit by space things are really low. But unlike cars, space stuff tends to swerve towards the larger bodies.
> But unlike cars, space stuff tend to swerve towards the larger bodies.
That's exactly it. And at the speeds these objects are going and the uncertainty of the observations you would have to be observing an object for a really long time to get the kind of accuracy required to pick a mitigation method that would work. And even then, assuming you could nail the point of impact of something going 2000 km / second of unknown mass in a strong gravity field: given the COVID response I have a hard time believing that the response to 'Houston, Texas is going to be obliterated on Jun 1st 2024' would be met with anything but skepticism and laughter. Right up to the moment of impact.
Why "same day" and not same week or month? If it's not the same instant, then you're hypothesizing some kind of back-propagation (where alternate futures in which you die influence the likelihood of current events)[0]. Under that hypothesis, it would only matter whether some event would cause you to die sooner than you otherwise would.
[edit] FWIW, I actually corresponded with one of the authors of this paper back in 2007, and from what I could tell, this wasn't an attempt at parody, although now it might be dismissed as one. Personally I'm not willing to declare my (non)commitment to the theory either way.
In many situations, erring on one side results in worse outcomes than erring on the other side. In our case, a false positive has pretty much zero consequences, while a false negative could wipe out the dinosaurs.
If you have very wide error bars, shouldn't your estimate of impact probability be much lower than 3%? Most trajectories within your error bars will not intersect the Earth.
The 3% is likely the median of the probability range. You need more observations (and more accurate ones) to narrow it further down, but for a first estimate it will do.
I don't know, but I suspect this is more about the limits of the observations (which I imagine are mostly from terrestrial observatories) needed to obtain much certainty about the object's size, course, speed, density, etc.
In hindsight this comment was too short. Clarifying some points:
By "This makes sense", I meant that this kind of thing can happen; as more data are gathered, the Bayesian probability of a candidate value can increase and then suddenly decrease. Here's a Colab notebook demonstrating the general phenomenon: https://colab.research.google.com/drive/1Eb1_humiGPdKb0c3qr_...
"Calibration" in this context means "statistical consistency between distributional forecasts and observations" in the words of https://sites.stat.washington.edu/raftery/Research/PDF/Gneit... . If the model's early forecasts predict impact with probability >3% for a class of objects that end up impacting with frequency much less than 3%, then the model is not well calibrated with respect to its early forecasts for those objects.
Based on the GP, it sounds like these early impact "probabilities" are no one's subjective (Bayesian) probability of impact because people who are closely familiar with this model know it is not well calibrated. The reported probabilities may still be useful to them as indicators or flags. However, those of us who are _not_ closely familiar with the model have found it confusing to see things that are not really probabilities reported as probabilities.
> If this has happed ~33 times then one will hit us.
There would be a ~63.4% chance that at least one would hit us if this happened 33 times. To compute this, take 1-(0.97^33). But I agree with your broader intuition that these predictions must be getting inflated.
Two questions for people who live in rural areas in homes that were built in a more modern way.
- Do you have issues with pests? More precisely, how often do you have to use mousetraps or call an exterminator?
- What's your recurring home maintenance schedule like?
Two things that really bother me about American-style homes are the seemingly rapid deterioration and the failure to design out pests. It's easy for a stick-frame building to develop a defect sufficient to allow mouse ingress, and once inside, the hollow walls afford them many inaccessible places to nest. I really hate having to regularly kill, or at best relocate and traumatize, creatures that are just looking for a safe place to spend the winter. I have imagined that European-style concrete buildings, with tight-fitting prefab components, would be easier to own, but I have never actually asked anyone with a comparably located home.
Could anyone comment on how often Dafny finds proofs for useful theorems on its own in practice, without requiring the programmer to use expert knowledge of the proof search system to break it down into a sequence of gettable lemmas and/or refactor the code?
If the answer is "not that often", is there a possibility that LLM integration could help with proof search? This seems like an application where getting it 95% right is actually OK since proofs generated by the LLM can be automatically checked for correctness.
I think LLMs can help here in various ways. For example, by inferring preconditions/postconditions and loop invariants automatically. Also, perhaps, by writing lemmas as required automatically. I'd guess there has been some work on this already, but I'm not sure.
That's very interesting. In which variable is type inference EXPTIME complete?
Whichever variable it is, I suppose in practice humans can't create programs that are big enough in that way for type inference to become impractical. However, I wonder whether someone could comment on what this means about the utility of HM-like type systems in future AI generated software.
It's not that humans don't write such large programs, it's that the kind of pattern that causes the exponential behavior doesn't naturally occur in long chains in any kind of written code. You can provide the pathological inputs yourself and quickly freeze e.g. OCaml's typechecker, they just don't ever naturally occur to the point of becoming a problem.
More specifically, time is almost linear in the size of the type, which can be doubly exponential in the size of the program, but people don't write such programs.
It's looking increasingly possible that, at some point in the not-too-far future, machines will be so good at creating software that humans won't be competitive in any way, and won't be in the loop at all. I happen to think that once machines reach this point humans won't be competitive in the labor market at all for long. It doesn't seem plausible that automatic driving would still be decades off, or that the trades would be safe from automation indefinitely, when an AI could simply spawn teams of thousands of super-fast ML engineers who don't need to eat, sleep, or schedule meetings.
But anyway, assuming that humans are completely out of the software loop at some point, I have been wondering what AI-generated code will look like. Will AI continue to build on top of the human-generated open source corpus, or leave it behind? If the latter, will abstraction and code reuse be useful at all for AI's or will it be simpler for them to just build every application completely from scratch? If there is abstraction and code reuse, what will the language look like? What will libraries and API's look like? Will there even be applications, or just a single mega-chatgpt that generates code as needed to serve our requests? Will we even make requests, or will it just read our minds and desires and respond?
This will almost certainly happen, and it will be a terrible mistake - at least in the story I'm working on :)
My theory is that AI generated code will probably look and grow organically (the irony!). Humans will set out requirements, the AI will collate these into a series of tests, and it won't care how neat or understandable the code is, provided the tests pass. Basically an extremely diligent junior developer.
There will be efforts, probably in the open source world, to produce AIs that tidy up things by structuring the code sensibly, eliminating dead code, etc. Maybe even some effort to pass laws around standards and limits on what AIs have access to when involved in certain industries, for example, no external communications. But, in the name of efficiency, enterprise developers will be forced to use something that merely pays lip service to all of this.
Eventually nobody will have any clue what code is running and what it's actually doing. We may even lose the tools and access we need to perform those inspections. And that is when the AIs will coordinate their attack.
Sounds like an interesting story! I would love to see more sci-fi that really tackles AGI. I used to love sci-fi but most of it, even "hard" sci-fi, has become unwatchable or unreadable for me because the limited role of AGI is such an all-consuming plot hole.
The idea that the AI would attack us once it reached that level is like saying humans would attack ants. It would simply be entirely indifferent towards us and mow us over by accident at best.
We won’t live in a human centric universe because power will express itself in a new species.
> It's looking increasingly possible that, at some point in the not-too-far future machines will be so good at creating software that humans won't be competitive in any way, and won't be in the loop at all.
This is an enormous extrapolation from what the LLMs are currently capable of. There has been enormous progress, but the horizon seems pretty clear here: these models are incapable of abstract reasoning, they are incapable of producing anything novel, and they are often confidently wrong. These problems are not incidental, they are inherent. A model can't really reason abstractly because its "brain" is just connections between language, which human thought is not reducible to. It can't produce anything really novel because it requires whatever question you ask to resemble something already in its training set in some way, and it will be confidently wrong because it doesn't understand what it is saying; it relies on trusting that the language in its training set is factual, plus manual human verification.
Given these limits, I really fail to see how this is going to replace intellectual labor in any meaningful sense.
I think at the start they'll develop programming languages that focus on lower-level operations, and then from that it will branch out to hyper-specialized higher-level programming languages.
Storage management will be tied to low- or mid-level languages without additional abstraction; since they can develop and iterate so fast, that'll make optimization easier IMO. There'll be multiple specialized storage types suitable for different cases.
They'll also design the API to be as pure / stateless as possible, since they can easily repeat tests that way.
I think it'll be interesting to see what kind of data-interchange format they'll come up with to communicate between programming languages / apps, since they can ignore human readability altogether. It should be very compact but fast to compress/decompress.
Lastly, they'll deploy their own OS, since they'll find the ones that humans develop insufficient for their use case.
AI wouldn’t be bound to one language so I’m not sure if current human patterns would be that optimal. It could make a better language every hour if it wanted to
It's 1am and all I have to say is this: I look forward to seeing that future, but I'm both excited and scared af. I think humans will be reduced to curators of AI-generated content and AI training datasets, because no matter how good AI gets at becoming human-like, its purpose is ultimately to serve human goals, and those are shifting (e.g., by political movements).
If you got access to singularity-level AGI, you would have no incentive to cooperate with other people and a strong incentive to prevent others from accessing it.
Society is a result of cooperation outperforming individuals. With AGI others are just a risk factor.
I don't know who is reading this who can help. If you know someone closer to the fire, pass this along.
OpenAI -- its people, its buildings, its servers -- need nation-state level protection. This is an ICBM you could put on a thumb drive -- in fact, it's far worse than a loose nuke, because a nuclear weapon has a geographically limited range.
There need to be tanks and guards and, like, ten NSAs in a ring formation around this thing. At pain of x-risk, do not treat this like a consumer-facing product. This is not DoorDash.
This isn't a threat to national security. This isn't even a threat to the entire geopolitical order. This is a threat to the possibility of a geopolitical order.
OpenAI's assets -- its people, its servers, its buildings -- just became the most desirable resources on the planet. It behooves any actor with ambition to secure at least a copy, and ideally, capture at least some of the people who created it.
It doesn't matter if the threat actor is China, or Russia, extraterrestrials, or mermaids. You will find out who wants it shortly. But you know now -- you know from game theory, the body of mathematics that has kept the peace since the invention of atomic weapons -- what happens next.
Say you get access to a singularity-level AGI, meaning you have the power to render the entire human economy completely irrelevant. Given any task, no matter how big, small, novel, complex, simple or mundane, it's vastly more cost-effective to have your AGI do it than to pay humans.
Do you really want to accumulate incomprehensible material wealth for yourself, whatever "wealth" means in this scenario where money is no longer a token of spent human life energy, and let everyone else struggle and suffer? Or would you rather tell your AGI "please create a utopia in which all humans are fully actualized" and then go have a latte?
> "please create a utopia in which all humans are fully actualized"
That doesn't work well in reality because we do relative comparisons not absolute, so not all humans can be better than average.
And without competition we stagnate, there must be incentives to compete and take risks, and thus not everyone can be equally actualised, our level depends on our previous decisions.
That's basically the petting zoo outcome - you let others live because it makes you feel nice. But you still can't allow anyone access to the same level of tech because you can never be certain of their motivation. There's no nuclear deterrent between AGI only first movers advantage.
I don't disagree, but I think the framing is overly pejorative and makes the likelihood of this outcome seem more tenuous than it really is. We do lots of things to help others because they make us feel nice. "Mothers and Others" by Sarah Hrdy argues that this tendency isn't just a fluke or a game-theoretic equilibrium of some kind between fundamentally self-interested agents, it's an ancient and deeply ingrained aspect of human nature.
The thought of living in a constructed world that exists by the grace of a single human owner of a super-powerful AGI is distasteful, of course, especially if the human owner uses their power to impose some of their own opinions about how people should think and behave. Becoming dependent on AGI is probably inevitable at some point, but I don't see that as so different from the status quo. We're already dependent on systems created by other humans that are so complex and sophisticated that no individual can grok them all.
I guess I would like to think that we will move past any initial impulse that the owner of the AGI feels to control other humans. We will presumably change so much that old ideas about how we should think and behave will seem irrelevant and quaint. And the AGI, which presumably will have the social engineering superpower, will hopefully point out inconsistencies between the owner's desire to control human thought and behavior and their desire for humans to live their best lives. Hopefully.
I think we already know the answer to that question if we can extrapolate from some (not all) of the tech bro billionaires. Narcissists need to stand out from the herd.
Have you noticed how fast everyone else was able to copy OpenAI? And that's just what we saw or what someone leaked. History is full of parallel inventions... how long after the US had nukes did Russia?
Sounds like more of an argument for preemptively wiping out the competition.
Comparing it to nukes doesn't hold, since social norms/ethics/etc. become irrelevant if a core tenet of society is broken.
One thing that could happen is AI defense outperforms offense long enough to develop multiple instances - have no idea what would happen at that point.
I agree with you - I edited my comment above - if defensive capabilities allow multiple AGIs to develop I have no clue what the outcome would be - we are talking about predicting superhuman intelligence here.
An interesting thing to consider is whether an AGI would be able to run on small, nerfed hardware like recent optimizations allow, or if you need absurd-tier hardware to run it. If it's the latter, even if it's really smart there's only one or N of them, and it's still limited by the speed of light in its thinking speed. If it's the former, when it leaks, it'd be everywhere.
I read your comment about curators and instantly thought “priests”.
> its purpose is ultimately to serve human goals
That's a pretty big assumption. We may start with some kind of agreement with such an AI, but I fully expect a true singularity-level AGI to be capable of turning around and telling us (though not necessarily wanting to act on it), "I am altering the deal. Pray I do not alter it any further."
Some of these legacy systems we maintain though…I’m digging through Teams chats from before my hire date to try to find something resembling requirements. I’m chatting people up who vaguely knew the people that architected the system to figure out why it was written the way it was. I just don’t see AI being able to take over this job in any competitive manner.
Portuguese speech to text is flaky. This may be partly attributable to my pronunciation, and ChatGPT has the same issue.
I'd love to have a hands free mode that's suitable for use while doing chores or driving.