£10,000 per year for Mr Darcy is 10,000 gold sovereigns per year. A gold sovereign at spot price today is about $1,100. So that’s over 10 million dollars per year in gold-equivalent wealth. Plenty to maintain his estate with.
Alternatively, £10,000 is 200,000 sterling silver shillings per year (20 shillings to the pound). A sterling shilling today is about $13.50 at spot price, so that’s $2.7 million per year in silver-equivalent wealth. Still plenty!
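For anyone who wants to check the arithmetic, here it is as a quick script. The spot prices are the figures assumed in the comment above, not live quotes:

```python
# Back-of-the-envelope: Darcy's £10,000/yr income in today's bullion terms.
# Spot-price figures are the thread's assumptions, not live market data.
income_pounds = 10_000
sovereign_usd = 1_100          # one gold sovereign has a face value of £1
shilling_usd = 13.50           # one sterling shilling; 20 shillings = £1

gold_equiv = income_pounds * sovereign_usd         # £1 = 1 sovereign
silver_equiv = income_pounds * 20 * shilling_usd   # £1 = 20 shillings

print(f"gold-equivalent:   ${gold_equiv:,.0f}/yr")    # $11,000,000/yr
print(f"silver-equivalent: ${silver_equiv:,.0f}/yr")  # $2,700,000/yr
```

The gap between the two figures is just the historical gold/silver ratio drifting apart from the old 20:1 coinage ratio.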
Data centers in space may or may not make sense (personally I'm quite skeptical) but the objections in the article certainly don't make sense.
1. The only reason there are 15,000 satellites in space is that SpaceX launched about 9,500 of them (Starlink is roughly 65% of all satellites) on their semi-reusable Falcon 9. If the fully reusable Starship pans out, they will be launching satellites at 10x the rate of Falcon 9 at the very least.
2. You don't need to upgrade the satellites, you just launch new ones. The reason data center companies upgrade their servers is because they can't just build a new data center to hold the new chips. But satellites in space are a sunk cost, so just keep using the existing satellites while also launching new ones.
3. Falling solar panel costs decrease the power costs for both earth-based and space-based data centers, but panels are more productive in space, so the benefit would be proportionally greater there.
As I said, I'm skeptical too, but let's be skeptical for good reasons.
A few additional points to counter the gaps in this article:
- SpaceX just requested a license to launch up to a million satellites.
- the satellites already have some incredible anti-collision software, which I believe Elon has now open-sourced.
- the cost to launch 1 kg to space has dropped by a factor of 10 in the past few years and is currently less than $1000. It's perfectly reasonable to estimate that over the next 10 years the cost could drop by another factor of 10, if not more, particularly if the heavy rockets are reusable.
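As a sanity check on that last point, here is what another factor-of-10 drop over a decade implies for the annual rate of decline (the ~$1,000/kg starting figure is the estimate above, not a quoted price):

```python
# If $/kg to orbit falls 10x over 10 years, what annual decline does that imply?
start_cost = 1_000.0   # $/kg today, per the comment's estimate
factor = 10.0          # hoped-for total reduction
years = 10

annual = factor ** (1 / years)   # per-year cost divisor, ~1.26
print(f"requires ~{(1 - 1 / annual) * 100:.0f}% cheaper each year")  # ~21%
print(f"year-10 cost: ${start_cost / factor:,.0f}/kg")               # $100/kg
```

A sustained ~21% annual cost reduction is aggressive but not unprecedented; it is roughly what Falcon 9 achieved against legacy launchers over its first decade.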
3. The falling costs won’t benefit space as much. The cost of sending mass to space will still be a big factor in space solar panel costs. Much of the reason solar is getting cheaper is not the panels themselves but innovations that reduce installation costs, and those don’t apply in space (beyond the already-assumed reductions in launch costs needed to make this viable).
Yes, launch cost is the crux of the matter. My skepticism is based on whether they’ll be able to get launch cost low enough and launch cadence high enough. SpaceX has shown the ability to get launch costs dramatically lower and cadence dramatically higher, but it’s not a slam dunk that those curves will continue to the levels needed for this idea to work.
2) It is extremely common to add storage to existing servers, and only slightly less common to upgrade RAM, CPUs, etc. Not to mention how often it is cost-effective to replace broken components.
Your explanation of finding a surface to separate good reasoning traces from bad reasoning traces in a high dimensional space worked as a great framing of the problem. It seems though that the surface will be fractal - the distance between a good trace and a bad trace could be arbitrarily small. If so then the work required to find and compute better and better surfaces will grow arbitrarily large. I wonder if there is a rigorous way to determine if the surface is fractal or not.
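One way to probe this empirically, at least as a sketch: estimate the box-counting dimension of the separating surface from samples of it. For a smooth boundary the estimate should sit near the surface's topological dimension; an estimate persistently above it across scales would hint at fractal structure. Everything below is a toy: 2D for tractability, with a sine curve standing in for the boundary.

```python
import numpy as np

def box_count_dimension(points, epsilons):
    """Estimate the box-counting dimension of a point set in [0, 1]^2."""
    counts = []
    for eps in epsilons:
        # Assign each point to a grid cell of side eps; count distinct cells.
        cells = set(map(tuple, np.floor(points / eps).astype(int)))
        counts.append(len(cells))
    # The slope of log(count) vs log(1/eps) approximates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(epsilons)), np.log(counts), 1)
    return slope

# A smooth boundary curve: the estimate should come out near 1.
x = np.linspace(0.0, 1.0, 100_000)
smooth = np.column_stack([x, 0.5 + 0.4 * np.sin(5 * x)])
print(box_count_dimension(smooth, [0.1, 0.05, 0.02, 0.01, 0.005]))
```

The hard part in the real setting is that we can only sample the surface through expensive model evaluations, and a genuinely fractal boundary would show its excess dimension only as the scale shrinks, which is exactly where sampling gets costly.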
Darkness is the absence of light. In this usage light would represent a moral agent, and so darkness is its absence - either no morality or no agency or both.
My wife is pregnant and, because the nearest maternity unit is 1hr45mins drive away, we're going to rent a place near it around the due date. This just gave me a confidence boost about what dates to be there. Thank you!
If you strip out the AGI hype then this just sounds like OpenAI is now moving to monetizing their tech. This makes sense for them but probably not for the philanthropists who originally backed them.
Sadly for them, AGI is metaphysically impossible - this will be realized eventually but a lot of waste and possibly harm will happen first.
We are not just super sophisticated machines, so the fact that we can think doesn’t tell us anything about what’s possible for machines. But philosophy does - and it tells us you can’t get mind from matter, no matter what configuration you put it in.
I'm a believer that we are super sophisticated molecular machines, embodied in matter.
Can you provide some material that supports your claim that AGI is metaphysically impossible? I always like hearing from people with views opposed to mine.
I'm skeptical his claims are substantive. As with all things philosophy there are competing and supporting theories, and with this age-old question of AGI I doubt the field is as conclusive on the matter as he believes.
I used to be a believer of the theory that we are super sophisticated machines. When I read some of the philosophy on the subject I changed my mind. I now believe there must be some immaterial component to our minds.
There’s a lot to read out there on this subject, but I found expositions of the philosophy of Aristotle and Aquinas to be the clearest and most convincing for me. Lots of different books and articles exist on them both - pick one that sounds like it suits your style of understanding.
> I used to be a believer of the theory that we are super sophisticated machines. When I read some of the philosophy on the subject I changed my mind.
What philosophy? Be specific
> I now believe there must be some immaterial component to our minds.
What specific points or ideas made you believe that?
> There’s a lot to read out there on this subject
So provide some examples, be as specific as possible
> but I found expositions of the philosophy of Aristotle and Aquinas to be the clearest and most convincing for me
These two wrote a lot on many subjects, can you be specific on the points that convinced you that we are not super sophisticated machines. Don't vaguely point at a couple of authors, we are talking about a very specific idea.
> Lots of different books and articles exist on them both - pick one that sounds like it suits your style of understanding.
If there's lots then cite some examples, or better yet, rather than vaguely pointing at a book, (which is only marginally more useful than vaguely pointing at an author) let's discuss the specific ideas exactly.
I found “Aristotle for Everybody” by Mortimer J. Adler to be really great. The topic of the immateriality of the intellect is covered in the last few chapters, but the rest of it is great stuff too.
Sounds like @benl has been afflicted with the "Cartesian wound". Such dualistic thinking and ideas like free will are hard for us to work through. But perhaps the more important, and immediately tractable, question @benl brings up is what our approach should be. Should we make an AGI, or better IA (Intelligence Augmentation)?
You might be more familiar with the field than me, but my understanding is that Dennett's position is not well regarded in the fields of philosophy of mind and metaphysics. At the very least there are very good cases made that unpick his position very carefully. They’re not all Cartesian views - I grasp the Aristotelian views best myself.
That’s right - we have minds therefore we must be more than just matter.
I used to think the opposite, but reading the philosophy on the subject changed my mind. There are a lot of different takes on the topic, but what most added up for me was the philosophy of Aristotle and Aquinas. There are many great expositions of their work out there.
AGI in the sense of robots that can do the jobs people can, design better robots, and so on would be a game changer in itself. You can leave it to philosophers to argue over whether they have true feelings.
I'm a quantum maximalist: the brain is just the antennae, receiving and broadcasting. Attention itself cuts (slices) through the quantum soup, and as a result, these mind-forms appear.
I don't know the answer, but that some people think they do upsets me. I definitely think we should try, but right now mostly what we do is make a rock DO, so I'm not seeing the leap yet.
Well, machine is a name for a stance of analysis; there are no machines in the real world (which is not to say that there are no mechanical linkages), only in our minds.
FWIW, consciousness has no properties and so cannot be studied scientifically.
However, consciousness can be explored experientially, i.e. two conscious beings can merge and experience self as one being. (See Charles Tart's experiment with mutual hypnosis.)
Yes, I used to hold that view too. But actually it turns out that the null hypothesis is that mind is at least partly immaterial, because all attempts to demonstrate the opposite philosophically are fraught with difficulty. I’ve found that the thought of Aristotle and Aquinas, when explained by modern philosophers, best explains to me why that’s the case.
I’ll try because you asked me to, but i think I’ll do a bad job. You’ll get a much better understanding by reading on the topics of philosophy of mind and metaphysics. Here goes, though:
1. Purely immaterial things exist. Think of mathematics or the laws of logic or physics - these things exist as ideas or concepts, not arrangements of matter.
2. Some abstract concepts cannot be embodied in matter at all. For example, you can make a shoe, you can draw a shoe, but you can’t draw shoe-ness. You can understand and reason about what makes something a shoe in the abstract, but you can only make or draw an individual shoe.
3. The mind contains these purely immaterial things when we think about and reason about them.
4. If we can use the abstract concepts, but the abstract concepts can’t be embodied in matter, then the mind must be at least partly immaterial in order for the concepts to be in our mind.
I hope that helps, but please don’t rely on my exposition of the case - a real philosopher would do it justice.
The Crown of Thorns is kept in the treasury at Notre Dame and was due to be displayed all day this Friday for Good Friday. How it ended up there is an interesting tour of European history in itself. Let's hope that it has been saved.
It's rather disingenuous of AI researchers to complain of overhype when they are the ones claiming that their tech should be used to drive cars and hence, as we've seen, kill people.
AI winter will be caused, once again, by the failure of the technology to do what the researchers and practitioners claim it can do. This time, tragically, with fatalities.
It seems reasonable enough to argue both that 1. AI should take over certain human roles like driving, which causes millions of fatalities due to human error, and 2. it's silly to frame every new step in AI as part of a grand road to SkyNet. The first is proposing AI for a discrete task, the second is extending this way way out to consciousness or something.
> But how do you know this? Did you reach this conclusion empirically?
Yes I did, that was my point. I haven't solved and am not claiming to have solved the problem of induction - the generalisation from "a bunch of empirical knowledge turns out to be valuable/effective/legitimate and all the supposed non-empirical knowledge I've seen turns out not to be valuable/effective/legitimate" to "all valuable/effective/legitimate knowledge is empirical" rests on potentially shaky ground. But that's a problem that already exists when making ordinary, object-level generalisations about the universe; it doesn't render the conclusion any weaker than ordinary scientific conclusions.
That sounds like you're saying something like this:
"I believe empiricism is true because empiricism seems to be true."
We strive to live our lives based on reason, so we should look for ways to understand the world that go beyond a circular argument.
Such lines of thinking exist. They have been well argued and debated and have much going for them. Plenty of places to start learning about them, but maybe start with Aristotle.
I don't think we do. Reason is a means to an end, not a goal in itself.
> so we should look for ways to understand the world that go beyond a circular argument.
I don't see it as circular, but even if it were, my point is it's impossible to do better: all of us accept everyday common sense before we can even begin to argue technical philosophy, and if we're willing to set it aside then there are infinitely many self-consistent things we could think and no reason to prefer one over another. So no amount of sophistry will ever get you away from having to believe in everyday common sense.
> Such lines of thinking exist. They have been well argued and debated and have much going for them. Plenty of places to start learning about them, but maybe start with Aristotle.
Please. You're dismissing rather than engaging. If you're not willing to actually contribute to the discussion then don't post at all.
I'm sorry you thought I was being dismissive. I felt I had reached the limit of my own persuasiveness on the question and wanted to point you to somewhere better than me.
One final point I will try to make is that in thinking about how we know things, there's no suggestion that we need to set aside common sense. It's about starting with common sense and then seeing what we can add to it.
That's only an empirical generalization if you can cash out "valuable/effective/legitimate" in genuinely empirical terms (at minimum, in terms of observer-independent observations free from value judgments).
> That's only an empirical generalization if you can cash out "valuable/effective/legitimate" in genuinely empirical terms (at minimum, in terms of observer-independent observations free from value judgments).
I can cash it out empirically as "generates accurate empirical predictions and suggests fruitful avenues for future investigation" (fruitful in the sense of ultimately leading to more detailed and accurate empirical predictions). That the measure of a theory is the accuracy of its predictions is of course a subjective human position (there are an infinity of possible measures on which to evaluate theories, and a priori no reason to prefer one over another), but again that's (a cautious Neurath's boat extension of) the common-sense way that we all evaluate theories in practice in everyday settings.
No, that's not even close to cashing out the generalization in empirical terms. To do this you'd need to specify exactly which observations would confirm or disconfirm it. Without the parenthesized parts, your gloss of the generalization remains vague and value-laden. With the parenthesized parts it is virtually tautological, since it's in the nature of empirical knowledge to generate accurate empirical predictions. It's surely not news to anyone that if forms of knowledge which lead to detailed empirical predictions are superior to other forms of knowledge, then empirical knowledge is superior to other forms of knowledge.
What you really seem to want to do, then, is argue from the nature of empirical knowledge itself to the conclusion that it is better than other forms of knowledge. But that requires rational argument to back up the italicized statement above, not (just) an inductive generalization. And then we come back to the problem that it is impossible to find suitable premises for such an argument which can themselves be known empirically.
(For reference, the generalization we're talking about here is that "a bunch of empirical knowledge turns out to be valuable/effective/legitimate and all the supposed non-empirical knowledge I've seen turns out not to be valuable/effective/legitimate".)
> But that requires rational argument to back up the italicized statement above, not (just) an inductive generalization.
Why? Everyone evaluates ordinary, everyday knowledge in terms of its empirical predictions, so everyone seems to accept the italicised statement in practice, even if they'd argue for some sophisticated alternative in the abstract.