Big corps' AI products have the potential to shape individuals from cradle to grave - especially as many manage or assist in schooling and are ubiquitous on phones.
So, imagine the case where an early assessment is made of a child - that they are this-or-that type of child, and that they therefore respond more strongly to this-or-that information. Well, then the AI can far more easily steer the child in whatever direction its operators want. Over a lifetime. Chapters and long story lines, themes, could all play a role in sensitising and predisposing individuals toward certain directions.
Yeah, this could be used to help people. But how does one feed back into the type of "help"/guidance one wants?
What do you think of the idea that people generally don't really like other people - that they generally disappoint and cause suffering? (We are all imperfect, imperfectly getting along together, daily initiating and supporting acts of aggression against others.) And that, if the FakePeople™ experience were good enough, most people would probably opt out of engaging with others, similar to how most pilot experience is gained on simulators?
Ultimately, that's the old Star Trek 'the holodeck would - in a realistic scenario - be the last invention of a civilization' argument.
I think there will always be several strata of the population who will not be satisfied with FakePeople™, either because they are unable to interact with the system effectively due to cognitive or educational deficiencies, or because they hold the belief that RealPeople™ somehow have a hidden, non-measurable capacity (let's call it, for lack of a better term, a 'soul') that cannot be replicated or simulated - which makes it, ultimately, a theological question.
There is probably a tipping point at which the number of RealPeople™ enthusiasts is so low that reasonable relationship matching is no longer possible.
But I don't really think the problem is 'RealPeople™ are generally horrible'. I believe the problem is the availability and cost of relationships - in energy, time, money, and effort:
Most pilot experience is gained on simulators because RealFlight is expensive, and the vast majority of pilots don't have exclusive access to an aircraft (instead sharing one), which also limits potential flight hours: when the weather is good, everyone wants to fly, and no-one wants the plane up in bad conditions, because it's dangerous to the plane and - less important to the ownership group - the pilot.
Similarly: relationship-building takes planning effort, carries significant opportunity cost, consumes monetary resources, and has a low probability of the desired outcome (whatever that may be - it's just as true for the 'long-term, potentially married' relationship as for the one-night stand). That's incompatible with what society expects from a professional these days (e.g. work 8-16 hours a day, keep physically fit, save for old age and/or a potential health crisis, invest in your professional education - the list goes on).
Enter the AI model, which gives a pretty good simulation of a relationship for the cost of a monthly subway card, carries very little opportunity cost (simulation will stop for you at any time if something more important comes up), and needs no planning at all.
Risk of heartbreak (aka a potentially catastrophic psychiatric crisis - yes, such cases are common) and 'hell is other people' don't even have to factor in to make the relationship simulator appear like a good deal.
If people think 'relationship chatbots' are an issue, just wait for when - not if - someone builds a reasonably-well-working 'chatbot in a silicone-skin body' that's more than just a glorified sex doll. A physically existing, touchable, cooking, homemaking, reasonably funny, randomly-sensual, and yes, sex-simulation-capable 'Joi' (and/or her male-looking counterpart) is probably the last invention of mankind.
You may be right that RealPeople do seek RealInteraction.
But how many of each RealPerson's RealInteractions are actually that? It seems to me that lots of my own historical interactions were/are RealPersonProjections. RealPersonProjections and FakePerson interactions are pretty indistinguishable from within - over time, the characterisation of an interaction can change.
But then again, perhaps FakePerson interactions (with AI) will be a better developmental training ground than RealPersonProjections.
Ah - I'll leave it here - it's already too meta! Thanks for the exchange.
And of course Apple is quite right not to miss the marketing opportunity, on behalf of the shareholders - while acquiescing to lawful demands, of course.