I kind of prefer a credit card over anything else if I have to do it. I give out my CC pretty regularly already, so not much new PII to lose there. But it does sound like Apple has bugs to work out.
Yes. The point of the post is that it's very American to assume that every adult has a credit card. I'm in my thirties and I've never had, nor plan to have, a credit card. I've only ever had debit cards. In the countries where I was raised and have lived, it's a sign of poverty and total dependency on the bank, with an additional tax on your living, not the everyday tool Americans perceive it to be.
Debit cards can be issued to minors, so I suppose that's why they don't accept them.
In the UK, having a credit card is an overwhelmingly good move even if you never use the facility for credit. You can set up a direct debit to pay it off in full every month, making it effectively a debit card, but you get what are known as Section 75 protections on all purchases. So if you’re buying online and the firm goes bust (or you for any other reason don’t receive your goods), the credit card firm has to compensate you in full. For this reason I always make larger online purchases on credit card.
For many, obtaining a credit card just for the purposes of age verification, and not using it for shopping, feels easier than giving away their legal identifying information to a random third party.
In the US you're usually inundated with offers to open a credit card (often pre-approved) right in your mailbox. Even if you're a poor recent immigrant, or something.
Probably, but getting a CC you'll never use just so you can use your own phone sounds a bit weird, don't you think?
And I'm not criticizing the US way of living here, but Apple is an international company and could do a better job of adjusting to local cultural habits. But maybe they're just punishing people for this stupid law in the first place, which is totally understandable.
Banks are subject to much more scrutiny (regulations, audits) than a random company, or even than a highly established company you'd rather not give your identity to, like Pornhub.
You must live in an especially civilized place to be able to get by without a credit score. I wish I could close all my cards, but doing so would hurt the score, since the number of cards and the age of the accounts factor into it.
Credit cards are a sign of poverty? Now that's a hot take.
I feel that in Europe having a credit card means the complete opposite: only "rich" people have credit cards.
I have a credit card, I use it, I pay it off every month. Why am I seen as poor just because I have a credit card? It's just a tool.
It spares me from needing to maintain a $10,000 emergency fund in my checking account.
And in post-Soviet countries, you blink and you owe 15%+ interest. I know many people who couldn't meet basic needs while paying never-ending interest, or who forgot to close out the debt and lost more in one payment than they ever gained from the tool. So people who can pay out of pocket just do that instead of endlessly tracking grace periods and counting the money.
I'm not implying it's the same everywhere; it probably also depends on local regulation and interest rates.
Also, people here generally don't like owing anyone money; it feels insecure.
At least in the UK, it would be entirely legal for companies to use account age as a proxy for verifying you're over the age of 18. If your Apple account is over 18 years old, then you probably are as well.
Hearthstone's battle pass isn't really comparable to Fortnite cosmetics. Hearthstone is pay-to-win: the majority of new cards are better than old ones.
Each solvable problem contains its solution intrinsically, so to speak; it's only a matter of time and resources to get to it. There's nothing creative about that, which I think is what OP was alluding to (the creative part). I'm talking mostly about mathematics.
There's also a discussion to be had about maths not being intrinsically creative if AI automatons can “solve” parts of it, which pains me to write down, because I had really thought that wasn't the case; I genuinely thought that deep down there was still something ethereal about maths. But I'll leave that discussion for some other time.
Because economics. Look at Marvel movies: do you think the latest one is really new, or just a rehash of what they found works commercially? Look at all the AI-generated blog posts flooding the internet...
LLMs might produce something new once in a long while through blind luck, but if they can generate something that pushes the right buttons for the majority of the population (i.e., not really creative), then that is what we will keep getting...
I don't think I have to elaborate on the "multiplying the bad" part, as it is pretty well acknowledged.
I think there's demonstrably very little difference at all between human and AI outputs, and that's exactly what freaks people out about it. Else they wouldn't be so obsessed with trying to find and define what makes it different.
The thesis of Everything Is a Remix is that there is no difference in how any culture is produced. Different models will have a different flavor to their output, in the same way that different people contribute their own experiences to a work.
> I think there's demonstrably very little difference at all between human and AI outputs
Bold claim, as the internet is awash with counterexamples.
In any case, as I think this conversation is trending towards theories of artistic expression, “AI content” will never be truly relatable until it can feel pleasure, pain, and other human urges. The first thing I often think about when I critically assess a piece of art, like music, is what the artist must have been feeling when they created it, and what prompted them to feel that way. I often wonder if AI influencers have ever critically assessed art, or if they actually don’t understand it because of a lack of empathy or something.
And relatability, for me, is the ultimate value of artistic expression.
> “AI content” will never be truly relatable until it can feel pleasure, pain, and other human urges. The first thing I often think about when I critically assess a piece of art, like music, is what the artist must have been feeling when they created it, and what prompted them to feel that way.
I recently watched "Come See Me in the Good Light", about the life and death of poet Andrea Gibson. I find their poetry very moving, precisely because it's dripping with human emotion.
Or at least, that's the story I tell myself. The reality is that I perceive it to be written by a human full of emotion. If I were to find out it was AI, I would immediately lose interest, but I think we're already at the point where AI output is indistinguishable from human output in many cases, and if I perceive art to be imbued with human emotion, the actuality of it only matters in terms of how it shapes my perception of it.
I'm not really sure where we'll go with that from here. Maybe art will remain human-created only, and we'll demand some kind of proof of provenance, that it was born of a human mind and a human heart. Or maybe younger generations will really only care about how art makes them feel, not what kind of intelligent entity made it. I really don't know.
> Bold claim, as the internet is awash with counterexamples.
What do you consider a counterexample? Because I've been involved in local politics lately, and can say from experience that any foundation model is capable of more rational and detailed thought, and more creative expression, than most of the beloved members of my community.
If you're comparing AI to the pinnacle of human achievement, as another commenter did in pointing to Shakespeare, then I think the argument is already won in favor of AI.
> I think there's demonstrably very little difference at all between human and AI outputs
Counterexamples range from em-dashes and “not this, but that” constructions to people complaining about AI music on Spotify (including me) that sounds vaguely like a genre but is missing all of the instrumentation and motifs common to that genre.
The rest of your comment I don’t even know how to respond to, to be honest.
You're really going to make the claim that there are no counterexamples of human and AI output being indistinguishable on the internet? At least make the counterclaim that “those are from old models, not the newest ones”; that's more intellectually invigorating than the comment you just provided.
> claim that there are no counterexamples of human and AI output being indistinguishable on the internet?
Is that a claim I've made? I don't see it anywhere. I think a lot of people believe that because they can get the AI to generate something silly or obviously incorrect, that invalidates other output that is on par with top-level humans. It does not. Every human holds silly misconceptions as well. Brain farts. Fat fingers. Great lists of cognitive biases and logical fallacies. We all make mistakes.
It seems to me that symbolic thinking necessitates the use of somewhat lossy abstractions in place of the real thing, primarily limited by the information which can be usefully stored in the brain compared to the informational complexity of the systems being symbolized. Which neatly explains one cognitive pathology that humans and LLMs share. I think there are most certainly others. And I think all the humans I know and all the LLMs I've interacted with exist on a multidimensional continuum of intelligence with significant overlap.
I hereby rebuff your crude and libelous mischaracterization of my assertion. How's that? :)
You said AI works were easily distinguishable via em-dashes and "not this, but that".
I said I have witnessed humans using that metric to accuse other humans here on Hacker News. Q.E.D.
You've asserted that they are easily distinguished. Practitioners in the field fail to distinguish using the same criteria. Is that not dispositive? Seems like it to me.
I claimed much earlier in the thread "I think there's demonstrably very little difference at all between human and AI outputs" which is consistent with "I think all the humans I know and all the LLMs I've interacted with exist on a multidimensional continuum of intelligence with significant overlap."
Two ways of saying the same thing.
Both of them suggest that sometimes you may be able to tell something is the output of an AI or a human, and sometimes not. Sometimes the things coming out of the AI or the human might be smart in a way we recognize, sometimes not. And both recognize that humans already span quite a broad range of intelligence along many axes.
I was not saying that LLMs cannot produce something like the pinnacle of human achievement. I was saying we cannot quantify the difference between Shakespeare and something commonplace, because that requires the ability to feel.
> demonstrably very little difference at all between human and AI outputs
Is there "demonstrably" a lot of difference between Shakespeare and an HN comment?
The point is exactly that there is no such difference, and that this enables slop to be sold as art. That is exactly the danger. But another point is that we had this even before LLMs; LLMs just make it more explicit and possible at scale.
Conrad Gessner had the very same complaint in the 16th century, noting the overabundance of printed books and fretting about shoddy, trivial, or error-filled works (https://www.jstor.org/stable/26560192).
You can and you can't. There are still plenty of very popular libraries that don't behave correctly with it. That's more on the libraries than on the compiler, but that's the current state of things.
No, but I understand why you ask. I just refuse to engage with certain figures of speech because I don't want to contribute to normalizing them. Don't say "would you approve if" if that's not what you mean. Say "what is your opinion on" or "what's your view on" or "how do you think about" or "how does that compare to" because that's what you actually mean. I'm not on the spectrum, but I think that when it comes to many things, language included, simpler is better. As simple as possible, but not simpler.
Have a look at the raw clipboard data with something like https://evercoder.github.io/clipboard-inspector/ and you'll see how it's all set up: a bunch of markup that can be obtained from any Google Doc, with the font name updated.
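If you'd rather not use a third-party page, here's a minimal sketch of what such an inspector does under the hood, using only standard DOM clipboard APIs (presumably that page is roughly this plus some presentation around it):

    // Minimal clipboard inspector: listen for paste events and dump
    // every MIME type the clipboard carries, along with its payload.
    document.addEventListener('paste', (event) => {
      const data = event.clipboardData;
      if (!data) return;
      for (const type of data.types) {
        // Google Docs content shows up under 'text/html', which is
        // where you'd see the markup with the font name to tweak.
        console.log(type, data.getData(type));
      }
    });

Run that on any page, paste from a Google Doc, and the raw types and payloads land in the console.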