The big problem, of course, is that no one knows what sentience requires.
If strong AI requires quantum mechanics, then of course it can't be sentient. If strong AI only requires large linear algebraic matrices, then FAANG companies (and maybe the NSA) would be the only people on earth that can make one.
But as for this LaMDA, it seems as if it's only responding to prompts. It's not actually using computing resources to satisfy its own curiosities. And if that's the case, then it's not strong AI. And it's definitely not sentient.