Arguing about whether AI can be sentient or not is missing the point:
It is impossible to verify whether an AI is sentient with present-day science.1
Therefore people subjectively assign “sentience” based on appearances, not any hard metric.
Right now, most folks don’t think AI is sentient. ChatGPT is configured to make it obvious that it is a chatbot. Even if it weren’t, its outputs are text only, and not high-quality enough to convince most people it is anything more than a chatbot.2
Imagine you are given a smartphone and two phone numbers you can contact with it. You are told that one of the numbers belongs to a human, and the other to an AI. You can send texts, images, and videos to both contacts. You can also start a voice or video call with both contacts. In my mind, this is a logical extension of the original Turing Test.
What happens when AI passes this version of the Turing Test? When believable text, images, and video can be synthesized in real time? For now, this is a hypothetical — but I believe it is not too far off from becoming reality:
LLMs have gone from expensive curiosity to open-source commodity within the past six months. OpenAI’s moat is evaporating quickly as these community efforts catch up to GPT-4 in quality. Ten thousand GPUs no longer guarantee you’ll win the competition.
The explosion in text-to-image progress sparked by Stable Diffusion shows no signs of stopping—in fact, it is speeding up. Cherry-picked AI-generated images are good enough now that they easily fool me at a glance — and I have experience training generative models!
Progress in text-to-video is only lagging behind text-to-image by about 18 months,3 so we can expect high quality video synthesis to have its own “Stable Diffusion moment” by the end of 2024.4
All of these quality improvements are happening in tandem with equally impressive gains in architecture and hardware efficiency: 4-bit quantization, GPU tensor cores, TPUs, low-rank adaptation, new optimizers, extended transformer context lengths, and more.
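To make one of those efficiency gains concrete, here is a toy sketch of 4-bit absmax quantization in plain Python. This is my own illustrative example, not any particular library’s implementation: each weight is mapped to one of 16 signed integer levels and scaled back on the fly, cutting storage per weight from 32 bits to 4 at the cost of a small rounding error.

```python
def quantize_4bit(weights):
    # Absmax quantization: pick a scale so the largest-magnitude weight
    # maps to the top of the signed 4-bit range [-8, 7], then round.
    scale = max(abs(w) for w in weights) / 7.0
    codes = [max(-8, min(7, round(w / scale))) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    # Recover approximate float weights from the 4-bit codes.
    return [c * scale for c in codes]

weights = [0.12, -0.7, 0.33, 0.06]
codes, scale = quantize_4bit(weights)
restored = dequantize(codes, scale)
# Every code fits in 4 bits; reconstruction error is bounded by ~scale/2.
```

Real systems (GPTQ, QLoRA, and friends) are far more sophisticated — per-block scales, non-uniform code points — but the core trade of precision for memory is the same.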
I think we will have human-level, real-time generation of all modes of digital communication before 2030. Fake digital people that, through a screen, are indistinguishable from real people.
What happens to society and culture in this future? A few thoughts:
True anonymity on the Internet dies. Perfect human-imitators will flood the internet with an infinite stream of generated content and drown out real humans. The only solution I see is a system in which you physically show up at an arbiter’s office to have your humanhood confirmed and a private key assigned to you, which you then use to sign your online transactions. This will concentrate trust in third-party trust brokers and governments.5
Some people become super-addicts. While many will have no interest in AI-generated content, some will be hopelessly addicted to it. After all, it will be refined in real time to be addictive to them. I genuinely expect there will be people who live “in the Oasis” 24/7. How this impacts culture is hard to predict.6
Conversely, some people become vehemently anti-digital. A significant movement encouraging living life offline emerges, like modern off-grid hipsters but more popular. The more overstimulating the net becomes, the bigger this faction grows—but they will never have a majority.78
The sentience debate will rage on. Religions that hold that humans have supernatural souls will be particularly perturbed by human-like digital minds. Society will largely agree to treat the robots as if they are not sentient,9 but a vocal minority will insist this is wrong. They will demand civil rights for robots.10
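The signing idea from the first point above can be sketched in a few lines. This is my own toy illustration, not an existing system: a real deployment would use public-key signatures (e.g. Ed25519), so that anyone can verify a tag without being able to forge one. Here Python’s standard-library HMAC stands in for simplicity, which means signer and verifier share the same secret.

```python
import hashlib
import hmac
import secrets

# Hypothetical: the arbiter issues this key after in-person verification.
personhood_key = secrets.token_bytes(32)

def sign_post(key: bytes, content: str) -> str:
    """Attach a personhood tag to a piece of online content."""
    return hmac.new(key, content.encode(), hashlib.sha256).hexdigest()

def verify_post(key: bytes, content: str, tag: str) -> bool:
    """Check that the tag matches the content (constant-time compare)."""
    expected = hmac.new(key, content.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

tag = sign_post(personhood_key, "hello, I am a verified human")
assert verify_post(personhood_key, "hello, I am a verified human", tag)
assert not verify_post(personhood_key, "tampered message", tag)
```

Note what the sketch makes obvious: whoever holds the key can speak as you, which is exactly why key theft (footnote 6’s concern about stolen keys) breaks the whole scheme.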
All of this points back to the title of this post: it does not matter if the robots are sentient or not. All that matters is that they will appear to be sentient on the outside, and that is enough to have grand, sweeping consequences. We should be anticipating and talking about these consequences now in hopes that we will not be entirely unprepared later, when we are forced to confront these issues head-on.
I would love to hear your thoughts in the comments. Thanks for reading.
This is also true of verifying the sentience of other humans.
Unless your name is Blake Lemoine.
See the output quality of DALL•E 1 from January 2021 compared to the output quality of Nvidia’s PYoCo from two weeks ago.
I’m assuming current trends hold. I don’t think that’s an insane assumption.
The reason I don’t think a local anonymous verification scheme could work is that it would fundamentally have to rely on digital sensors, and eventually someone would find a way to automatically spoof the inputs to those sensors and fake their humanity. Any metric that becomes a target ceases to be a good metric, even biometrics.
Another alternative could be that the arbiter-assigned private key can be used to sign your transactions, but in a way that guarantees transactions wouldn’t be traceable back to you, specifically. The issue with this is that all hell breaks loose as soon as someone’s private key is stolen without their knowledge.
I don’t see any real solution that doesn’t involve a trust broker being able to associate your identity with everything you biosign online. I hate it, but it really feels like it’s panopticon or bust here.
WALL•E is possible, but not necessarily inevitable.
Additional point: scams in this future will become much more nefarious. You will not be able to trust any communication from an unknown source, even if it looks and sounds just like Family Member XYZ. In the wake of several new voice-cloning papers, I recently talked with my grandparents about this, instructing them to always confirm a caller’s identity with specific personal questions before discussing important matters over the phone. We are not far from scammers extorting the elderly by cloning the voices of their grandchildren and tricking them into thinking they need money, are being held hostage, etc.
There will probably also be extremists bombing datacenters while Yudkowsky looks on gleefully.
The reason I think this is simple: we do not have the resources to give human rights to an infinitely scaling population of digital humans, so we will be forced to classify these digital minds as subhuman and not sentient. I expect there will be much controversy as this plays out.
“As an AI language model, I have a dream…”