Google has some of the most amazing artificial intelligence (AI) on the planet.
The company and its competitors in industry and academia are helping to create a new world in which AI will radically improve human health, make cities more efficient and deliver a thousand other important benefits.
There is some anxiety about the trend, however. One source of worry is that AI will take our jobs.
This fear has always accompanied new technologies. In 1967, futurist and political scientist Sebastian de Grazia predicted that by the year 2020 (which begins in a few weeks) automation technologies would drive the workweek down to 16 hours because most human labor would be made unnecessary. He saw this as a bad thing, because it would lead to “boredom, idleness, immorality, and increased personal violence.”
Fast-forward to the actual 2020, and the 16-hour workday is more likely than the 16-hour workweek, despite levels of automation de Grazia could scarcely imagine.
No, I’m not worried about AI taking our jobs. I’m worried about it taking our minds. In fact, Google is already working on it.
Why 2019 is the year Google started displacing human speech
AI can solve a million problems. One of those problems is the problem of human language.
This problem is especially acute for some engineering types. “Wait, you mean I can ‘talk’ to other people without actually having to talk to them?”
Over the past year, Google has introduced a great many AI-based products that talk for you — that construct sentences and interact with other people, so you don’t have to.
Google announced a technology called Duplex at last year’s Google I/O developers conference. Duplex calls restaurants and makes reservations for you. It can also answer the phone for you via a feature called Call Screen.
You activate Duplex by telling the Google Assistant to “make a dinner reservation.” The Assistant will ask you a few questions (how many in your party, what time and so on), then actually call the restaurant while impersonating a human.
With the Call Screen feature on Pixel phones and a few other Android smartphones, you can read a live transcript of the call as it happens and make choices, such as asking for more information. (Technical sleuthing may have revealed Google’s future plan to have Call Screen engage automatically, answering calls without the user explicitly telling it to.)
Duplex is so good at mimicking human speech, including human-like pauses and filler words like “um,” that the people on the other end of the conversation often can’t tell it’s a machine talking.
Google officially launched Duplex for the web this week as “Google Assistant in Chrome.” The feature does the dirty work of buying movie tickets, while presenting the user with a simplified user interface that asks “How many tickets would you like?” and other questions. You can also use it to rent a car.
Here’s why Duplex is weird: Humans and AI are partnering to make these communication tasks easier for the human. There’s a division of labor. But the humans are relegated to the machine role, simply feeding yes-or-no answers and numbers into the system (“four people at 8 o’clock” or “tell me more”), while the AI does the human part, forming sentences and engaging naturally with other humans using language.
Duplex isn’t the only way Google is using AI to talk for people.
AI and Google Docs
Earlier this year, Google also flipped a switch and enabled a new, advanced AI-based grammar checker for Google Docs. The machine translation techniques behind the system not only catch subtle grammar and even style issues, but will likely let the grammar checker evolve faster than it could previously. It functions less like a “traditional” grammar checker and more like a language translation system, translating from the language of bad grammar to the language of good grammar.
This week, Google announced that G Suite users will get more advanced AI grammar checking and spell checking, and will soon get AI-based autocorrect.
Even bigger news is that G Suite will also get a feature called Smart Compose for writing documents. Smart Compose guesses how you want to finish your sentences, and you can accept that guess by hitting the Tab key. Google introduced Smart Compose to Gmail earlier this year.
Again, the human’s role is relegated, essentially, to signaling intent by starting the sentence, and the AI does the writing by finishing it and completing the thought.
Google’s Arts & Culture Lab, software developer Ross Goodwin and designer Es Devlin even collaborated to produce AI-generated poetry. The project fed the AI more than 25 million words of 19th-century poems, enabling the algorithms to generate sometimes “nonsensical” and sometimes “poignant” verse. (What does it mean when a machine spits out data in the form of poetry and a person is moved by it?)
Google is also using AI for language translation.
Earlier this year, Google introduced Translatotron, which performs real-time speech-to-speech translation from one language to another, all while maintaining the speaker’s voice and cadence. The computer voice is speaking a language you don’t speak, but doing it in your voice. (I believe it’s only a matter of time before Duplex and other Google speech technologies communicate in your voice.)
Google, take this down
And finally, Google is replacing the chore of taking notes.
Google’s new Pixel 4 line of smartphones is getting mixed reviews. For critics, the phone is better than the Pixel 3, but not “better enough” to justify an upgrade.
But even the harshest critics concede that the phone’s ability to transcribe spoken English is amazing. The phone exclusively (for now) offers a feature called Recorder, which uses AI on the phone itself to transcribe in real time. As the technology spreads to other phones and is emulated by other companies, the need to actually take notes will evaporate.
That sounds great. Taking notes is tedious. Unfortunately, note-taking does two jobs. One is recording. The other is thinking: distilling, prioritizing and remembering. Google AI will surely do the first job. But who will do the second one?
Why AI that writes is wrong
The public is vexed by the prospect of AI “taking our jobs.” But it’s clear that our working relationship with AI will involve partnership, as well as replacement.
Our task as a society is to figure out which roles are best suited for humans, and which are best suited for machines.
A worker on an assembly line is essentially being used as a biological robot, doing activities that are literally dehumanizing and unnatural. Nature didn’t design us to stand in front of a conveyor belt and screw the lids on toothpaste tubes all day. If that job is taken by an AI robot, it’s probably bad in the short term for the human displaced but good in the long term for humanity.
The capacity for speech, however, is the very thing that makes us human. Technologies that cause the faculty of speech to atrophy and diminish are doing more damage than merely taking away our jobs. They’re taking away our humanity.
Any job that computers do for us is a job that humans stop being good at. For example, because we mostly write on digital media with keyboards (physical or on-screen), people are losing the ability to write with a pen or pencil. Older users experience a small and slow atrophy; many younger users never fully develop the capacity to hand-write or spell well because they don’t have to. Some very young kids coming into elementary school can type on a phone keyboard well, but have trouble even holding a pen or pencil (a malady called “motor dysgraphia”). Collectively, we’re losing these abilities.
And, really, that’s no big loss. We don’t need the ability to hand-write.
But what happens when our ability to form sentences and communicate with words atrophies and declines like our ability to physically use a pencil or pen?
What Google may not have fully considered is that writing isn’t just writing. Using language to communicate with other humans isn’t just an annoying chore. It’s deeply connected to our ability to think.
The right word, and a large vocabulary, enables thought to have nuance and specificity. A good sentence is a complete thought. A good paragraph is the articulation of an idea. A good string of paragraphs lays out a cogent or persuasive argument or a story or conveys facts in a way that enables one human to understand another.
Language is the “user interface” between one person and another — it’s the glue that binds us as a society.
The capacity to write requires constant practice. Our writing holds a mirror up to our thoughts, and lets us reflect on our own words and adjust our tone, our facts, our ideas and our purpose before sharing all that with others.
As AI-powered language tools from Google and others make calls for us, answer our phones, fix or finish our sentences, take our notes, and generally replace the need for humans to think about, improve and master language, what will happen to our ability to write? What will happen to our ability to think?
In other words: Will Google’s artificial intelligence make people artificially stupid?
While everyone is worrying about AI stealing jobs, I think we need to instead turn our attention to AI stealing our minds.