A.I. Is Mastering Language. Should We Trust What It Says?

‘‘I think it lets us be more thoughtful and more deliberate about safety issues,’’ Altman says. ‘‘Part of our strategy is: Gradual change in the world is better than sudden change.’’ Or as the OpenAI V.P. Mira Murati put it, when I asked her about the safety team’s work restricting open access to the software, ‘‘If we’re going to learn how to deploy these powerful technologies, let’s start when the stakes are very low.’’

While GPT-3 itself runs on those 285,000 CPU cores in the Iowa supercomputer cluster, OpenAI operates out of San Francisco’s Mission District, in a refurbished luggage factory. In November of last year, I met with Ilya Sutskever there, trying to elicit a layperson’s explanation of how GPT-3 really works.

‘‘Here is the underlying idea of GPT-3,’’ Sutskever said intently, leaning forward in his chair. He has an intriguing way of answering questions: a few false starts — ‘‘I can give you a description that almost matches the one you asked for’’ — interrupted by long, contemplative pauses, as though he were mapping out the entire response in advance.

‘‘The underlying idea of GPT-3 is a way of linking an intuitive notion of understanding to something that can be measured and understood mechanistically,’’ he finally said, ‘‘and that is the task of predicting the next word in text.’’ Other forms of artificial intelligence try to hard-code information about the world: the chess strategies of grandmasters, the principles of climatology. But GPT-3’s intelligence, if intelligence is the right word for it, comes from the bottom up: through the elemental act of next-word prediction. To train GPT-3, the model is given a ‘‘prompt’’ — a few sentences or paragraphs of text from a newspaper article, say, or a novel or a scholarly paper — and then asked to suggest a list of potential words that might complete the sequence, ranked by probability. In the early stages of training, the suggested words are nonsense. Prompt the algorithm with a sentence like ‘‘The writer has omitted the very last word of the first . . . ’’ and the guesses will be a kind of stream of nonsense: ‘‘satellite,’’ ‘‘puppy,’’ ‘‘Seattle,’’ ‘‘therefore.’’ But somewhere down the list — perhaps thousands of words down the list — the correct missing word appears: ‘‘paragraph.’’ The software then strengthens whatever random neural connections generated that particular suggestion and weakens all the connections that generated incorrect guesses. And then it moves on to the next prompt. Over time, with enough iterations, the software learns.
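(The sketch below is an editorial illustration, not OpenAI’s code: a toy word-counting model stands in for GPT-3’s 175 billion neural-network weights, mimicking the loop Sutskever describes, ranking candidate next words by probability and then strengthening whichever guess turns out to be right.)

    # Toy illustration of next-word-prediction training (not OpenAI's code).
    # A bigram counter stands in for GPT-3's neural network: it ranks candidate
    # next words by probability, then reinforces whichever guess was correct.
    from collections import defaultdict

    counts = defaultdict(lambda: defaultdict(float))  # counts[context][next_word]

    def predict(context):
        """Return candidate next words, ranked by current probability."""
        candidates = counts[context]
        total = sum(candidates.values()) or 1.0
        return sorted(((word, n / total) for word, n in candidates.items()),
                      key=lambda pair: pair[1], reverse=True)

    def train_step(context, correct_word):
        """Strengthen the 'connection' that produced the correct word."""
        counts[context][correct_word] += 1.0

    # One pass over a tiny corpus; the real model repeats this over hundreds of
    # billions of words, nudging 175 billion weights instead of simple counts.
    corpus = "the writer has omitted the very last word of the first paragraph".split()
    for prev_word, next_word in zip(corpus, corpus[1:]):
        train_step(prev_word, next_word)

    print(predict("the"))  # e.g. [('writer', 0.33), ('very', 0.33), ('first', 0.33)]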

This past January, OpenAI added a feature that allowed users to give GPT-3 direct instructions as a prompt, rather than simply asking it to expand on a sample passage of text. For instance, using the ‘‘instruct’’ mode, I once gave GPT-3 the prompt: ‘‘Write an essay discussing the role of metafiction in the work of Italo Calvino.’’ In return, the software delivered a tightly constructed five-paragraph précis that began as follows:

Italian author Italo Calvino is considered a master of metafiction, a genre of writing in which the author breaks the fourth wall to discuss the act of writing itself. For Calvino, metafiction is a way of exploring the nature of reality and the ways in which stories can shape our perceptions of the world. His novels often incorporate playful, labyrinthine structures that play with the boundaries between reality and fiction. In If on a winter’s night a traveler, for example, the reader is constantly interrupted by meta-level discussions of the act of reading and the nature of storytelling. . . .

You can give GPT-3 the exact same prompt, over and over, and each time it will generate a unique response, some of them more persuasive than others but almost all of them remarkably articulate. Instruct prompts can take all kinds of forms: ‘‘Give me a list of all the ingredients in Bolognese sauce,’’ ‘‘Write a poem about a French coastal village in the style of John Ashbery,’’ ‘‘Explain the Big Bang in language that an 8-year-old will understand.’’ The first few times I fed GPT-3 prompts of this ilk, I felt a genuine shiver run down my spine. It seemed almost impossible that a machine could generate text so lucid and responsive based entirely on the elemental training of next-word prediction.
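(For the technically curious, an ‘‘instruct’’ request was, under the hood, an ordinary completion call whose prompt is a direct command. The snippet below is a rough sketch using the openai Python library as it existed in early 2022; the engine name and parameters are illustrative guesses, not necessarily the exact settings behind the Calvino essay above.)

    # Rough sketch of an instruct-style request with the openai Python library
    # (the pre-1.0 interface, circa 2022). The engine name and parameters are
    # illustrative guesses, not the exact settings used for the essay above.
    import openai

    openai.api_key = "sk-..."  # your API key goes here

    response = openai.Completion.create(
        engine="text-davinci-002",  # an instruction-following GPT-3 engine
        prompt="Write an essay discussing the role of metafiction in the work of Italo Calvino.",
        max_tokens=400,
        temperature=0.7,  # nonzero temperature is why the same prompt yields a new essay each run
    )

    print(response.choices[0].text)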

But A.I. has a long history of creating the illusion of intelligence or understanding without actually delivering the goods. In a much-discussed paper published last year, the University of Washington linguistics professor Emily M. Bender, the ex-Google researcher Timnit Gebru and a group of co-authors declared that large language models were just ‘‘stochastic parrots’’: that is, the software was using randomization to merely remix human-authored sentences. ‘‘What has changed isn’t some step over a threshold…


