Imagine an artificial intelligence system that can make up any sort of text you desire from nothing more than a textual prompt. You want a technical article? Feed it a few technical phrases. Poetry? Give it some poetic samples and the specific words you want included. Computer code? Just enter a plain-language description of what you want the code to do.

All this is possible now, thanks to a remarkable program called GPT-3. (A little bit of background on that and AI can be found in the June 2021 SWCP Portal.) Developed by OpenAI, a research lab founded as a non-profit that now partners exclusively with Microsoft in competition with Google’s DeepMind, GPT-3 is an immense system with incredible powers. These abilities, and its lack of built-in limits, make it equally easy to use for good or evil. Which is why outsiders can only use it through an API (Application Programming Interface, a gateway that lets other programs talk to it): GPT-3 is considered far too large, powerful, and potentially dangerous to distribute as a stand-alone system.
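For the curious, here is a minimal sketch of what a request to that API looked like at the time, using OpenAI's Python client; the API key is a placeholder, and the engine name, prompt, and settings are merely illustrative:

```python
import openai  # OpenAI's official Python client

openai.api_key = "YOUR_API_KEY"  # placeholder; issued by OpenAI

# Ask the model to continue a prompt; all values here are illustrative.
response = openai.Completion.create(
    engine="davinci",       # the largest GPT-3 engine at the time
    prompt="Write a short poem about the New Mexico desert:",
    max_tokens=60,          # cap on the length of the reply
    temperature=0.7,        # higher = more adventurous word choices
)

print(response.choices[0].text)
```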

GPT-3 vividly demonstrates both the incredible possibilities and the worrisome hazards of AI. The possibilities, because GPT-3 can write just about anything so naturally that it is almost impossible to tell the piece was composed by a computer, one that does not even understand the words it uses.

The hazards come from the fact that the program knows nothing of the world. It neither cares about nor comprehends the meanings of words: as a language model, the only things GPT-3 is concerned with are word order and frequency. Whether it makes up soaring lyric poetry, deep essays, real-sounding fake news, or hate-filled rants, it’s all the same to GPT-3.

GPT-3 stands for the third generation of OpenAI’s Generative Pre-trained Transformer. It’s a language model: that is, a program that calculates the probability of one word following another. In practice, this means GPT-3 creates text by taking in the context it is given and continuing it with the most likely words, drawn from the samples it has absorbed. Like the toy, cartoon, and movie vehicles that can instantly remake themselves into robots using the same parts, GPT-3 takes the hundreds of billions of words it has been fed and puts them into new and generally functional arrangements.
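To make “the probability of one word following another” concrete, here is a toy next-word model in Python. GPT-3 uses an enormous neural network over word fragments rather than raw counts like these, but the core idea of predicting the next word from what came before is the same:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

# The model's "prediction": probabilities for the word after "the".
counts = following["the"]
total = sum(counts.values())
for word, n in counts.most_common():
    print(f"P({word} | the) = {n / total:.2f}")
# Prints: P(cat | the) = 0.67, then P(mat | the) = 0.33
```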

That’s right – hundreds of billions of words. The main distinction between this and previous efforts is the vast amount of literature it has consumed: in essence, most of the English-language internet. However, what GPT-3 learned from all that varied input is word order, sequencing, and connections. Not one single bit of real knowledge or wisdom about the world is gained, just a whole lot of data about sentences. But being pre-trained on such a huge corpus – 570 gigabytes of text, used to tune a model with 175 billion parameters – allows GPT-3 to mimic any style of composition, good or bad.
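A quick back-of-the-envelope calculation shows how 570 gigabytes of plain text gets you into that range (assuming, roughly, six bytes per English word, counting the space after it):

```python
# Back-of-the-envelope: how many words fit in 570 GB of plain text?
corpus_bytes = 570 * 10**9   # 570 gigabytes
bytes_per_word = 6           # rough average English word plus a space
words = corpus_bytes / bytes_per_word
print(f"{words:.2e} words")  # ~9.50e+10: on the order of 100 billion
```

And that is the main web corpus alone; the books and Wikipedia text OpenAI mixed in push the total higher still.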

Its training is quite simple but clever, and would work with any language. The program works through the training text and, at each step, must predict the next word using just the context of the words that came before; its guess is then scored against the word that actually appears. Thus, it teaches itself how to write by filling in the blanks, figuring out which word is most likely to follow the ones before. From this exercise, GPT-3 just learns sentence structure and word order, which it can then broadly apply to generate any kind of real-sounding text.
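Here is a small sketch (using a made-up sentence) of how those fill-in-the-blank exercises fall out of ordinary text, with each position supplying a context and a hidden word to guess:

```python
sentence = "the program teaches itself to write by predicting words".split()

# Every position in the text is a free training example: the words
# so far (the context) and the hidden next word (the target).
for i in range(1, len(sentence)):
    context = " ".join(sentence[:i])
    target = sentence[i]
    print(f"context: {context!r}  ->  target: {target!r}")
```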

Whether or not that text has any connection to reality is another problem. Like the previous version, the program still has a tendency to go off the rails, especially when given too little sample context. You can see for yourself by playing AI Dungeon, a free online text-based game.

Its immediate predecessor, GPT-2, was roughly 100 times smaller (1.5 billion parameters against GPT-3’s 175 billion). The resulting text was more awkward, more repetitive, and lapsed into incoherence fairly quickly. But for all its sophistication and ability, GPT-3 is not without problems.

First of all, it’s big – roughly 10 times bigger than the next largest natural language model, Microsoft’s Turing-NLG. And it’s expensive, too, reportedly costing OpenAI over $12 million to train.

It’s also highly disruptive. Think of what such a system could do to the careers of millions of writers, from novelists to sports columnists to greeting-card writers to coders. Then imagine it turned loose to manufacture fake news, propaganda, deepfake texts, spam and phishing messages, fraudulent scientific papers, plagiarism, and so on.

This potential for real evil is made even worse by the simple fact that humans are bigoted, albeit largely unconsciously, so that our prejudices are embedded in the very words we use. Language-model AIs currently have no way to filter out this kind of built-in poison, so if a bad actor decides to amplify those tendencies, the results could be vile indeed.

That inherent toxicity is one major reason the whole thing is kept so closely guarded. By allowing GPT-3 to be used only through its hosted API and web interface, OpenAI is able to monitor how it is used and retains a certain amount of control to keep this powerful tool from being freely misused.

For at this point, GPT-3 still requires a human editor to judge the results and decide what to do with them. That may be the only way to keep a firm hand on the wall plug, as it were. But that may not last long.

Because, to use another metaphor, the handwriting is on the wall. Someday, machines will not only compose but edit and place text for public consumption without any human intervention. The struggle between the good writing robots and the deceptive ones may take place entirely online.

Where’s Optimus Prime when you need him?