VOT-PEL-JIC-RUD. VOT-RUD-JIC-TAM. These aren't word puzzles or acronym-making gone mad. Though the words are meaningless, they are sentences in a language—one of perhaps hundreds of miniature artificial languages that language scientists have created for their research.
These mini-languages are to real languages what Matchbox cars are to real cars. You won't find them spoken on the street or written online.
They are, however, useful laboratory simulations that show how humans learn languages in all their varied forms.
Such research seeks to understand how we learn the linguistic significance of patterns in speech: Where do words begin and end? What are the important words in a sentence? Which parts of language are learned first? How do babies and adults learn languages differently?
Real languages aren't very neat experimental tools. Researchers typically want to manipulate one variable while keeping everything else constant. "You can't do that with real languages," says Carla Hudson Kam, a developmental psycholinguist at the University of British Columbia who has used 20 to 30 mini-languages to study how babies, children, and adults learn aspects of language structure. "Real languages are messy."
By contrast, mini-languages are simple and easy to learn. A participant in a study can master them in an hour, or at most, a few weeks.
For instance, Hudson Kam and colleague Elissa Newport wanted to study how children learn language from non-native-speaking adults. How do the kids learn regular patterns when the input they hear is inconsistent? Hudson Kam and Newport created a mini-language with articles (like "the" and "a" in English) that appeared at random: sometimes they came before nouns, sometimes they didn't. After teaching the language to adults and kids and then testing them, they found that kids regularized the randomness more widely than adults did. Kids also created patterns of regularity in ways that didn't exist in what they heard.
Scientists use mini-languages to study babies because they're entirely novel. Even in utero, babies are learning what will become their native language (or languages). "If we want to see what they can learn and observe the factors that affect learning, we need to start them 'from scratch' on a language that they don't yet know," says developmental psycholinguist Louann Gerken at the University of Arizona.
Gerken sometimes manipulates real languages for her experiments. In one study, she and her team tested 17-month-olds on Russian grammatical structures that had been modified to be easier or harder to learn. Indeed, the babies learned the easier grammar more easily. "But we also found that babies who got the unlearnable grammar often failed to even finish the experiment," Gerken says. "They cried and squirmed and wanted to stop." Babies listened to learnable languages longer and stopped listening to unlearnable ones. They know, in other words, whether they're learning.
Hudson Kam prefers to use entirely artificial languages rather than teach study participants a deformed version of a real language. To make a fake language, you first isolate the linguistic structure that you want to study. You make sure that structure occurs in natural languages. Then you build some vocabulary around the structure. You test it to make sure it's not too much like languages that your participants will already know. Then you make a bunch of sentences for teaching and testing.
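The recipe above can be sketched as a small stimulus generator. Everything in this sketch is invented for illustration: the words, the article "ka," and the 60 percent probability are hypothetical stand-ins for the kind of inconsistent input described in the regularization experiments, not the actual materials the researchers used.

```python
import random

random.seed(0)

# A hypothetical mini-language: tiny vocabulary, fixed word order,
# and an article that appears only some of the time.
NOUNS = ["vot", "pel", "rud", "tam", "mib", "kag"]
VERBS = ["jic", "dap", "lun"]
ARTICLE = "ka"        # invented article
ARTICLE_PROB = 0.6    # the article shows up inconsistently

def noun_phrase():
    """Return a noun, preceded by the article only some of the time."""
    noun = random.choice(NOUNS)
    if random.random() < ARTICLE_PROB:
        return f"{ARTICLE} {noun}"
    return noun

def sentence():
    """Subject-verb-object, mirroring a simple fixed word order."""
    return f"{noun_phrase()} {random.choice(VERBS)} {noun_phrase()}"

# A small teaching set of deliberately inconsistent input.
corpus = [sentence() for _ in range(10)]
for s in corpus:
    print(s)
```

A learner exposed to output like this never hears a clean rule about the article, which is exactly the inconsistency the experiments then test participants on.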
The first language that Hudson Kam created took months, she says. Interestingly, each researcher invents their own artificial grammars; no central repository or library of mini-languages exists for researchers to draw from.
One fear about simulations is that they can escape their controlled settings and affect the real world. Maybe that could happen with languages invented for movies and television shows; in fact, that's already happened with Klingon, which you can now learn on Duolingo.com. There's little chance of mini-languages escaping, though. Their vocabularies are minuscule; they're not learned socially; often there's not even meaning attached to any of the words. VOT-PEL-JIC-RUD doesn't mean anything; it's just a string of sounds. For the regularization research, Hudson Kam and Newport's mini-language had only 51 words: 36 nouns, 12 verbs, one negative, and two articles.
Because the words in that language actually meant something, it felt like a real language to Hudson Kam despite its size. She named it "Sillyspeak." (Another experiment with a similar language that involved a German grad student was called "Sillysprochen.") In these experiments, the goal is to get study participants to actually speak the languages. Interestingly, the children in Sillyspeak experiments always ask to learn the bad words. That tells Hudson Kam that the kids think they're learning a real language, not playing a language game.
These toy languages, which have been used since at least the 1920s, have wide applications. In the early 1990s, linguists Neil Smith and Ianthi Tsimpli gave an unlearnable language, named Epun, to Christopher, an institutionalized British man with a striking ability to learn languages quickly. It was unlearnable because certain features of Epun were arranged based on the number of words in a sentence, not the structure of the sentence. Christopher's struggles with Epun confirmed certain linguistic theories' predictions about how human languages should be built. (It's worth noting that no matter how difficult Russian class is, all human languages can be learned—to get an unlearnable language, it has to be created.)
Despite their usefulness, artificial languages have some drawbacks, the most obvious of which is that they don't resemble languages in the wild. "That means you can't always be sure that your results will generalize to people learning real language," says Amy Perfors, a cognitive scientist at the University of Adelaide.
One way around this is to build linguistic properties into mini-languages that are inspired by real languages, then look for the same effects within real language. "That way you have the best of both worlds," Perfors says. "There's some experimental control using the artificial languages, and some real-world impact from the real languages."
Still, a surprising amount of our understanding of real languages comes from mini-languages: processes of infant language acquisition, as well as differences between adult and child learning. "The study of syntactic categories has almost exclusively been studied with artificial grammars," says Gerken.
Another use for artificial languages is in the study of language evolution. Tessa Verhoef, a postdoctoral researcher at the University of California at San Diego, teaches a slide whistle language to groups of adults, who then teach it to other adults. As the sounds are transmitted, they become simpler and more conventional.
The whistle language, along with some other artificial languages, allows Verhoef to manipulate the size of the group and other conditions to see what affects the simplifying process, simulating how groups of early humans might have interacted.
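One way to see why transmission chains simplify a language is with a toy simulation. This sketch is purely illustrative and is not Verhoef's model: each "generation" hears only a limited sample of the previous generation's signals (a learning bottleneck, a standard assumption in iterated-learning work), so rare variants drop out and the inventory of forms shrinks toward a smaller set of conventions.

```python
import random

random.seed(1)

ALPHABET = "abcde"

def random_signal():
    """An arbitrary short string standing in for a whistle pattern."""
    return "".join(random.choice(ALPHABET) for _ in range(random.randint(2, 6)))

def learn(signals, exposure=6):
    """A learner hears a bottlenecked sample and keeps only what it heard."""
    seen = [random.choice(signals) for _ in range(exposure)]
    return sorted(set(seen))

inventory = [random_signal() for _ in range(12)]   # generation 0: 12 signals
sizes = [len(set(inventory))]
for _ in range(10):                                # 10 generations of transmission
    inventory = learn(inventory)
    sizes.append(len(set(inventory)))

print(sizes)  # the count of distinct signals never grows generation to generation
```

Because each generation can only reproduce forms it actually heard, the distinct-signal count is non-increasing, which is the bare-bones version of signals becoming "more conventional" as they pass down a chain.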
Verhoef built the slide whistle language after her advisor remembered a BBC children's show, The Clangers, which featured an alien language made with slide whistles. That's a brilliant idea, Verhoef thought. "People don't have to be musical geniuses to do this experiment," Verhoef says, "and they find it really fun."
These may be toy languages, but when it comes to science, they have power.
Perfect Worlds is a series on Motherboard about simulations, imitations, and models.