In Hopes of Healthier Chickens, Farms Turn to Oregano. By Jessica Kourkounis
Oregano lies loose in trays and tied into bunches on tabletops and counters, and a big, blue drum that held oregano oil stands in the corner. “Have you ever tried oregano tea?” Mr. Sechler asked, mashing leaves between his broad fingers.
Off and on over the last three years or so, his chickens have been eating a specially milled diet laced with oregano oil and a touch of cinnamon. Mr. Sechler swears by the concoction as a way to fight off bacterial diseases that plague meat and poultry producers without resorting to antibiotics, which some experts say can be detrimental to the humans who eat the meat. Products at Bell & Evans, based in this town about 30 miles east of Harrisburg, have long been free of antibiotics, contributing to the company’s financial success as consumers have demanded purer foods.
But Mr. Sechler said that nothing he had used as a substitute in the past worked as well as oregano oil.
“I have worried a bit about how I’m going to sound talking about this,” he said. “But I really do think we’re on to something here.”
Skeptics of herbal medicines abound, as any quick Internet search demonstrates. “Oil of oregano is a perennial one, advertised as a cure for just about everything,” said Scott Gavura, a pharmacist in Toronto who writes for the Web site Science-Based Medicine. “But there isn’t any evidence, there are too many unanswered questions and the only proponents for it are the ones producing it.”
Nonetheless, Mr. Gavura said he would welcome a reduction in the use of antibiotics in animals.
At the same time, consumers are growing increasingly sophisticated about the content of the foods that they eat.
Data on sales of antibiotic-free meat is hard to come by, but the sales are a tiny fraction of the overall meat market. Sales in the United States of organic meat, poultry and fish, which by law must be raised without antibiotics, totaled $538 million in 2011, according to the Organic Trade Association. By comparison, sales of all beef that year were $79 billion.
Still, retailers like Costco, Whole Foods and Trader Joe’s, as well as some restaurant chains, complain that they cannot get enough antibiotic-free meat.
Noodles & Company, a fast-growing chain of more than 300 restaurants, recently added antibiotic-free pork to the choices of ingredients that customers can add to their made-to-order pastas. It ensured its supply by ordering cuts of meat that were not in relatively high demand and by committing in advance to buy a year’s worth, said Dan Fogarty, its executive vice president for marketing.
“We’re deliberately voting with our pocketbooks,” he said.
In a nationwide telephone survey of 1,000 adults in March, more than 60 percent told the Consumer Reports National Research Center that they would be willing to pay at least 5 cents a pound more for meat raised without antibiotics.
“Before, it was kind of a nice little business, and while it’s still microscopic in the grand scheme of things, we’re seeing acceptance from retailers across the country, not just in California and on the East Coast,” said Stephen McDonnell, founder and chief executive of Applegate, an organic and natural meats company.
Mr. McDonnell said a confluence of trends, from heightened interest in whole and natural foods to growing concerns about medical problems like diabetes, obesity and gluten allergies, were contributing to the demand for antibiotic-free meat.
There is growing concern among health care experts and policy makers about antibiotic resistance and the rise of “superbugs,” bacteria that are impervious to one or more antibiotics. Those bacteria can be passed on to consumers through contaminated meat, causing infections that the usual drugs cannot treat.
In November, the Centers for Disease Control and Prevention and 25 national health organizations and advocacy groups issued a statement on antibiotics that, among other things, called for “limiting the use of medically important human antibiotics in food animals” and “supporting the use of such antibiotics in animals only for those uses that are considered necessary for assuring animal health.”
In 2011, there were several prominent recalls involving bacterial strains that are resistant to antibiotics, including more than 60 million pounds of ground beef contaminated with salmonella Typhimurium and about 36 million pounds of ground turkey contaminated with salmonella Heidelberg.
Consumer Reports released a study last month that found the bacteria Yersinia enterocolitica in 69 percent of 198 pork chop and ground pork samples bought at stores around the country. Some of the bacteria were resistant to one or more antibiotics.
Analysis of Food and Drug Administration data by the Center for Science in the Public Interest found that 80 percent of all antibiotics sold in the United States are used in animals. The majority of those antibiotics are used to spur growth or prevent infections from spreading in the crowded conditions in which most animal production takes place today.
The European Union has banned the use of antibiotics to accelerate growth, and the European Parliament is pushing to end their use as tools to prevent disease as well.
The oregano oil product Mr. Sechler uses, By-O-Reg Plus, is made by a Dutch company, Ropapharm International. In the late 1990s, Bayer conducted trials on the product, known as Ropadiar in Europe, comparing its ability to control diarrhea in piglets caused by E. coli with that of four of the company’s products.
A transformation not seen in 3 million years spells big changes for the unique ecosystem of the extreme north
“WE ARE witnessing the early stages of the transformation of the Arctic,” says Louis Fortier of Laval University in Quebec City, Canada. For millennia, the top of the planet has been the preserve of specialist organisms, from fish with antifreeze running through their veins to bears capable of fasting for months. That’s all changing. An increasingly ice-free Arctic is opening a new frontier for life on Earth.
There are some windows into this warmer future: natural open-water hotspots that have always been present in the Arctic. Called polynyas, they are found in places where wind patterns and natural upwellings of warm water prevent ice from forming. The archetypal polynya is the North Water in northern Baffin Bay, says Fortier, “perhaps the most productive ecosystem beyond the Arctic Circle where marine mammals - including large whales - and humans have congregated for centuries”.
Relatively few creatures have evolved to survive at Arctic temperatures, so the fate of entire food chains can pivot on a few species. Shift things slightly in time, space or volume and everything can tip.
Timing is a particular concern. Climate change means the sun is reaching into Arctic waters earlier. Ice that only formed the previous winter lets light through more readily than a 10-metre-thick floe that has been building for several years. This means the annual cycle of life can kick off earlier, creating a problem for large species like whales, whose migrations have evolved to coincide with the historical onset of spring.
In the Amundsen Gulf of north-west Canada, nutrient upwellings have become a recurrent feature since 2002, boosting local biodiversity, says Fortier. In June 2008, the nutrients triggered a phytoplankton bloom. In just three weeks, local primary productivity shot up to more than twice the annual amount. In the Beaufort Sea to the north of Alaska, the biomass of ice algae - which cling to the underside of ice floes and occupy the lowest link of the food chain - was more than three times that reported in 35 years of seasonal observations (Climatic Change, doi.org/h7n).
Algal blooms are just the beginning. They feed tiny zooplankton, which provide vital energy supplies for organisms higher up the food chain - the polar bears that eat the seals, which eat the fish. Less ice, says George Hunt of the University of Washington in Seattle, could cause a cascade of changes to these food chains.
Some of the transformations have already taken place and appear to be here to stay. Connie Lovejoy, also at Laval University, took water samples from the Beaufort Sea between 2003 and 2010, and used DNA analysis to see what algae, plankton and bacteria it contained. The species composition was constant between 2003 and 2006, but in 2007, when summer ice cover was abnormally low, photosynthetic organisms suddenly seemed to take over. Although the following years saw more summer ice cover, the community never reverted to its initial make-up (PLoS One, doi.org/cfrj9n).
Elsewhere, species are moving in from further south. The polar cod once dominated the Hudson Bay and Beaufort Sea, but capelin and sand lance are now making appearances. Pacific salmon are also moving into the Arctic Basin, says Fortier. For now, the local polar cod and Arctic charr seem unaffected. But that could change if further warming brings in more competitive generalists, which can thrive in a wide range of environments. They might be able to outcompete the wildlife that has taken millennia to adapt to the unique conditions of the Arctic.
In the short run, the top of the world looks set to bloom, at least in parts. Some will profit: industrial fisheries are already keen to move in. But at what cost? Fortier hesitantly predicts that Pacific plankton and fish will dominate by 2050 and many marine mammals and birds could be gone entirely by the end of the century.
“It might take decades until we observe the final ‘new state’,” says Rolf Gradinger of the University of Alaska Fairbanks. “But once a tipping point has been reached, there might be no way back - although we’ll see oscillations around a new centre.”
For more on the Arctic’s record low in ice coverage, see “Arctic ice low heralds end of 3-million-year cover”
Future not all bad for bears
Thin ice could be good news for some species, at least initially. It’s a habitat in which seals thrive. “They want to slip up on an ice floe and slip back in,” says Peter Boveng of the National Marine Mammal Laboratory in Seattle. Thin ice that formed the previous winter also crumples more easily than thick multi-year floes, forming ridges and gaps that offer breathing holes.
That makes this first-year ice attractive to the animal at the top of the Arctic food chain: the polar bear. Some areas have seen an increase in Ursus maritimus, says David Barber of the University of Manitoba in Winnipeg, Canada. Polar bears typically move to thicker ice during the autumn but can move onto land to den as well.
In the short run, then, some polar bear populations could benefit from thinner ice. Others won’t. Melissa McKinney and Robert Letcher of Environment Canada in Ottawa studied polar bears near Hudson Bay and found they ate different seal species in years when the sea ice broke up early. That could be a problem: some species live in water with high levels of PCBs, flame retardants and other toxic chemicals from fertiliser runoff. The contaminants move up the food chain, but the consequences are unclear as yet.
Although the changes might bring benefits in the short term, the long-term picture is very different. Seals could lose breeding ground if the ice pulls away from the shore before they can give birth, for example. And more open water means polar bears need to spend more time swimming than they can afford. As the open-water season lengthens, so will their fasting time. Other species from further south - such as brown bears - may stand a better chance.
Bonobo apes early humans by creating stone tools
A bonobo shows he is capable of making stone tools for a particular purpose, in a similar way to early humans. Scientists at the University of Haifa, Israel, sealed food inside logs for bonobo Kanzi, who created tools by knapping flints and used them to chop, drill and scrape – managing to get food out of 24 logs.
Using tablets and customized keyboards, bonobos can become great communicators
[Top: Two-year-old Teco, shown with the author (left) and researcher Susannah Maisel, uses a simplified 25-lexigram app. His first lexigram was grape. Left: Kanzi, a 31-year-old bonobo, can converse with humans by selecting “lexigram” symbols on his Motorola Xoom tablet. Right: When Kanzi presses a lexigram on the touch screen, the computer speaks the word and shows a corresponding picture.]
Have you ever watched a toddler play with an iPhone?
Most likely, the child was completely captivated and surprisingly adept at manipulating the tiny icons. Two-year-old Teco is no different. Sitting with his Motorola Xoom tablet, he’s rapt, his dark eyes fixed on the images, fingers pecking away at the touch screen. He can’t speak, but with the aid of the tablet app I created for him, he’s building a vocabulary that will likely total several thousand words. What’s more, he’ll be able to string those words together into simple sentences and ask questions, tell jokes, and carry on conversations.
Such talents wouldn’t seem exceptional in a human child, but Teco is an ape — a bonobo, to be precise. To the uninitiated, bonobos look very much like chimpanzees, but they are in fact a separate species with distinct physical and behavioral traits. More collaborative and sociable than their chimp cousins, bonobos also seem to be more adept at learning human language. And they are endangered, found in the wild only in the Democratic Republic of the Congo. Recent estimates put the wild bonobo population at between 10 000 and 50 000. Fewer than 150 live in captivity. Along with the chimpanzee, they are our species’ closest relatives.
For more than three decades, researchers have been working with a small group of bonobos, including Teco, to explore their amazing cognitive and linguistic abilities. Teco’s father, Kanzi, is the group’s most famous member: Anderson Cooper has interviewed him, and he’s played piano with Paul McCartney and Peter Gabriel. Animal lovers worldwide have marveled at his ability to communicate by pointing to abstract symbols. He recognizes nearly 500 of these “lexigrams,” which he uses to make requests, answer questions, and compose short sentences. The spoken words he understands number in the thousands.
Even so, many people question these abilities. Indeed, for more than a century scientists have debated whether apes could ever truly comprehend human language. Many researchers argue that language is the exclusive domain of humans, and several influential studies in the 1980s concluded that supposedly “talking” apes were merely demonstrating their capacity for imitation, with lots of unintentional cuing by the animals’ handlers. Linguist Noam Chomsky has likewise argued that the human brain contains a species-specific “language acquisition device,” which allows humans, and only humans, to acquire language.
But the bonobo research I’ve been involved with, led by primatologist Sue Savage-Rumbaugh at the Bonobo Hope Great Ape Trust Sanctuary, in Des Moines, strongly suggests otherwise. Today, the wide availability of touch screens, tablet computers, digital recording, and wireless networking is giving researchers the world over powerful new ways to study and unambiguously document ape communication. The results of these studies are in turn helping to spark a renaissance of technology-aided research into primate development and cognition and shedding light on the origins of culture, language, tools, and intelligence.
What a Chimp Teaches Us About Humans
“Project Nim,” a documentary film examining the story of Nim Chimpsky, a chimpanzee who learned to communicate with people using sign language, reveals more about people than other primates.
Project Nim, the latest film from Oscar-winning Man on Wire director James Marsh, is a cautionary tale about scientific hubris and overreach that plays like a Planet of the Apes prequel: the story of a chimp who learned to sign.
A major media story back in the late ’70s, the story of Nim Chimpsky began when he was taken from his mother at a primate research center in Oklahoma and given to a New York family to be raised as a human. The experiment was the brainchild of Herb Terrace, a Columbia University psychology professor, who felt if the simian could be taught sign language, he might be able to express his thoughts and feelings.
Unfortunately, Nim was initially left with the family of Stephanie LaFarge, a former student (and lover) of Terrace’s, who didn’t seem to see surrogate motherhood as a scientific project but preferred to raise Nim in a chaotic, countercultural atmosphere (where he was given alcohol and allowed puffs on a joint) without bothering to provide any journals or logbooks charting his progress. “We enjoyed letting him hang out and see how it went,” says LaFarge of the way she parented Nim. But, she adds, in one of several “D’oh!” moments scattered throughout the film, “I wasn’t prepared for the wild animal in him.”
So Terrace took Nim from LaFarge and moved him to a Columbia facility where he was taught and nurtured by a series of scientists and sign language experts. His signing began “exponentially increasing” (Nim eventually learned 125 signs), and after New York magazine published a cover story titled “First Message From the Planet of the Apes,” so did his fame.
But as Nim grew, he became dangerous. He started biting people, at one point almost ripping off the entire right cheek of one of his teachers, which led Laura-Ann Petitto, one of the chimp’s instructors, to note in the film that “you can’t give human nurturing to an animal that can kill you.”
From that point on, Project Nim moves from cautionary tale to animal horror story. Nim’s aggressiveness, which was also becoming sexual — he started humping the humans and a pet cat — forced Terrace to close the experiment. Nim was shipped back to the Oklahoma facility, where he had to learn to socialize with other chimps. Then the primate center, strapped for cash, sold Nim to an NYU center that tested vaccines on animals, where the director readily admitted that “there’s no way to carry out research on animals and for it to be humane.”
Nim ultimately found peace in his old age with others of his kind and died in 2000 at the age of 26. But Project Nim lingers in the mind for all sorts of reasons, none more important than the “playing God” aspects of the research and what seems to be a curious case of cluelessness on the part of the people involved.
Whether or not it’s important to find out if animals can be taught to communicate like humans is a question the film refuses to answer. It’s obvious we communicate with animals already, as anyone who has ever had a pet can attest. But what’s the ultimate goal of this research? To create simians as intelligent as Cornelius and Dr. Zira? To discover the difference between human and animal cognition? Or is it simply a way to justify a large research grant? There’s a certain hubristic zeal to the enterprise that comes off as distasteful.
Further, knowing that Nim would grow up to be large and aggressive — in other words, a normal chimp — puts the experiment in another light. Terrace admits “no one keeps a chimp for more than five years,” so treating him like a human, then dumping him when he’s of no further use, is not just insensitive, but it also looks like a sophisticated form of animal cruelty. Not that Terrace will cop to this — he ultimately describes Project Nim as a failure, noting blithely that “knowing words doesn’t mean you can string them together.”
You could also add that knowing how to teach an animal to sign doesn’t necessarily mean you’re going to treat him humanely. In Planet of the Apes, head simian scientist Dr. Zaius declares that “to suggest that we can learn anything about the simian nature from a study of man is sheer nonsense.” But Project Nim suggests that by studying chimps, we might learn more about human nature than we really want to know.
HOW PROMETHEUS GOT ITS ATMOSPHERE
BY: SUSAN KARLIN
Having trouble figuring out the best off-world atmosphere to host your invading humans and indigenous aliens? Talk to the hand—Kevin Hand, Ridley Scott’s and James Cameron’s go-to astrobiologist.
Long before Prometheus launched, director Ridley Scott stopped by NASA’s Jet Propulsion Laboratory in Pasadena, CA to talk a little exobiology.
Opening June 8 in the U.S., the prequel to Scott’s 1979 Alien chronicles an ill-fated exploration team that travels to a distant planet in search of humankind’s origin. To ground the plot in scientific plausibility, Scott turned to Kevin Hand, JPL’s deputy chief scientist for solar system exploration, to explain the kind of terrain, atmosphere, or ecosystem astronauts might encounter on a planet outside of our solar system.
“I met with Ridley and his creative team early in the process to see how science could be utilized in plotlines,” says Hand. “They had lots of questions about what it takes for humans to travel to distant worlds, how those worlds might be uninhabitable for humans, the constraints to consider when thinking about alien life, and how it might have adapted to that environment. It became a creative brainstorming session where we bounced ideas and questions off one another. My goal was to help them get the science right while maintaining a plot that tells a compelling story.”
Among the issues discussed were ways in which a localized portion of a human-friendly atmosphere might suddenly turn toxic. Hand cited volcanic outpouring of toxic gases. Scott also asked him for scientifically justifiable reasons a building interior could contain enough oxygen for astronauts to remove their helmets when the exterior atmosphere did not. Hand suggested water electrolysis (electricity splitting water into hydrogen and oxygen), and radioactive decay, which could also split water molecules.
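Both mechanisms Hand suggested come down to splitting water; the net reaction, in standard notation, is the same whether the energy comes from an electric current (electrolysis) or from ionising radiation released by radioactive decay (radiolysis):

```latex
% Net water-splitting reaction: liquid water yields hydrogen and oxygen gas.
\[ 2\,\mathrm{H_2O(l)} \;\longrightarrow\; 2\,\mathrm{H_2(g)} + \mathrm{O_2(g)} \]
```

Run inside a sealed structure, either route would slowly enrich the interior with free oxygen, which is presumably the scientifically justifiable excuse Scott was looking for.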
“I haven’t seen the final film, so I don’t know what they incorporated,” says Hand. “Directors like James Cameron and Ridley Scott are dedicated to getting the science as right as possible, as part of the dialogue and conversation. At the same time you have to know when to let go of the science and allow things to be a fun, action-packed adventure. No one wants to listen to characters explaining five minutes of scientific concept.”
Hand is an established liaison to Hollywood. In addition to consulting on another Scott film in development, he’s also worked with James Cameron. Hand consulted on and appeared in Cameron’s 2005 IMAX documentary Aliens of the Deep, consulted on Avatar, and was among the scientists involved in his March expedition to the Pacific Ocean’s Challenger Deep, the deepest point on Earth. Hand, who holds a Ph.D. in geological and environmental sciences from Stanford University, has expertise in planetary science and astrobiology, particularly the subsurface oceans of Jupiter’s and Saturn’s respective moons, Europa and Enceladus.
Hollywood’s interest in scientific accuracy is a growing trend, driven in part by organizations such as Hollywood Health and Society and the Science and Entertainment Exchange that connect filmmakers and TV showrunners with scientists and physicians.
“We are becoming a more technological society, but I also think it makes for a better story if you get it right, because you appeal to the critical thinking skills of the audience,” says Hand. “It’s the difference between a mash-up bang-up science fiction film where people guffaw at everything that’s wrong scientifically, and a great science fiction film that tells an amazing story, while also expanding people’s understanding of science.”
It’s not only humans that can compose and perform music: a new installation created by Wilfried Stoll and a team from engineering firm Festo in Germany can compose a melody all by itself and perform it with five robotic string instruments.
The system writes music by listening to a musician play a tune on a xylophone or MIDI keyboard. By using rules derived from mathematician John Conway’s Game of Life, a computer creates a reinterpretation of the melody, which it breaks down into different parts for each instrument. The processed signal is then transmitted to the robotic strings. “The individual acoustic robots are interlinked in such a way that they can listen to each other,” write the team. “This constantly gives rise to new variations, which differ from the original theme while retaining the essence of the composition.”
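Festo has not published its actual mapping from melody to cellular automaton, so the encoding below (one grid column per beat, one row per pitch class) is an illustrative assumption; it only sketches how Game of Life rules can generate variations on a theme.

```python
from collections import Counter

def step(live):
    """Advance a set of live (x, y) cells by one Game of Life generation."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell lives next generation with exactly 3 neighbours,
    # or with 2 neighbours if it is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

def vary_melody(notes, generations=1, n_pitches=12):
    """Seed the grid from MIDI notes, evolve it, read back (beat, pitch) events."""
    live = {(beat, note % n_pitches) for beat, note in enumerate(notes)}
    for _ in range(generations):
        live = step(live)
    return sorted(live)

theme = [60, 62, 64, 65, 67]   # a rising MIDI phrase
print(vary_melody(theme))
```

Depending on the seed pattern, the automaton can thin a phrase out, thicken it, or kill it entirely; a real installation would need a mapping tuned so the "essence of the composition" survives, as the Festo team describes.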
Although each robotic instrument only has one string, they mimic the sound of two violins, a cello, a viola and a double bass. An electric actuator moves up and down the string to produce the right pitch, like a human musician’s left hand. The bow is replaced by a pneumatic cylinder that moves a hammer to vibrate the string.
The installation was designed to demonstrate how a manual system can be replaced by a network of autonomous robots. The team is looking at how to automate processes based on evolution theory to transform factories of the future.
Being a movie villain is not easy. Nobody respects your work, everyone loves your sworn enemy, and cheers if he straight up murders your ass.
Of course, the villains deserve it, right? Well, actually Hollywood is littered with supposedly evil characters that, when you take a step back and ignore the cackling laughter and yellow teeth, were clearly the ones getting screwed over. Here are the so-called bad guys who got the rawest deals of all:
Oh, come on. Sauron is like the archetypal evil overlord. He’s got massive armies of monsters. He has a flaming eyeball. He has a helmet made of spikes, people, come on. And, he did… you know, he did all of those… things. And…
Sauron, seen here evilly defending his home from an invading army.
Hold on a minute there:
And what exactly? Please tell us, because throughout the entire 2000-hour run of the Jackson trilogy, we couldn’t find a single reason why everyone demonized Sauron like he was a debt-collecting pedophile. Yes, he was building an army to advance on Middle Earth. But who was in that army? What were they fighting for?
This was a world where Orcs were used as target practice among elvish communities. The elves loved that shit. Sauron put a stop to that by offering all the underprivileged creatures a place in his non-race-exclusive army (the only nonsegregated force in Middle Earth other than the Fellowship), with promises of their own country in the future. After what he did for the orcs and the goblins, Sauron was just some towering, mace-wielding folk hero.
“Let freedom ring! Also, let’s eat some man-flesh.”
Of course the humans and elves couldn’t have that, because if orcs moved in next door to them, their houses’ property values would go down. After all, these creatures are dark and smelly and have weird voices. They must be murdered on sight.
We hear a lot about freedom, and the free peoples of Middle Earth standing up to Mordor. What do we mean by “free?” They’re certainly not fighting for Democracy — each kingdom is a monarchy where the people have no say over what the leader does as long as that leader possesses the right genes. And overwhelmingly it seems like what those leaders like to do is shit on the Orcs, and the countless other minorities who Sauron was able to recruit onto his side.
What you were seeing in these films was not an unprovoked act of aggression, undertaken just for the hell of it. You were seeing generations of pent-up frustration by oppressed minorities, harnessed by a leader they could get behind. What Sauron did was nothing more than try to cut out a piece of that Middle Earth dream for himself and his followers, and find land that doesn’t require them to live under a continuously erupting volcano.
On the plus side, it isn’t Oklahoma.
His methods were violent and there were excesses — as you see in every revolution. But if Middle Earth doesn’t take a moment to understand why Sauron was able to draw tens of thousands of disenfranchised individuals to his cause, then they’re destined to fight the same war all over again, as soon as the next Sauron shows up.
Humans speak 7000 different tongues—and not just to be difficult.
Everything from genes to jungles has played a part
ALONGSIDE almost every creation myth about the origin of the Earth or the genesis of humankind, you’ll find another story about the diversity of language. In the Old Testament, “confounding the one language” is God’s punishment on humans for building the Tower of Babel. In Greek mythology, Hermes divides language to spite his father Zeus. The Wa-Sania people of east Africa put it down to a jabbering madness brought on by famine, while the Iroquois story tells of a god who directed his people to disperse across the world.
Creation myths are just myths, of course, but the question of linguistic diversity is a genuine problem. If Americans and the British are two peoples divided by a common language, then the whole world is one united by the mutual incomprehension of nearly 7000.
Language is perhaps the defining feature of our species, and yet also the most divisive. Why is it that we communicate in so many different ways?
Science has come only so far in addressing this question. Over the past century, the existence of different languages has been explained by a process not unlike the Iroquois origin myth. Isolated societies adapt existing words and phrases and coin new ones and, over time, the changes accumulate to the point where their language is no longer intelligible to outsiders.
This process of cultural evolution is similar to biological speciation, where two populations of the same species become separated from one another and diverge until they can no longer interbreed. But while biologists have used evolutionary theory to explain the variety of life, linguists have been slow to explain the staggering diversity of human languages.
Why, for example, does Latin have complex grammar while its daughters, the modern Romance languages, follow simpler rules? How come some languages, such as Mandarin, are tonal, so that the pitch of a word changes its meaning? And why does linguistic diversity mirror biodiversity, with more around the equator than in the temperate regions?
Thanks to a spate of recent studies, we can now start to answer these questions. In the same way that species are adapted to fit certain habitats, languages evolve to suit the particular needs of their speakers. Everything from a population’s genetic and social make-up to the climate and plant cover of the place they live seems to exert an influence. Understand these factors, and we might be able to predict how the world’s languages will change in the face of a globalising world.
It’s no wonder that linguistic diversity fascinates us—the enormous variation in our languages is one of humanity’s oddest characteristics. If you take a chimp born in London Zoo and place it back in its African homeland, it will have little trouble communicating. That’s because all chimps share a small repertoire of grunts, barks and hoots. Humans need to be more flexible. Our brains can handle a huge range of abstract concepts, so we have evolved an open-ended form of communication to express our thoughts. It is built from a set of discrete sounds, called phonemes, which we string together in elaborate combinations to form words and sentences, structured by the rules we call grammar. Each language is a unique combination of these elements. “We are capable of effectively infinite variety,” says Mark Pagel at the University of Reading, UK.
This flexibility is one of the drivers of linguistic diversity. It opens the door to cultural evolution, which can quickly drive a wedge through a language. Following a split, it takes as little as 500 years for one language to diverge into two. Pagel and colleagues have found that many of the changes occur immediately after the split, perhaps because people invent new ways of speaking to assert their group identity (Science, vol 319, p 588).
The cultural wedge may also explain why languages, like living organisms, proliferate in the tropics. Around 60 per cent of the world’s nearly 7000 languages are found in two areas coinciding almost exactly with the two great belts of equatorial forest, one in
Africa and the other across southern Asia and the Pacific.
The richest place of all is Papua New Guinea, home to 1 in 7 of the world’s languages. One explanation is that a climate that favours biodiversity also makes it easier for people in small splinter groups to grow food and survive on their own (Journal of Anthropological Archaeology, vol 17, p 354). Equatorial regions also tend to have a higher incidence of infectious disease, which might lead groups to isolate themselves from others (Oikos, vol 117).
Over the millennia, cultural evolution has carved out thousands of mutually unintelligible tongues, most of which are now extinct.
Pagel has estimated that half a million languages may have lived and died since modern humans first evolved.
Few researchers have been interested in explaining their differences, however. That’s partly due to Noam Chomsky’s influential theory of universal grammar, which stated that, despite
their superficial differences, all languages follow the same set of basic rules. With this in mind, most researchers focused on similarities rather than differences, says Gary Lupyan at the
University of Wisconsin-Madison. “It wasn’t considered crucial to look at language diversity,” he says. But as universal grammar has fallen out of favour (New Scientist, 29 May 2010, p 32), linguists are becoming more interested in the forces that push languages apart.
Tracking humankind’s first movements out of Africa seems a good place to start. Quentin Atkinson at the University of Auckland, New Zealand, was inspired by the “serial founder effect”, which explains why human genetic diversity declines as you get further away from
Africa. Bands of migrating humans took only a subset of genes from the gene pool in their place of origin, reducing genetic diversity as they migrated further and further away.
He suspected migration might have whittled down language in a similar way. As groups splintered off the ancestral population in Africa, they may have left behind some of the lesser-used phonemes, which were perhaps only spoken in minority dialects. Each subsequent
migration from the splinter group would have further diminished the repertoire.
An analysis of 504 languages offers some evidence in support. Atkinson found the highest phoneme diversity in Africa and the lowest in South America and Oceania. Taa, spoken in Botswana, uses about 110 phonemes whereas the Papuan language Rotokas has just 11
(English uses about 50). Atkinson concluded that the serial founder effect accounts for about 30 per cent of the variation in the phoneme content of the world’s languages (Science, vol 332, p 346).
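Atkinson’s serial founder effect is simple enough to sketch as a toy simulation (this code is not part of his study; the retention probability, step count and seed are arbitrary assumptions, chosen only to show the direction of the effect):

```python
import random

def founder_series(n_phonemes=110, retain=0.9, steps=10, seed=1):
    """Toy serial founder effect: each migrating splinter group
    carries only a random subset of its parent group's phonemes."""
    rng = random.Random(seed)
    inventory = set(range(n_phonemes))   # start with a Taa-sized repertoire
    sizes = [len(inventory)]
    for _ in range(steps):
        # each phoneme survives a given migration with probability `retain`
        inventory = {p for p in inventory if rng.random() < retain}
        sizes.append(len(inventory))
    return sizes

sizes = founder_series()
# the inventory can only shrink with each migration step
```

Run over ten successive migrations, the repertoire only ever shrinks, echoing the gradient from Taa’s roughly 110 phonemes down to Rotokas’s 11.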
What might explain the other 70 per cent? Since the 11 phonemes of Rotokas can convey just as much meaning as the 110 of Taa, it’s clear that we don’t need a huge inventory of sounds to make ourselves understood. This redundancy creates a lot of room for random shifts. Each language could add or lose phonemes without reducing its usefulness, building linguistic diversity over time in much the same way that genetic drift amplifies differences between isolated populations.
The result is a huge amount of random variation that might mask other more systematic changes. Perhaps that can explain why it took so long for researchers to consider another important factor: the challenges of conversing in difficult surroundings.
Robert L. Munroe, an anthropologist at Pitzer College in Claremont, California, first began to ponder this possibility during field trips to Belize, Kenya and American Samoa. He noticed that languages in these tropical places tend to separate their consonants with vowels—they barely have any words like “linguistics”, for instance, with its bunches of consonants rubbing shoulders. Since vowels are easier to hear at a distance than most consonants, Munroe began to suspect that people in warmer countries use sounds that help them communicate outdoors. In contrast, people in chillier climates might be more likely to talk indoors, so it’s not as important to use sounds that carry.
Subsequent studies by Munroe and his colleagues have confirmed that people in warmer climates do tend to use more vowels. Think of the distinctive rhythm of Italian, with its evenly spaced vowels and consonants—spaghetti, tortellini, Pavarotti—not found in northern European languages. Climate seems to influence the consonants we use too. Nasal sounds like “n” and “m” are more common in warm regions, while “obstruents” like “t”, “g” and the Scottish “och” sound are more common in cooler ones.
What’s more, studies by Carol and Melvin Ember at Yale University have found that these effects are less pronounced in areas with dense vegetation. Foliage standing between you and another speaker makes it more difficult to communicate at a distance, so sonorous
sounds are less useful. Conversely, a certain amount of tree cover can take the chill out of a wind in a colder region, so people in these areas might spend more time outside than they would on a frigid plain—and their language adapts accordingly (American
Anthropologist, vol 109, p 180).
Another influence on language diversity may be hiding in our genes.
Dan Dediu at the Max Planck Institute for Psycholinguistics in Nijmegen, the Netherlands, and Robert Ladd at the University of Edinburgh, UK, have found that certain variants of two genes associated with brain development are more common in places where people speak tonal languages, including China, south-east Asia and sub-Saharan Africa. It is not known if these gene variants are involved in language, but Dediu doubts that it is a coincidence. He has created a mathematical model showing that if the genes help people differentiate between pitches, in areas where they are common they will push language towards a tonal system (Human Biology, vol 83, p 279). This model is by no means proof that genes influence language, but it suggests the idea is worth pursuing.
Even more puzzling than the differences in sounds is why different languages have such vastly divergent grammars.
Consider the sentence “I walked the dog”. English changes the ending of the verb “to walk” to signal that the event happened in the past.
In Mandarin, the verb doesn’t change—if the timing isn’t obvious a word is simply added to make it clear. Speakers of the Peruvian language Yagua, on the other hand, must choose one of five verb endings depending on whether the walk happened hours, days, weeks, months or years ago.
Such diversity is mystifying until you look at who speaks the language, says Lupyan. In an analysis of more than 2000 languages, he found that complex grammars are more common in small languages whose speakers have little contact with outsiders. Those with simpler rules—such as English and Mandarin—tend to be spoken by larger populations that have contact with lots of other societies (PLoS One, vol 5, p e8559). The crucial factor is that far more people learn these widespread languages as adults than learn the more insular ones, and this seems to influence the complexity of the grammar.
Lupyan points out that adults find it difficult to master intricate or irregular rules so they tend to simplify when they learn a language. Children, in contrast, seem to favour complexity, as the
additional linguistic cues help clarify the sentence’s meaning.
Lupyan’s latest computer simulations suggest that grammar is swayed by the need to balance these competing demands. Pidgins and creoles, which emerge when groups of people who don’t share a common language are forced to work together, would seem to reinforce this argument -
both tend to use simpler grammars than you would find in other languages.
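Lupyan’s adult-learner effect can be caricatured in a few lines of code (a hypothetical model, not one from his papers; every parameter here is an invented assumption, included only to show the direction of the effect):

```python
import random

def iterated_learning(irregulars=100, adult_fraction=0.5,
                      generations=10, p_regularise=0.3, seed=2):
    """Toy model: each generation, adult learners fail to master some
    irregular forms and pass on regularised versions instead."""
    rng = random.Random(seed)
    for _ in range(generations):
        survivors = 0
        for _ in range(irregulars):
            # an irregular form is lost when it meets an adult learner
            # who simplifies it away
            if not (rng.random() < adult_fraction
                    and rng.random() < p_regularise):
                survivors += 1
        irregulars = survivors
    return irregulars

# the more adult learners, the fewer irregular forms survive
```

A population with many adult learners ends up with far fewer irregular forms after ten generations than one learned mostly by children, in the spirit of English shedding rules that its sister Germanic languages kept.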
It’s not hard to imagine how this may have shaped the linguistic past. As the Romans civilised the ancient world they also spread their language. Latin has complex rules in which a noun’s ending changes in one of six ways depending on its role in the sentence. As adults in the provinces began to learn the lingo, they simplified it into vulgar forms that eventually became Italian, Spanish, French and other languages—each of which lacks some of Latin’s
complexities. English tells a similar story. Successive waves of invasion brought in huge numbers of immigrants who would have had to converse with their new neighbours. “They were forced to become bilingual,” says Lupyan, which may explain why English is missing
many of the rules you see in its sister Germanic languages.
Lupyan has also studied recent language change, analysing Google’s archive of literature to compare American and British English. He found that Americans seem to use more regular forms of words, which would be easier for an adult to learn. This fits with his hypothesis, since America’s historically high rate of immigration means a greater proportion of second-language learners.
Other linguists are cautiously welcoming of Lupyan’s ideas. “It’s definitely plausible,” says Stephen Levinson, also at the Max Planck Institute for Psycholinguistics.
The recent findings may be just a taster of what’s to come. Having established that the differences between languages aren’t arbitrary, the hunt is now on for more laws that dictate their evolution. “We have got an interesting few years ahead of us,” says Atkinson.
With an increased understanding of language evolution, linguists may be able to answer a harder question: what will languages sound like in the future? English, in particular, is being pulled in many different directions (New Scientist, 29 March 2008, p 28). “With
exposure to the common media, you might expect differences to diminish, but they’re not going away, since we use language to confirm our social identity,” says Lupyan. For this reason, he
foresees widening gulfs between British, American and Australian English.
Sadly, many smaller languages won’t be able to assert their independence in this way. “Mass extinction is the future,” says Pagel. Around half of the world’s languages are in danger and the
majority haven’t even been documented yet. Once they’re gone, their intricacies will be lost forever. The need to study and explain the confusion of the tongues has never been more urgent.
David Robson is a feature editor at New Scientist
Struggling to make your mind up? Interpret your gut instincts to help you make the right choice
DECISION-MAKING was supposed to have been cracked by science long ago. It started in 1654 with an exchange of letters between two eminent French mathematicians, Blaise Pascal and Pierre Fermat. Their insights into games of chance formed the foundation of probability theory. And in the 20th century the ideas were developed into decision theory, an elegant formulation beloved of economists and social scientists today. Decision theory sees humans as “rational optimisers”. Given a choice, we weigh up each option, considering its value and probability, and then choose the one with the “highest expected utility”.
With your experience of making decisions, you have probably noticed some flaws here. There’s the risible idea that humans are rational, and the dubious notion that we would be capable of the on-the-hoof calculations of probability, even if we could access all the necessary information. Decision theory explains how we would make choices if we were logical computers or all-knowing beings. But we’re not. We are just rather clever apes with a brain shaped by natural selection to see us through this messy world.
Decision researchers had largely ignored this inconvenient reality, occasionally patching up their theory when experiments revealed exceptions to their rules. But that make-do-and-mend approach may soon change. Earlier this year, an independent institute called the Ernst Strüngmann Forum assembled a group of big-thinking scientists in Frankfurt, Germany, to consider whether we should abandon the old, idealistic decision theory and start afresh with a new, realistic one based on evolutionary principles. The week-long workshop provided a fascinating exploration of the forces that actually shape our decisions: innate biases, emotions, expectations, misconceptions, conformity and other all-too-human factors. While our decision-making may seem inconsistent or occasionally downright perverse, the truly intriguing thing is just how often these seemingly irrational forces help us make the right choice.
We must start by acknowledging that many of our choices are not consciously calculated. Each day we may face between 2500 and 10,000 decisions, ranging from minor concerns about what brand of coffee to drink to the question of who we should marry, and many of these are made in the uncharted depths of the subconscious mind. Indeed, Ap Dijksterhuis at the Radboud University Nijmegen in the Netherlands and colleagues have found that our subconscious thinking is particularly astute when we are faced with difficult choices such as which house to buy or deciding between two cars with many different features (Science, vol 311, p 1005).
What drives these gut feelings? Being inaccessible to conscious examination, the processes are particularly difficult to fathom. One idea is that they are based on heuristics - mental rules of thumb which, applied in appropriate situations, allow us to make fast decisions with minimal cognitive effort. The “recognition heuristic”, for example, will direct you to choose a familiar option where there is very little information to go on. The “satisficing heuristic”, meanwhile, tells you to pick the first option that meets or exceeds your expectations, when delaying a choice for too long is not in your interests.
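The satisficing heuristic, for instance, can be written down almost verbatim (a minimal sketch; the options, scores and aspiration level are invented for illustration):

```python
def satisfice(options, aspiration):
    """Satisficing heuristic: take the first option whose score meets
    or exceeds the aspiration level, rather than scanning them all."""
    for name, score in options:
        if score >= aspiration:
            return name
    return None  # nothing good enough came along

# hypothetical flat-hunting example: options arrive in viewing order
flats = [("flat A", 5), ("flat B", 8), ("flat C", 9)]
choice = satisfice(flats, aspiration=7)  # settles on "flat B"
```

Note that the heuristic commits to “flat B” without ever inspecting the objectively better “flat C”: fast and cognitively cheap, at the price of possibly missing the best option.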
Heuristics are shaped by previously successful choices - either hard-wired by evolution or learned through trial and error - so it’s no wonder they tend to work. Peter Todd from Indiana University, Bloomington, has shown, for example, that satisficing is a sound basis for choosing a romantic partner (New Scientist, 4 September 1999, p 32). The recognition heuristic, meanwhile, may underpin some of your better guesses in multiple choice quizzes. However, some critics doubt whether our subconscious choices really are based on heuristics; they argue that this approach to decision-making would be neither fast nor cognitively simple since we would need a complex mental mechanism to select the correct heuristic to use.
Our emotions may instead be the driving force in subconscious decision-making. We now know that far from being the antithesis of rationality, emotions are actually evolution’s satnav, directing us towards choices that have survival benefits. Anger can motivate us to punish a transgressor, for instance, which might help us to maintain social order and group cohesion. So says Peter Hammerstein from Humboldt University of Berlin, Germany, who helped organise the workshop. Disgust, meanwhile, makes us fastidious and moralistic, which should prompt choices that help us avoid disease and shun people who don’t play by the rules. And while fear often seems to lead to overreactions, this makes sense when you consider the dangers facing prehistoric humans, says Daniel Nettle from Newcastle University, UK. On that one occasion where a rustle in the bushes really was made by a predator, the less neurotic peers of our ancestors would have paid the ultimate price, failing to pass their laid-back genes on to the next generation (Personality and Social Psychology Review, vol 10, p 47).
Heuristics and emotions help us subconsciously focus on what matters. This is just as important when we make conscious decisions. Even the most basic everyday situations are too complex for our brains to compute all the necessary information. Instead, we must simplify.
Gordon Brown at the University of Warwick, UK, argues that we rank alternatives based on cognitively easy, binary comparisons. For example, when deciding whether £2.20 is too much to pay for a cup of coffee, you might recall half a dozen occasions when you paid less and only two when you paid more, leading you to place this particular coffee in the “expensive” category, and choose not to buy it. This so-called “decision by sampling” approach simplifies the options, but it can also lead to bad decisions when the limited information used to rank alternatives is incorrect or based on false beliefs (Cognitive Psychology, vol 53, p 1). If, for instance, frequent nights out with boozy friends leads you to conclude that your alcohol consumption is in the top 20 per cent of drinkers, when in fact it falls in the top 1 per cent, you are more likely to decide to ignore the problem. Decision by sampling could even sway your choice when you face more immediate threats: people living in a society with high mortality rates are more likely to decide to put themselves at risk than people who have had little experience of danger.
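Brown’s decision-by-sampling idea reduces to counting binary comparisons against a handful of remembered values, as this sketch shows (the prices follow the coffee example above; the function and variable names are inventions for illustration):

```python
def decision_by_sampling(target, sampled_memories):
    """Rank a value purely by binary comparisons against a small
    sample of remembered values, as in decision by sampling."""
    cheaper = sum(1 for m in sampled_memories if m < target)
    return cheaper / len(sampled_memories)   # relative rank, 0 to 1

# six remembered coffees cheaper than 2.20 pounds, two dearer
remembered = [1.80, 1.90, 2.00, 2.00, 2.10, 2.15, 2.40, 2.60]
rank = decision_by_sampling(2.20, remembered)  # 0.75: feels "expensive"
```

The ranking is only as good as the sample: feed the same function a biased set of memories, such as prices from boozy nights out, and the same coffee lands in a different category.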
That’s not very heartening, but Alex Kacelnik at the University of Oxford takes a more optimistic view of our ability to pick and choose the information upon which we base our decisions. “Natural selection allows us to correct our behaviour to do what works,” he says. Kacelnik believes the main force influencing decision-making is reinforcement learning. In other words, we learn from experience and favour what has worked in the past. Nothing controversial there. But, he notes, we are also swayed by our changing internal states - things like hunger, thirst and libido - so that choices are tailored to our needs. Decision theory has long struggled with the problem that people are inconsistent (see “The logic of inconsistency”), but Kacelnik argues that apparent inconsistencies in choice can arise simply because our preferences change according to our needs. “Utility is a moving target,” he says. We may not show the “economic rationality” of traditional decision theory but our choices have their own logic, which he calls “biological rationality”.
Natural selection can even explain our puzzling propensity to eschew choice altogether and simply follow the herd. Rob Boyd from the University of California, Los Angeles, pointed out at the workshop that we have evolved to learn from others because this is often a wise option. “In most situations it is way beyond an individual’s capacity to know the best thing to do,” he says. But we are good at recognising who to copy, says Laura Schulz of Massachusetts Institute of Technology who has found that even young children assess the expertise of their “teachers”. As a result, our conformist tendencies often lead to surprisingly good choices (New Scientist, 1 May 2010, p 40). They allow us to fit in when we start a new school or job and make wise purchases of the latest products without full information on the alternatives. The flip side is that we can also all fall into line with the immoral or illegal behaviour of those around us or be swayed by manipulative leaders.
Consideration of others is yet another aspect of human behaviour that flies in the face of decision theory. There are many situations in which a rational optimiser should not cooperate, since such actions can use up precious resources that we could use to better our own circumstances. From an evolutionary standpoint, it could be argued that some forms of apparent altruism help us to build alliances and improve our standing on the social ladder, but what about the times we overdo cooperation? An anonymous donation to charity, for example, will not boost your reputation or persuade others to help you in your hour of need. In purely evolutionary terms, it is a bad choice. But we do it anyway because the warm glow of altruism, which is evolution’s reward to team players, makes us feel good. In effect, we are tricked by a mental glitch. And it is not the only such glitch we possess. Researchers in decision theory have uncovered a variety of mental biases underlying some of our more illogical and arbitrary behaviours (see “Mental glitches that make fools of us”).
So what’s going on? Have our brains evolved to direct our behaviour in ways that have become maladaptive in the modern world? That should become clear as more decision researchers consider how we actually make up our minds, rather than how we should. Accepting that we are not rational optimisers will make life difficult for economists and anyone searching for a formula for choice, which is why some members of the Frankfurt group were reluctant to abandon decision theory altogether. But a better understanding of the forces that underpin our decisions should help everyone make better choices.
Conformists, for example, might be persuaded to adopt environmentally sustainable habits simply because others already have. Governments wanting us to save up for retirement need to understand why we are so bad at making long-term decisions. And we could all be more aware of the misconceptions and biases shaping our behaviour. The discovery of “decision fatigue”, for instance, which makes judges four times more likely to grant bail in the morning than in the afternoon, might persuade you to take more time out when facing a string of tough problems (Proceedings of the National Academy of Sciences, DOI: 10.1073/pnas.1018033108). And understanding that the behaviours of your nearest and dearest can bias your view of your own lifestyle might remind you to dig into the facts before you choose to follow or reject a new health regime.
Of all the choices that you face every day, the decision to try to make better decisions is surely the biggest no-brainer.
Mental glitches that make fools of us
The human brain does not compute options like a rational computer, yet our decisions are often effective. Nevertheless, some of our mental biases are hard to explain.
In novel situations, or ones where information is limited, we have the unfortunate habit of basing decisions on random connections. This so-called anchoring effect was first shown by Daniel Kahneman of Princeton University and the late Amos Tversky, and the consequences can be bemusing. One study found that people asked to write a high number subsequently bid more for items whose value was unknown than people who wrote down a low number.
Kahneman and Tversky also revealed our peculiar attitudes to risk. We tend to be more cautious than is logical when there is the possibility of making large gains or small losses. However, we choose unduly risky options when there is the chance of making a small gain or a large loss. In recent years, our inclination to undervalue rare but catastrophic events has been dubbed the Black Swan effect.
Another factor underpinning some bad decisions is the confirmation bias - our tendency to overemphasise anything that confirms what we already believe. Then there’s loss aversion: it feels worse to lose something than to gain the equivalent amount, making us protect what we have rather than take a chance to make a gain. Also, when choosing whether to continue with a venture we irrationally consider the investment we have already made in it - the sunk-cost fallacy. Meanwhile, our short-term bias - temporal discounting - means we tend to prefer smaller rewards now to bigger ones later.
The logic of inconsistency
If you prefer apples to plums, and plums to pears, then given the choice between apples and pears you will obviously pick apples. Or will you? In reality, people fail to show such logical behaviour. This kind of inconsistency, known as intransitivity, has been a headache for mathematicians trying to understand decision-making. But their mistake may have been to think of the human brain as a computer rather than a biological entity that must solve the problem of how to compare apples, pears and plums.
Admittedly, our understanding of what goes on in a brain when it makes a choice is very hazy, as became apparent at an Ernst Strüngmann Forum on decision-making in Frankfurt earlier this year. It is generally agreed that there must be a mental “common currency” for comparing options. What this is, or how it converts into apples, pears, or whatever, is a mystery. However, Nick Chater from the University of Warwick, UK, argues that because the brain lacks time and computing power, it evaluates only a limited number of attributes for each alternative. This process could explain intransitivity, according to cognitive psychologist Danny Oppenheimer of Princeton University.
He believes the brain uses a kind of voting system: different brain areas weigh up the various attributes of apples, pears and plums, say, and compete with each other to have their preference chosen. If there’s no clear winner, you might decide on any of the fruit, depending on which region happens to gain the upper hand at that moment (see diagram). Intransitivity could be a by-product of the way our brains work, rather than a trait we have evolved for its own advantage.
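This voting idea maps neatly onto a classic result from social-choice theory, the Condorcet paradox, as a short sketch shows (the attribute rankings are invented, and simple majority voting over pairs stands in for competing brain regions):

```python
def pairwise_winner(a, b, rankings):
    """Each 'module' votes for the option it ranks higher on its own
    attribute; a simple majority decides the pairwise choice."""
    votes_a = sum(1 for r in rankings if r.index(a) < r.index(b))
    return a if votes_a > len(rankings) / 2 else b

# three modules, each ranking the fruit on a different attribute
rankings = [
    ["apple", "plum", "pear"],   # e.g. sweetness
    ["plum", "pear", "apple"],   # e.g. juiciness
    ["pear", "apple", "plum"],   # e.g. crunch
]
pairwise_winner("apple", "plum", rankings)  # apple beats plum
pairwise_winner("plum", "pear", rankings)   # plum beats pear
pairwise_winner("pear", "apple", rankings)  # yet pear beats apple: a cycle
```

Each module is perfectly consistent on its own attribute, yet the majority verdict cycles: apple beats plum, plum beats pear, and pear beats apple. Intransitive choices fall out of the voting mechanism itself, with no irrationality required of any individual module.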
Kate Douglas is an editor at New Scientist
The fuzziness and weird logic of the way particles behave applies surprisingly well to how humans think
THE quantum world defies the rules of ordinary logic. Particles routinely occupy two or more places at the same time and don’t even have well-defined properties until they are measured. It’s all strange, yet true - quantum theory is the most accurate scientific theory ever tested and its mathematics is perfectly suited to the weirdness of the atomic world.
Yet that mathematics actually stands on its own, quite independent of the theory. Indeed, much of it was invented well before quantum theory even existed, notably by German mathematician David Hilbert. Now, it’s beginning to look as if it might apply to a lot more than just quantum physics, and quite possibly even to the way people think.
Human thinking, as many of us know, often fails to respect the principles of classical logic. We make systematic errors when reasoning with probabilities, for example. Physicist Diederik Aerts of the Free University of Brussels, Belgium, has shown that these errors actually make sense within a wider logic based on quantum mathematics. The same logic also seems to fit naturally with how people link concepts together, often on the basis of loose associations and blurred boundaries. That means search algorithms based on quantum logic could uncover meanings in masses of text more efficiently than classical algorithms.
It may sound preposterous to imagine that the mathematics of quantum theory has something to say about the nature of human thinking. This is not to say there is anything quantum going on in the brain, only that “quantum” mathematics really isn’t owned by physics at all, and turns out to be better than classical mathematics in capturing the fuzzy and flexible ways that humans use ideas. “People often follow a different way of thinking than the one dictated by classical logic,” says Aerts. “The mathematics of quantum theory turns out to describe this quite well.”
Humanity 101 of the Day: Vlogbrother Hank Green presents visiting aliens with a time-saving primer on humans.
What I want people to realize is, this isn’t a video about talking to aliens, it’s a video about understanding ourselves. We are stuck inside of humanity…it is the only way we have of experiencing things. Attempting to explain humanity to an imaginary, ignorant, intelligent being, actually sheds a great deal of light on the knowledge that we take for granted.