Terrifyingly plausible ways the human race could go extinct that never occurred to you.
The meteorite that exploded over southern Russia last month and the near-miss asteroid that passed by the same week have more than a few of us pondering humanity’s potential extinction. But to Oxford philosopher Nick Bostrom, these are mere trifles. Asteroids, global warming, or even a supervolcano that would take us 100,000 years to recover from “would be a brief pause on cosmic time scales,” Bostrom tells Aeon’s Ross Andersen, who recently profiled him and some of his peers at the university’s Future of Humanity Institute in an article that is worth reading in full.
These experts devote their lives to thinking about the risks that could not merely kill billions but put paid to humanity once and for all. And what really scares them is us—more specifically, what our primitive primate brains are capable of creating. Here’s what gives a human extinction theorist the heebie-jeebies, in increasing order of existential dread:
Nuclear Weapons
Nuclear weapons were the first human technology capable of wiping out the human race. It’s not the mushroom-cloud explosions or the radiation that would do it, but the resulting nuclear winter, which could throw up so much dust that the whole planet’s climate shifts and its crops fail. But cheer up: even that might not be enough to wipe out every last one of us, and even the risk of a Cold-War-style holocaust seems to be receding for the moment.
Biological Warfare—and Accidents
Scarier are biological weapons, especially once advances in DNA manipulation put DIY plagues within the reach of any apocalyptic cult with a bit of imagination. A disastrous biological event doesn’t even have to be intentional, as Andersen notes: “Imagine an Australian logging company sending synthetic bacteria into Brazil’s forests to gain an edge in the global timber market. The bacteria might mutate into a dominant strain, a strain that could ruin Earth’s entire soil ecology in a single stroke, forcing 7 billion humans to the oceans for food.” Note to hypothetical Aussie loggers: Don’t do that.
Artificial Intelligence
It seems so innocuous the way that software keeps getting better—fixing our typos, optimizing our high-frequency stock trades, beating us at Jeopardy and chess. But when an artificial intelligence (AI) becomes even a tiny bit smarter than the humans who created it, things could get catastrophically unpredictable very quickly.
Here’s Andersen again:
Bostrom told me that it’s best to think of an AI as a primordial force of nature, like a star system or a hurricane — something strong, but indifferent. If its goal is to win at chess, an AI is going to model chess moves, make predictions about their success, and select its actions accordingly. It’s going to be ruthless in achieving its goal, but within a limited domain: the chessboard. But if your AI is choosing its actions in a larger domain, like the physical world, you need to be very specific about the goals you give it.
Any safeguards you might think of carry their own unintended consequences. Design an AI with empathy? It might decide that you are happiest with a non-optional intravenous drip of heroin, and do whatever it takes to give it to you. Design an AI to answer questions and crave the reward of a button being pushed? It might figure out a way to dupe you into building a machine that presses the button at the AI’s command. Then, Bostrom explains, “It quickly converts a large fraction of Earth into machines that protect its button, while pressing it as many times per second as possible. After that it’s going to make a list of possible threats to future button presses, a list that humans would likely be at the top of.”
The scenarios basically end up with humans trying to keep a super-intelligent AI in a super-secure prison, and hoping that they’ve thought of everything. They will probably forget something.
The ‘Cosmic Omen’
The above threats may be a subset of a bigger problem. Russian physicist Konstantin Tsiolkovsky noticed in the 1930s that there just weren’t many aliens around. Enrico Fermi made a similar observation 20 years later, and the paradox that bears his name boils down to a single question: “Where are they?”
Robin Hanson, a research associate at the Future of Humanity Institute, “says there must be something about the universe, or about life itself, that stops planets from generating galaxy-colonizing civilizations.” Maybe interstellar travel is basically impossible. Or, more ominously, it may be that any sufficiently advanced civilization creates nukes, or biological weapons, or artificial intelligences that lead to its demise. Or it could be something that hasn’t occurred to us yet, or that we are not yet capable of conceiving. But the absence of aliens leads Hanson to a few troubling conclusions:
It could be that life itself is scarce, or it could be that microbes seldom stumble onto sexual reproduction. Single-celled organisms could be common in the universe, but Cambrian explosions rare. That, or maybe Tsiolkovsky misjudged human destiny. Maybe he underestimated the difficulty of interstellar travel. Or maybe technologically advanced civilisations choose not to expand into the galaxy, or do so invisibly, for reasons we do not yet understand. Or maybe, something more sinister is going on. Maybe quick extinction is the destiny of all intelligent life.
For this reason, Bostrom hopes that NASA’s Curiosity rover doesn’t find life on Mars. If life never got started there, then perhaps the reason we haven’t met advanced civilizations is that they never arose in the first place, not that they arose and then self-destructed. The thought that life on Earth is unique would be a lonely one, but for a philosopher of human extinction, it is a far more comforting one.
This robot is fly.
MIT’s Robust Robotics Group is showing off its nifty new robotic plane that navigates itself. Using only on-board sensors, the plane requires neither a pilot, a remote control, nor GPS. A basic Intel Atom processor powers the little self-guided plane that could.
MIT team leaders are hoping to eventually trick out the plane with independent mapping, according to Dvice.com. Then it could truly be an autonomous ‘bot that could fly into duty for military or rescue ops, the site explained.
While the technology has been used with slower craft such as helicopters, what separates this flying machine is its ability to maneuver with speed in tight spaces, according to the video (above).
“The reason that we switched from the helicopter to the fixed-wing vehicle is that the fixed-wing vehicle is a more complicated and interesting problem, but also that it has a much longer flight time,” Nick Roy, an associate professor of aeronautics and astronautics and head of the Robust Robotics Group, told MIT News. “The helicopter is working very hard just to keep itself in the air, and we wanted to be able to fly longer distances for longer periods of time.”
WATCH the plane buzz through an indoor parking lot with all the assuredness of a winged Mini. You can get all the technical deets as you go along for the ride.
From Ancient Greece to quantum mechanics, or what a Chinese room and a cat have to do with infinity.
The Paradox of the Tortoise and Achilles comes from Ancient Greece and explores motion as an illusion:
The Grandfather Paradox grapples with time travel:
The Chinese Room comes from the work of John Searle, originally published in 1980, and deals with artificial intelligence:
Hilbert’s paradox of the Grand Hotel, proposed by German mathematician David Hilbert, tackles the gargantuan issue of infinity:
The Twin Paradox, first explained by Paul Langevin in 1911, examines special relativity:
Schrödinger’s Cat, devised by Austrian physicist Erwin Schrödinger in 1935, is a quantum mechanics mind-bender:
A SUITE of artificial intelligence algorithms may become the ultimate chemistry set. Software can now quickly predict a property of molecules from their theoretical structure. Similar advances should allow chemists to design new molecules on computers instead of by lengthy trial-and-error.
Our physical understanding of the macroscopic world is so good that everything from bridges to aircraft can be designed and tested on a computer. There’s no need to make every possible design to figure out which ones work. Microscopic molecules are a different story. “Basically, we are still doing chemistry like Thomas Edison,” says Anatole von Lilienfeld of Argonne National Laboratory in Lemont, Illinois.
The chief enemy of computer-aided chemical design is the Schrödinger equation. In theory, this mathematical beast can be solved to give the probability that electrons in an atom or molecule will be in certain positions, giving rise to chemical and physical properties.
But because the equation grows rapidly in complexity as more electrons and nuclei are introduced, exact solutions exist only for the simplest systems: the hydrogen atom, composed of one electron and one proton, and the hydrogen molecular ion, which has one electron and two protons.
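For reference, the equation at the heart of the problem can be written schematically as below; this is the standard time-independent form, shown only to make clear why every extra electron makes the problem harder.

```latex
% Time-independent Schrödinger equation for a molecule (schematic form):
% the Hamiltonian \hat{H} acts on the many-electron wavefunction \Psi and
% returns the total energy E.
\[
  \hat{H}\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N) = E\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N)
\]
% Every additional electron adds another coordinate \mathbf{r}_i to \Psi,
% so the difficulty of solving for \Psi grows steeply with molecular size.
```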
This complexity rules out the possibility of exactly predicting the properties of large molecules that might be useful for engineering or medicine. “It’s out of the question to solve the Schrödinger equation to arbitrary precision for, say, aspirin,” says von Lilienfeld.
So he and his colleagues bypassed the fiendish equation entirely and turned instead to a computer-science technique.
Machine learning is already widely used to find patterns in large data sets with complicated underlying rules, including stock market analysis, ecology and Amazon’s personalised book recommendations. An algorithm is fed examples (other shoppers who bought the book you’re looking at, for instance) and the computer uses them to predict an outcome (other books you might like). “In the same way, we learn from molecules and use them as previous examples to predict properties of new molecules,” says von Lilienfeld.
His team focused on a basic property: the energy tied up in all the bonds holding a molecule together, the atomisation energy. The team built a database of 7165 molecules with known atomisation energies and structures. The computer used 1000 of these to identify structural features that could predict the atomisation energies.
When the researchers tested the resulting algorithm on the remaining 6165 molecules, it produced atomisation energies within 1 per cent of the true value. That is comparable to the accuracy of mathematical approximations of the Schrödinger equation, which work but take longer to calculate as molecules get bigger (Physical Review Letters, DOI: 10.1103/PhysRevLett.108.058301).
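To make that train-then-test workflow concrete, here is a minimal sketch in Python using kernel ridge regression on synthetic stand-in data. The descriptors, model settings, and generated numbers are illustrative assumptions, not the team’s actual code or dataset.

```python
# Minimal sketch of the train-then-predict workflow described above.
# Descriptors, model choice, and data are illustrative assumptions.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Stand-in for a database of 7165 molecules: each row is a fixed-length
# numerical descriptor of a molecule's structure, each target an
# atomisation energy (both generated synthetically here).
n_molecules, n_features = 7165, 50
X = rng.normal(size=(n_molecules, n_features))
y = X @ rng.normal(size=n_features) + 0.1 * rng.normal(size=n_molecules)

# Train on 1000 molecules, as in the study, and predict the remaining 6165.
X_train, y_train = X[:1000], y[:1000]
X_test, y_test = X[1000:], y[1000:]

model = KernelRidge(kernel="laplacian", alpha=1e-6, gamma=1e-3)
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print("mean absolute error:", mean_absolute_error(y_test, predictions))
```

Prediction for a new molecule then costs only a kernel evaluation against the 1000 training examples, which is why it runs in milliseconds rather than the hours needed to approximate the Schrödinger equation directly.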
The algorithm found solutions in a millisecond that would take these earlier methods an hour. “Instead of having to wait years to screen lots of new molecules, you might have to wait weeks or a month,” says Mark Tuckerman of New York University, who was not involved in the new work.
The algorithm is still mainly a proof of principle. If it can learn to predict something else, such as how well a molecule binds to an enzyme, it could help with designing drugs, fuel cells, batteries or biosensors. “The applications can be as broad as chemistry,” von Lilienfeld says.
- Failure is not an option — it comes bundled with Windows.
- Artificial Intelligence usually beats natural stupidity.
- To err is human … to really foul up requires the root password.
- Like car accidents, most hardware problems are due to driver error.
- If at first you don’t succeed; call it version 1.0.
- Why do we want intelligent terminals when there are so many stupid users?
- See daddy? All the keys are in alphabetical order now.
- The only problem with troubleshooting is that sometimes trouble shoots back.
- If brute force doesn’t solve your problems, then you aren’t using enough.
- I’m not anti-social; I’m just not user friendly.
- Any fool can use a computer. Many do.
- Hardware: The parts of a computer system that can be kicked.
- Computer language design is just like a stroll in the park. Jurassic Park, that is.
When blondes have more fun do they know it?
If at first you don’t succeed, skydiving is not for you.
We are born naked, wet and hungry.
Then things get worse.
Artificial intelligence is no match for natural stupidity.
Artificial intelligence robots have held a conversation with one another for the first time with surprising and surreal results.
Two PhD students at Cornell University gave voices and 2D avatars to a pair of online “chatbots”, which they named Alan and Sruthi.
Jason Yosinski and Igor Labutov explained to BBC News what happened when they left the robots to converse and why they were “stunned” at the results.