Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Saturday, November 21, 2015

Opening the door to the Chinese Room

The idea of artificial intelligence terrifies a lot of people.

The reasons for this fear vary.  Some are repelled by the thought that our mental processes could be emulated in a machine. Others worry that if we do develop AI, it will rise up and overthrow us, à la The Matrix.  Still others are convinced that humans have something that is inherently unrepresentable -- a heart, a soul, perhaps even simply consciousness -- so any machine that appeared to be intelligent and human-like would only be a clever replica.

The people who believe that human intelligence will never be emulated in a machine usually fall back on something like John Searle's "Chinese Room Analogy" as an argument.  Searle, an American philosopher, has said that computers are simply string-conversion devices; they take an input string, manipulate it in some completely predictable way, and then return an output string.  What they do is analogous to someone sitting in a locked room with a Chinese-English dictionary who is given a string of Chinese text, and uses the dictionary to convert it to English.  There is no true understanding; it's mere symbol manipulation.
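The string-conversion picture Searle is pointing at can be sketched as a toy program.  (This is my own illustration of the idea, not Searle's formulation; the "rule book" entries are hypothetical.)  The program produces a correct-looking answer by pure lookup, with nothing anywhere that could count as understanding:

```python
# A toy "Chinese Room": pure symbol manipulation via a lookup table.
# The rule book is a hypothetical two-entry fragment; the program
# converts symbols without any representation of their meaning.
RULE_BOOK = {
    "你好": "Hello",
    "谢谢": "Thank you",
}

def convert(symbols: str) -> str:
    """Return the mapped string, or the input unchanged if no rule applies."""
    return RULE_BOOK.get(symbols, symbols)

print(convert("你好"))  # Hello
```

However large the rule book grows, the procedure stays the same: match, substitute, emit.  That, on Searle's view, is all a computer ever does.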

[image courtesy of the Wikimedia Commons]

There are two significant problems with Searle's Chinese Room.  One is the question of whether our brains themselves aren't simply string-conversion devices.  Vastly more sophisticated ones, of course; but given our brain chemistry and wiring at a given moment, it's far from a settled question whether our neural networks aren't reacting in a completely deterministic fashion.

The second, of course, is the problem that even though the woman in the Chinese Room starts out being a simple string-converter, if she keeps doing it long enough, eventually she will learn Chinese.  At that point there will be understanding going on.

Yes, says Searle, but that's because she has a human brain, which can do more than a computer can.  A machine could never abstract a language, or anything of the sort, without having explicit programming -- lists of vocabulary, syntax rules, morphological structure -- to go by.  Humans learn language starting with a highly receptive tabula rasa that is unlike anything that could be emulated in a computer.

Which was true, until this month.

A team of researchers at the University of Sassari (Italy) and the University of Plymouth (UK) has devised a network of two million interconnected artificial neurons that is capable of learning language "organically" -- starting with nothing, and using only communication with a human interlocutor as input.  Called ANNABELL (Artificial Neural Network with Adaptive Behavior Exploited for Language Learning), this network is capable of doing what AI people call "bootstrapping" or "recursive self-improvement" -- it begins with only a capacity for plasticity and improves its understanding as it goes, a feature that up till now has been considered by some to be impossible to achieve.

Bruno Golosio, head of the team that created ANNABELL, writes:
ANNABELL does not have pre-coded language knowledge; it learns only through communication with a human interlocutor, thanks to two fundamental mechanisms, which are also present in the biological brain: synaptic plasticity and neural gating.  Synaptic plasticity is the ability of the connection between two neurons to increase its efficiency when the two neurons are often active simultaneously, or nearly simultaneously.  This mechanism is essential for learning and for long-term memory.  Neural gating mechanisms are based on the properties of certain neurons (called bistable neurons) to behave as switches that can be turned "on" or "off" by a control signal coming from other neurons.  When turned on, the bistable neurons transmit the signal from a part of the brain to another, otherwise they block it.  The model is able to learn, due to synaptic plasticity, to control the signals that open and close the neural gates, so as to control the flow of information among different areas.
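The two mechanisms Golosio describes can be sketched in a few lines.  (This is a minimal toy of my own, not ANNABELL's actual code; the function names and the learning rate are illustrative assumptions.)  Plasticity strengthens a connection when its two neurons fire together; gating lets a control signal switch transmission between areas on or off:

```python
# Toy versions of the two mechanisms quoted above (not Golosio's code).

def hebbian_update(weight, pre_active, post_active, rate=0.1):
    """Synaptic plasticity: strengthen a connection when the
    pre- and postsynaptic neurons are active together."""
    if pre_active and post_active:
        weight += rate
    return weight

def gated_transmit(signal, gate_open):
    """Neural gating: a bistable gate passes the signal between
    brain areas only when switched on by a control signal."""
    return signal if gate_open else 0.0

# Repeated co-activation strengthens the synapse...
w = 0.0
for _ in range(5):
    w = hebbian_update(w, pre_active=True, post_active=True)
print(w)

# ...and the gate decides whether that activity flows onward.
print(gated_transmit(w, gate_open=True))
print(gated_transmit(w, gate_open=False))   # 0.0
```

The interesting part of ANNABELL, per the quote, is that the network learns to control its own gates via plasticity -- routing information between areas is itself a learned behavior, not a pre-coded one.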
Which in my mind blows a neat hole in the contention that the human mind has some je ne sais quoi that will never be copied in a mechanical device.  This simple model (and compared to an actual brain, it is rudimentary, however impressive Golosio's team's achievement is) is doing precisely what an infant's brain does when it learns language -- taking in input, abstracting rules, and adjusting as it goes so that it improves over time.

Myself, I think this is awesome.  I'm not particularly concerned about machines taking over the world -- for one thing, a typical human brain has about 100 billion neurons, so to have something that really could emulate anything a human could do would take scaling up ANNABELL by a factor of 50,000.  (That's assuming that an intelligent mind couldn't operate out of a brain that was more compact and efficient, which is certainly a possibility.)  I also don't think it's demeaning to humans that we may be "nothing more than meat machines," as one biologist put it.  This doesn't diminish our own personal capacity for experience, it just means that we're built from the same stuff as the rest of the universe.
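The back-of-the-envelope scaling above is just the ratio of neuron counts (taking the rough figure of 100 billion neurons for a human brain, and ANNABELL's two million artificial neurons):

```python
# Rough scaling estimate: how much bigger a human brain is than ANNABELL,
# counted in neurons.  Both figures are order-of-magnitude approximations.
human_neurons = 100_000_000_000   # ~10^11 neurons in a human brain
annabell_neurons = 2_000_000      # artificial neurons in ANNABELL

scale_factor = human_neurons // annabell_neurons
print(scale_factor)  # 50000
```

As noted, this assumes a mind would need a brain-sized network at all -- a smaller, more efficient architecture might well suffice.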

Which is sort of cool.

Anyhow, what Golosio et al. have done is only the beginning of what appears to be a quantum leap in AI research.  As I've said many times, and about many things: I can't imagine what wonders await in the future.


  1. The idea of free will is something humans are loath to surrender. I suppose we have no choice.

  2. First we would have to decide what constitutes free will. No being exists immune to its experience and physical structure.