Published in New Scientist, 11 Jan 2014
A much misunderstood quantum property is set to turn computing on its head, says Jon Cartwright
ONE foggy day in early December 1943, a giant awoke. Born in an engineering lab in north-west London, Colossus was the world’s first digital, programmable electronic computer – a tower of racks and wires that ate its way through miles of punched paper tape every hour. Its processing power would enable the Allies to quickly decipher messages from Nazi high command – and help them win the second world war.
Computers have come a long way in the 70 years since, but deep down they work in essentially the same way: by manipulating electrical charge. Guided by its punched tape, Colossus moved charge through thousands of glowing valves. In modern computers, charge passes through millions upon millions of transistors to make the texts, images and sounds that form our digital worlds. It has proved an eminently scalable approach: the average smartphone today is a million times faster than Colossus, not to mention a hundred thousand times smaller.
But charge is beginning to feel the squeeze. As components have shrunk, they have been handling fewer and fewer moving charges. There is only so far this can go before random charge fluctuations make the transistors unreliable. “We’re simply running out of electrons,” says Dan Hutcheson of VLSI Research, a company that analyses technology markets. Add to that the heat generated by packing so many transistors into a small space, and it is clear we are reaching some fundamental barriers. […]