MOORE'S LAW

A Brief History of Moore's Law and The Next Generation of Computer Chips and Semiconductors

Super-powerful desktop computers, video game systems, cars, iPads, iPods, tablet computers, cellular phones, microwave ovens, high-def television... Most of the luxuries we enjoy in our daily lives are the result of tremendous advancements in computing power, made possible by the development of the transistor.
The first patent for a transistor-like device was filed in Canada in 1925 by Julius Edgar Lilienfeld; this patent, however, did not include any information about devices that would actually be built using the technology. Later, in 1934, the German inventor Oskar Heil patented a similar device, but it wasn't until 1947 that John Bardeen and Walter Brattain at Bell Telephone Laboratories produced the first point-contact transistor. During their initial testing, they built a few of them and assembled an audio amplifier, which was later demonstrated to various Bell Labs executives. What impressed the executives more than anything else was that the transistor didn't need time to warm up, as its predecessor the vacuum tube did. People immediately began to see the transistor's potential for computing. The original computers of the late 1940s were gigantic, some taking up entire rooms. These huge machines were assembled from more than 10,000 vacuum tubes and consumed a great deal of energy. A few years later, in 1954, Texas Instruments produced the first silicon transistor. In 1956, Bardeen and Brattain won the Nobel Prize in Physics along with William Shockley, who also did critically important work on the transistor.
Today, trillions of transistors are produced each year, and the transistor is considered one of the greatest technological achievements of the 20th century. The number of transistors on an integrated circuit has been doubling approximately every two years, a rate that has held strong for more than half a century. This trend was first described by Intel co-founder Gordon Moore in 1965. It came to be known as "Moore's Law," and the semiconductor industry now uses it as a guide for long-term planning and for setting R&D targets. But it's likely that our ability to double our computing power this way will eventually break down.
For years we have heard announcements from chip makers stating that they have figured out new ways to shrink transistors. But in truth, we are simply running out of space to work with. The question is: how far can Moore's Law go? We don't know for sure. We currently etch circuit patterns onto microchips with ultraviolet light, and it's this etching process that allows us to cram more and more transistors onto a chip. Once we hit layers and components that are only about five atoms thick, the Heisenberg uncertainty principle kicks in and we can no longer know exactly where an electron is. Most likely, the electrons in such a small transistor would leak out, causing the circuit to short. There are also heat problems, ultimately caused by the increased power density. Some have suggested using X-rays instead of ultraviolet light to etch the chip, but while X-rays can etch smaller and smaller components, their energy is proportionally larger, so they blast right through the silicon.
The other question is what steps we will take to find a suitable replacement for silicon when we hit that tipping point. We are, of course, looking at the development of quantum computers, molecular computers, protein computers, DNA computers, and even optical computers. If we are creating circuits the size of atoms, why not compute with atoms themselves? That is now the goal. There are, however, enormous roadblocks to overcome. First, molecular computers are so small that you can't even see them; how do you wire up something that small? Second, we still need a viable way to mass-produce them. There is a great deal of talk about quantum computers right now, but there are still hurdles to overcome, including impurities, vibrations, and decoherence. Every time we look at one of these exotic architectures to replace silicon, we find a problem. This doesn't mean we won't make tremendous advances with these different computing architectures or figure out a way to extend Moore's Law beyond 2020. We just don't quite know how yet.
So let's look at some of the things that large chip makers, labs, and think tanks are currently working on as they try to find a suitable replacement for silicon and take computing to the next level.
• I wrote a previous post, "Graphene Will Change the Way We Live," that described how IBM is already testing a 100 GHz graphene transistor, with hopes of a 1 THz processor on the horizon. Graphene has amazing electronic properties that could make it a suitable replacement for silicon. However, there is no easy method for large-scale processing of graphene-based materials, so it may be a considerable amount of time before we start seeing graphene-based computers on the shelf at Best Buy. But, like most advances in computing, it may come sooner than we think. Here is an example of a company with a new method of creating graphene by assembling atoms within a reactor.
• Researchers with the U.S. Department of Energy's Lawrence Berkeley National Laboratory and the University of California, Berkeley have successfully integrated ultra-thin layers of the semiconductor indium arsenide onto a silicon substrate to create a nanoscale transistor with excellent electronic properties.
• Researchers have harnessed chaos theory to develop a new class of CPUs based on field-programmable gate arrays (FPGAs). The researchers state that "processors that are dedicated to a single task are more efficient than a general-purpose processor like the ones Intel provides. That's why a small, low-power chip dedicated to decoding video can easily handle a task that can strain a CPU. The downside is that they're only good for the task they're made for."
• With some 2% of the world's total energy being consumed by building and running computer equipment, a pioneering research effort could shrink the world's most powerful supercomputer processors to the size of a sugar cube, IBM scientists say.
So I think the next decade of computing advancements is going to bring us gadgets and devices that today we only dream of. What technology will dominate the Post Silicon Era? What will replace Silicon Valley? No one knows. But nothing less than the wealth of nations and the future of civilization may rest on this question.

What Is the Future of Computers?
Integrated circuit from an EPROM memory microchip showing the memory blocks and supporting circuitry.
Credit: Creative Commons Attribution-Share Alike 3.0 Unported | Zephyris
In 1958, a Texas Instruments engineer named Jack Kilby cast a pattern onto the surface of an 11-millimeter-long "chip" of semiconducting germanium, creating the first ever integrated circuit. Because the circuit contained a single transistor — a sort of miniature switch — the chip could hold one "bit" of data: either a 1 or a 0, depending on the transistor's configuration.
Since then, and with unflagging consistency, engineers have managed to double the number of transistors they can fit on computer chips every two years. They do it by regularly halving the size of transistors. Today, after dozens of iterations of this doubling and halving rule, transistors measure just a few atoms across, and a typical computer chip holds 9 million of them per square millimeter. Computers with more transistors can perform more computations per second (because there are more transistors available for firing), and are therefore more powerful. The doubling of computing power every two years is known as "Moore's law," after Gordon Moore, the Intel engineer who first noticed the trend in 1965.
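To get a feel for what "doubling every two years" adds up to, here is a minimal Python sketch that projects transistor density forward from the 9 million transistors per square millimeter cited above; the two-year doubling period is taken at face value, and the ten-year horizon is just an arbitrary choice for illustration.

```python
# Rough illustration of Moore's law: transistor density doubles every two years.
def project_density(start_per_mm2: float, years: float, doubling_period: float = 2.0) -> float:
    """Project transistor density `years` from now, doubling every `doubling_period` years."""
    return start_per_mm2 * 2 ** (years / doubling_period)

if __name__ == "__main__":
    start = 9e6  # transistors per square millimeter, the figure cited above
    for years in (2, 4, 6, 8, 10):
        print(f"After {years:2d} years: ~{project_density(start, years):.1e} transistors/mm^2")
```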
Moore's law renders last year's laptop models defunct, and it will undoubtedly make next year's tech devices breathtakingly small and fast compared to today's. But consumerism aside, where is the exponential growth in computing power ultimately headed? Will computers eventually outsmart humans? And will they ever stop becoming more powerful?
The singularity
Many scientists believe the exponential growth in computing power leads inevitably to a future moment when computers will attain human-level intelligence: an event known as the "singularity." And according to some, the time is nigh.
Inventor, author and self-described "futurist" Ray Kurzweil has predicted that computers will be on par with humans within two decades. He told Time Magazine last year that engineers will successfully reverse-engineer the human brain by the mid-2020s, and by the end of that decade, computers will be capable of human-level intelligence.
The conclusion follows from projecting Moore's law into the future. If the doubling of computing power every two years continues to hold, "then by 2030 whatever technology we're using will be sufficiently small that we can fit all the computing power that's in a human brain into a physical volume the size of a brain," explained Peter Denning, distinguished professor of computer science at the Naval Postgraduate School and an expert on innovation in computing. "Futurists believe that's what you need for artificial intelligence. At that point, the computer starts thinking for itself." [How to Build a Human Brain]
What happens next is uncertain — and has been the subject of speculation since the dawn of computing.
"Once the machine thinking method has started, it would not take long to outstrip our feeble powers," Alan Turing said in 1951 at a talk entitled "Intelligent Machinery: A heretical theory," presented at the University of Manchester in the United Kingdom. "At some stage therefore we should have to expect the machines to take control." The British mathematician I.J. Good hypothesized that "ultraintelligent" machines, once created, could design even better machines. "There would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make," he wrote.
Buzz about the coming singularity has escalated to such a pitch that there's even a book coming out next month, called "Singularity Rising" (BenBella Books), by James Miller, an associate professor of economics at Smith College, about how to survive in a post-singularity world. [Could the Internet Ever Be Destroyed?]
Brain-like processing
But not everyone puts stock in this notion of a singularity, or thinks we'll ever reach it. "A lot of brain scientists now believe the complexity of the brain is so vast that even if we could build a computer that mimics the structure, we still don't know if the thing we build would be able to function as a brain," Denning told Life's Little Mysteries. Perhaps without sensory inputs from the outside world, computers could never become self-aware.
Others argue that Moore's law will soon start to break down, or that it has already. The argument stems from the fact that engineers can't miniaturize transistors much more than they already have, because they're already pushing atomic limits. "When there are only a few atoms in a transistor, you can no longer guarantee that a few atoms behave as they're supposed to," Denning explained. On the atomic scale, bizarre quantum effects set in. Transistors no longer maintain a single state represented by a "1" or a "0," but instead vacillate unpredictably between the two states, rendering circuits and data storage unreliable. The other limiting factor, Denning says, is that transistors give off heat when they switch between states, and when too many transistors, regardless of their size, are crammed together onto a single silicon chip, the heat they collectively emit melts the chip.
For these reasons, some scientists say computing power is approaching its zenith. "Already we see a slowing down of Moore's law," the theoretical physicist Michio Kaku said in a BigThink lecture in May.
But if that's the case, it's news to many. Doyne Farmer, a professor of mathematics at Oxford University who studies the evolution of technology, says there is little evidence for an end to Moore's law. "I am willing to bet that there is insufficient data to draw a conclusion that a slowing down [of Moore's law] has been observed," Farmer told Life's Little Mysteries. He says computers continue to grow more powerful as they become more brain-like.
Computers can already perform individual operations orders of magnitude faster than humans can, Farmer said; meanwhile, the human brain remains far superior at parallel processing, or performing multiple operations at once. For most of the past half-century, engineers made computers faster by increasing the number of transistors in their processors, but they only recently began "parallelizing" computer processors. To work around the fact that individual processors can't be packed with extra transistors, engineers have begun upping computing power by building multi-core processors, or systems of chips that perform calculations in parallel. "This controls the heat problem, because you can slow down the clock," Denning explained. "Imagine that every time the processor's clock ticks, the transistors fire. So instead of trying to speed up the clock to run all these transistors at faster rates, you can keep the clock slow and have parallel activity on all the chips." He says Moore's law will probably continue because the number of cores in computer processors will go on doubling every two years.
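To make the idea of parallel activity across cores concrete, here is a minimal Python sketch that spreads independent work units over several cores using the standard-library multiprocessing module; the workload is a made-up stand-in task, and real speedups depend heavily on the problem and the hardware.

```python
# Minimal sketch of multi-core parallelism: instead of making one core faster,
# split independent work units across several cores running at a modest clock.
from multiprocessing import Pool, cpu_count

def work(n: int) -> int:
    """A stand-in compute task: sum of squares up to n (purely illustrative)."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    tasks = [2_000_000] * 8               # eight independent work units (hypothetical)
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(work, tasks)   # each core handles a share of the tasks
    print(f"Ran {len(tasks)} tasks across {cpu_count()} cores")
```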
And because parallelization is the key to complexity, "In a sense multi-core processors make computers work more like the brain," Farmer told Life's Little Mysteries.
And then there's the future possibility of quantum computing, a relatively new field that attempts to harness the uncertainty inherent in quantum states in order to perform vastly more complex calculations than are feasible with today's computers. Whereas conventional computers store information in bits, quantum computers store information in qubits: particles, such as atoms or photons, whose states are "entangled" with one another, so that a change to one of the particles affects the states of all the others. Through entanglement, a single operation performed on a quantum computer theoretically allows the instantaneous performance of an inconceivably huge number of calculations, and each additional particle added to the system of entangled particles doubles the performance capabilities of the computer.
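A simple way to see why each added qubit doubles the machine's capacity is to count amplitudes: an n-qubit register is described by 2^n complex numbers. The short NumPy sketch below just counts those amplitudes as qubits are appended; it is a bookkeeping illustration, not a simulation of any real quantum algorithm.

```python
# An n-qubit state is a vector of 2**n complex amplitudes; adding one qubit
# doubles the vector via a tensor (Kronecker) product. Illustrative only.
import numpy as np

ket0 = np.array([1.0, 0.0], dtype=complex)    # a single qubit in state |0>

state = ket0
for n in range(1, 6):
    print(f"{n} qubit(s): {state.size} amplitudes")
    state = np.kron(state, ket0)              # append one more qubit
```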
If physicists manage to harness the potential of quantum computers — something they are struggling to do — Moore's law will certainly hold far into the future, they say.
Ultimate limit
If Moore's law does hold, and computer power continues to rise exponentially (either through human ingenuity or under its own ultraintelligent steam), is there a point when the progress will be forced to stop? Physicists Lawrence Krauss and Glenn Starkman say "yes." In 2005, they calculated that Moore's law can only hold so long before computers actually run out of matter and energy in the universe to use as bits. Ultimately, computers will not be able to expand further; they will not be able to co-opt enough material to double their number of bits every two years, because the universe will be accelerating apart too fast for them to catch up and encompass more of it.
So, if Moore's law continues to hold as accurately as it has so far, when do Krauss and Starkman say computers must stop growing? Projections indicate that computing will encompass the entire reachable universe, turning every bit of matter and energy into part of its circuitry, in 600 years' time.
That might seem very soon. "Nevertheless, Moore's law is an exponential law," Starkman, a physicist at Case Western Reserve University, told Life's Little Mysteries. You can only double the number of bits so many times before you require the entire universe.
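The arithmetic behind that statement is easy to check: 600 years of doubling every two years is 300 doublings, a growth factor of roughly 10^90. The tiny Python sketch below just runs that calculation; the 600-year figure is the one quoted above, and nothing else is assumed.

```python
# "Double every two years" for 600 years: how big is the growth factor?
import math

years = 600
doubling_period = 2
doublings = years // doubling_period        # 300 doublings
growth = 2 ** doublings                     # exact integer in Python
print(f"{doublings} doublings -> growth factor of about 10^{math.log10(growth):.0f}")
```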
Personally, Starkman thinks Moore's law will break down long before the ultimate computer eats the universe. In fact, he thinks computers will stop getting more powerful in about 30 years. Ultimately, there's no telling what will happen. We might reach the singularity — the point when computers become conscious, take over, and then start to self-improve. Or maybe we won't. This month, Denning has a new paper out in the journal Communications of the ACM, called "Don't feel bad if you can't predict the future." It's about all the people who have tried to do so in the past, and failed.

This story was provided by Life's Little Mysteries.
