Another View: Moore's Law Slowing

Three weeks ago this blog presented Moore's Law is Alive and Well. But reader reaction was mixed because the post emphasized growth in computing power rather than the number of transistors per chip. In fact, Moore's Law has two meanings in electronics. The strict original circuit-design definition was that the number of transistors on a 2D integrated circuit chip would double for the same cost about every 18-24 months. How those transistors are used has been left unstated, but the assumption has been that the resulting processing power will take advantage of the additional transistors. As well, Moore's Law is merely a statistical observation of trends in electronics; it is not a law of physics or electronics.

However, Moore's Law has a second, broader meaning or interpretation – that computing power on a variety of electronic devices will double every 18-24 months based on what is done with those transistors. Computing power can be measured with various language benchmarks and program performance tests, but broadly it has stayed in lockstep with on-chip transistor doubling. AI researcher Ray Kurzweil has championed this broader interpretation and has charted its performance over a 100-year period:
Ray Kurzweil’s plotting of Calculations/second over 20th century
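The arithmetic behind that chart is simple compound doubling. A minimal sketch of the growth factors implied by the 18- and 24-month doubling periods quoted above:

```python
# How many doublings fit in a given span, and the resulting growth factor.
# The 18- and 24-month periods are the two ends of the Moore's Law range.

def growth_factor(years, doubling_period_months):
    """Multiplicative growth after `years` at one doubling
    per `doubling_period_months` months."""
    doublings = (years * 12) / doubling_period_months
    return 2 ** doublings

# Over a single decade:
print(growth_factor(10, 18))  # ~101x at an 18-month doubling period
print(growth_factor(10, 24))  # 32x at a 24-month doubling period
```

The gap between those two figures is why the exact doubling period matters so much when people argue over whether the law still holds.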

So in the popular mind, Moore's Law is about effective computing power as much as the number of transistors per standard chip area. Thus it is of interest that there have been two recent and telling cautions on Moore's Law.

Two New Warnings About Moore's Law's Continuation

There have been two notable cautions from experts in electronics who have run up against physical limits that appear to spell the demise of Moore's Law in their products. The first is SanDisk, the maker of flash chips for cameras, mobile devices, and PCs. SanDisk flash chips have an enviable, better-than-Moore's-Law record: 14 times in the last 19 years they have doubled in capacity for the same price, so the current 64GB chip sells for roughly the same price as the original 500MB one. The NYTimes described the barriers that SanDisk was encountering:

“When we started out we had about one million electrons per cell,” or locations where information is stored on a chip, he said. “We are now down to a few hundred.” This simply can’t go on forever, he noted: “We can’t get below one.” SanDisk and other flash memory makers have figured out how to cram even more information into that tiny cell. Until a few years ago, each of those cells worked the way most computer memory does — it represented either a zero or a one. Now the chip can actually count how many electrons are in a cell, and depending on the number it can write and read up to 16 states (recording a number between zero and 15, or four bits to a computer).

Let’s stop for a second to take stock of the wonder of all this. The last flash memory card I bought for my camera held two gigabytes (16 billion bits). It cost me $6. And somewhere inside it is something that is counting electrons 40 at a time. An electron, in case you forgot your high-school physics, has a radius of 2.8179 × 10⁻¹⁵ meters. In layman’s terms it is pretty much the smallest thing you could ever count. The problem here is that the way current flash technology stores those electrons, they don’t always follow instructions, especially as the memory card gets older. “When you have a billion cells, you cannot uniformly control them to one electron,” Mr. Harari said. “If I want 40 electrons, plus or minus two electrons, I can do that when the device is new. But seven years out, it will start to smear.” In other words, the electron count will start to vary from one cell to the next.
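The multi-level-cell arithmetic in the quote can be sketched directly: a cell stores log2 of the number of distinguishable charge states, so 16 states yield 4 bits, and every added state thins the electron margin separating adjacent levels. The 640-electron budget below is illustrative, chosen to match the quote's roughly-40-electrons-per-count figure:

```python
import math

def bits_per_cell(states):
    # Each distinguishable charge state encodes log2(states) bits.
    return math.log2(states)

def electrons_per_state(total_electrons, states):
    # Electrons available to separate adjacent states; as states
    # multiply, the margin between levels shrinks.
    return total_electrons / states

print(bits_per_cell(2))    # 1.0 bit: the classic one-bit cell
print(bits_per_cell(16))   # 4.0 bits: the 16-state cell in the quote
# With only a few hundred electrons per cell, 16 states leave a
# margin of just tens of electrons between adjacent levels:
print(electrons_per_state(640, 16))  # 40.0
```

This is why Harari's "plus or minus two electrons" tolerance matters: the margin between levels is already within an order of magnitude of the noise.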

So SanDisk engineers incorporated a number of methods to continue to improve their chip performance. They had already gone to multi-state half-byte cells, in the style of the memristors considered by CPU chip developers. They started to work more concertedly on layered 3D designs, having bought Matrix Semiconductor for this purpose. Read-only designs of 4 and 8 layers have been successfully tested, but not read-write. The engineers are also working on more precise controllers on the cells plus methods to stuff more electrons into them. But SanDisk CEO Eli Harari saw 2-3 more generations of doubling, after which SanDisk would be hard pressed to continue with Moore's Law improvements. More than 3 years later, SanDisk introduced 64GB flash memory, which doubled its previous maximum size, providing evidence of the slowdown in Moore's Law improvements.

AMD Sees Moores Law Foundering on Economic Barriers

AMD has recently hit the 28nm-to-20nm economic barrier. The process investments for fab facilities and for improved equipment, processes, and yields are so high that it does not make economic sense to pursue the Moore's Law doubling of transistors without getting sufficient returns on the capital invested. John Gustafson, chief graphics product architect at AMD, has said that Moore's Law is endangered because it actually refers to a doubling of transistors that are economically viable to produce, and for many chip manufacturers that doubling no longer makes economic sense.

So the current problem is that investment in fabrication equipment to produce 30 to 10 nanometer line sizes on ever larger silicon dies is increasing exponentially in cost. Thus the time to recover those costs stretches well beyond the 18-24 months in which doubling is supposed to occur. In short, AMD is arguing that the economics of current chip development demands a cost-recovery cycle longer than the Moore's Law period. However, Intel, the leader in chip production, makes so many chips that it is able to amortize its latest 20 and 14 nanometer chip lines in less than 2 years. So Intel is still charging down the shrinking-chip-size route. An IEEE report spells out some of these costs:
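The AMD-versus-Intel difference comes down to simple payback arithmetic. A minimal sketch, with entirely hypothetical dollar figures (the source gives none), showing how volume decides whether capital is recovered inside the Moore's Law window:

```python
def payback_months(fab_cost_billion, monthly_margin_billion):
    # Months needed to recover fab capital from chip gross margin.
    return fab_cost_billion / monthly_margin_billion

# Hypothetical numbers: a $5B fab recovered at $0.15B/month takes
# ~33 months, longer than the 18-24 month doubling period -- AMD's
# argument. A high-volume producer earning $0.30B/month recovers
# in ~17 months and can keep shrinking -- Intel's position.
print(payback_months(5.0, 0.15))  # ~33.3 months
print(payback_months(5.0, 0.30))  # ~16.7 months
```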

These threats are in the form of a convergence of three waves which requires major necessary changes relating to: a) lithography below 0.13 micrometers, which involves printing and aligning at submicron wavelength dimensions with new unproven laser/lens systems; b) Cu/low-κ interconnect technology, which is facing major challenges in achieving commercially viable yields; and c) 300 mm wafer size conversion, which requires an extensive retooling of the entire industry.

This is not an unanticipated trend, and it led to Moore's Second Law, proposed by venture capitalist Arthur Rock:

As the cost of computer power to the consumer falls, the cost for producers to fulfill Moore's Law follows an opposite trend: R&D, manufacturing, and test costs have increased steadily with each new generation of chips. Rising manufacturing costs are an important consideration in the sustaining of Moore's Law. This has led to the formulation of “Moore's second law”, aka Rock's law, which is that the capital cost of a semiconductor fab also increases exponentially over time.
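Rock's law is usually stated as fab cost doubling roughly every four years. Taking that as an illustrative assumption (the four-year period and the $1B starting cost below are hypothetical, not from the source), the cost curve can be sketched with the same compound-doubling arithmetic as Moore's Law itself:

```python
def fab_cost(start_cost_billion, years, doubling_period_years=4):
    # Rock's law: the capital cost of a semiconductor fab
    # doubles roughly every `doubling_period_years` years.
    return start_cost_billion * 2 ** (years / doubling_period_years)

# Starting from a hypothetical $1B fab:
for years in (0, 4, 8, 12):
    print(years, fab_cost(1.0, years))  # 1.0, 2.0, 4.0, 8.0 ($B)
```

Two exponentials running against each other: transistor counts double every two years while the cost of the factory that produces them doubles every four, which is precisely the squeeze AMD describes.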

In sum, the argument here is that the cost of manufacturing, not physical limits and barriers, will be instrumental in slowing Moore's Law's progression in usable computing power.

A Short History of Moore's Law's Ebb and Flow

But Moore's Law has encountered seemingly impossible barriers in the past. Several times in the past 40 years, Moore's Law has appeared to come up against physical limits in cell designs and against cost and reliability barriers in chip processing. The first important barrier was the power requirements of the fast TTL and NMOS circuits of the 1970s, along with increasing difficulties in etching ever smaller circuit layouts. CMOS circuits became the breakthrough, as described in Wikipedia:

CMOS circuitry dissipates less power than logic families with resistive loads. Since this advantage has increased and grown more important, CMOS processes and variants have come to dominate, thus the vast majority of modern integrated circuit manufacturing is on CMOS processes. As of 2010, CPUs with the best performance per watt each year have been CMOS static logic since 1976.

The next limit encountered was the inability of the mercury-lamp photolithography of the early 1980s to etch circuit patterns below 1000 nanometers at the rates required for economical production. But IBM's development of excimer laser technology in 1982 has allowed etching down to 10 nanometer levels with dozens of layering steps. The result is that “excimer laser lithography has been a crucial factor in the continued advance of Moore's Law, enabling minimum feature sizes in chip manufacturing to shrink from 0.5 micrometer in 1990 to 45 nanometers and below in 2010. This trend is expected to continue into this decade for even denser chips, with minimum features approaching 10 nanometers”.

The third recent limit has been the GHz barrier in chips. Here again, the power consumption needed to achieve ever higher clock speeds with reliable accuracy became an effective barrier to simply raising the clock rate. So instead of increasing clock speed, chip makers added more CPUs, or cores, per chip: first dual-core, then quad-core, and now octa-core and greater. This requires more sophisticated operating system process control as well as parallel processing/threading methods in programming languages. So for the first time, software improvements have been as important as hardware and chip improvements.
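The shift from faster clocks to more cores means software must split work across cores explicitly to see any benefit. A minimal Python sketch of that pattern (process-based rather than thread-based, since CPython threads do not parallelize CPU-bound work):

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    # CPU-bound work: sum a half-open range of integers.
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    # Split [0, n) into equal chunks, one per worker/core,
    # then combine the partial results.
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs the remainder
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(1_000_000))  # same result as sum(range(1_000_000))
```

The extra code for chunking and recombining is exactly the burden multi-core chips pushed onto programmers: the hardware no longer speeds up a single sequential instruction stream for free.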

And new techniques such as 3D layered circuits, more sophisticated cell interconnects, and specialized coatings have extended Moore's Law performance for 3-8 years, varying with the industry spokesperson. But the Department of Defense is so concerned that Moore's Law will not be sustained for the next 10-15 years, endangering their Big Data analysis and embedded critical control-and-response systems, that they have already invested multi-millions in basic Moore's Law preserving/extending research. The program is called PERFECT, Power Efficiency Revolution for Embedded Computing Technology, and as seen below it has started to affect basic approaches to chip design and production.

Some Current Approaches to Moore's Law Extension

Like Mark Twain's quip that reports of his death were greatly exaggerated, so has been the naysaying about Moore's Law over the past 10-15 years.

The DARPA interest in prolonging Moore's Law performance has broad electronics industry support. Here are some promising technologies and a rough assessment of their tradeoffs in preserving Moore's-Law-like improvements in computing power.

Graphene – IBM and Georgia Tech are pursuing the use of graphene in ultrathin layers. The advantage of graphene's “chicken-wire” thin-layer lattice is that it can conduct electricity with virtually no resistance, very little heat generation, and less power consumption than silicon. However, just as with the HP molecular-level methods cited below, the trick is to be able to lay out circuits effectively at nanometer sizes. The Georgia Tech research has developed silicon carbide deposition techniques with graphene coatings, using methods from silicon chip-making. The nanometer processing still has nearby limits, but the graphene layers provide the performance improvement opportunities.

IBM low-power change of state – The advantage of IBM's recent development in metal-oxide change-of-state technology is that continuous power is not required; rather, microbursts of electricity change the state, and reads of the state require less energy. But the experimental technology just to create/print circuits is 2-4 years away and still not scaled to meet massive circuit layouts.

HP and molecular-level cells – HP had a big splash in 2003-2004 with the possibility of molecular chip designs. But the problems of applying layouts on substrates at the 1-5 nanometer size have proved difficult. However, some of the processing technology has been instrumental in the memristor breakthroughs just below.

HP memristors – SanDisk's flash memory limits cited above may have a solution in the use of memristors, although SanDisk has already gone to half-byte memory cells. HP's memristor technology has the advantages of non-volatility, and therefore lower power requirements, along with potentially great capacity. But the change to a new processing technology has seen engineering delays.

Wikipedia – cites three promising chip design technologies:

In February 2010, researchers at the Tyndall National Institute in Cork, Ireland announced a breakthrough in transistors with the design and fabrication of the world’s first junctionless transistor. The research, led by Professor Jean-Pierre Colinge, was published in Nature Nanotechnology and describes a control gate around a silicon nanowire that can tighten around the wire to the point of closing down the passage of electrons without the use of junctions or doping. The researchers claim that the new junctionless transistors can be produced at 10-nanometer scale using existing fabrication techniques.
In April 2011, a research team at the University of Pittsburgh announced the development of a single-electron transistor 1.5 nanometers in diameter made out of oxide-based materials. According to the researchers, three “wires” converge on a central “island” which can house one or two electrons. Electrons tunnel from one wire to another through the island. Conditions on the third wire result in distinct conductive properties, including the ability of the transistor to act as a solid-state memory.
In February 2012, a research team at the University of New South Wales announced the development of the first working transistor consisting of a single atom placed precisely in a silicon crystal (not just picked from a large sample of random transistors). Moore's Law projected this milestone to be reached in the lab by 2020.

Clearly these methods will require major shifts in design and production. Thus they are even further from fruition than the previously discussed process improvements. But both industry and university research groups are busy working on the next generation of computing technology with Moore's Law performance clearly in mind.

However, all of these technologies underline the difficulty of moving away from silicon to molecular chips or graphene, where many of the underlying manufacturing processes will have to change at very high entry costs. Thus the risk, should they not deliver the expected performance and reliability, is even greater. Yet despite the obstacles, the electronics industry's past successes, with the addition of DARPA funding, have spurred a great deal of interest in finding how to preserve Moore's Law in the electronics sector.


The consensus is that Moore's Law may be reaching an inflection point. Traditional silicon-based processes, with their dependence on deep-ultraviolet lithography, may require a transition to non-traditional technologies to continue the 2-year doubling in transistors and resultant computing power. A range of stopgap methods may carry Moore's Law improvements for the next 2-5 years, but there may be a distinct slowdown, if the industry is not already in one (basic CPU speed has not come close to doubling in the past 3-4 years, although silicon-enabled features have grown with a proliferation of sensors, storage capacity, and communication capabilities).

Hence a period of reculer pour mieux sauter – stepping back to better leap forward – may be taking place right now. The electronics industry is shifting its focus from hardware to software to take advantage of dozens of GPU chips, core processors, on-chip memory caches, and heterogeneous processors. Threads and parallel processing technologies are already being used. So the industry has already produced a wide range of improved results that have mitigated any actual slowdown in transistors produced per chip. Today's improvements derive more from better performance of the existing computing elements on the chip.

Finally, here is a cautionary advisory on why Moore's-Law-like exponential improvements in performance cannot be applied to other technologies like batteries, solar cells, or bio-processes. The movement of electrons in semi-crystalline substrates like silicon operates at a microscopic scale and in a simplified environment not easily duplicated by chemical battery ions or biochemical systems. So the next few years will be interesting for electronics. The question will be how well and how fast the inevitable transition from silicon to other “chip” technologies can take place. Will the industry be able to extract more performance from smaller increases in transistor counts? And as the industry moves to improve its performance with more advanced interfaces and software-based methods, will this allow out-of-the-silicon-box improvements in chip technology to emerge and restore Moore's Law as a driver of innovation in the industry?