CRN Science & Technology Essays - 2005
"Four
stages of acceptance: 1) this is worthless nonsense; 2) this is an interesting,
but perverse, point of view; 3) this is true, but quite unimportant; 4) I always
said so."
— Geneticist J.B.S. Haldane,
on the stages scientific theory goes through
Each issue of the C-R-Newsletter features a brief article explaining technical aspects of advanced nanotechnology. They are gathered in these archives for your review. If you have comments or questions, please email Research Director Chris Phoenix.
1. Advantages of Engineered Nanosystems (January 2005)
2. What Is Molecular Manufacturing? (February 2005)
3. Information Delivery for Nanoscale Construction (March 2005)
4. Protein Springs and Tattoo Needles — Work in progress at CRN (April 2005)
5. Molecular Manufacturing vs. Tiny Nanobots (May 2005)
6. Sudden Development of Molecular Manufacturing (June 2005)
7. Fast Development of Nano-Manufactured Products (July 2005)
8. Molecular Manufacturing Design Software (August 2005)
9. Early Applications of Molecular Manufacturing (October 2005)
10. Notes on Nanofactories (November 2005)
11. Simple Nanofactories vs. Floods of Products (December 2005)
2004 Essays Archive
2006 Essays Archive
2007 Essays Archive
2008 Essays Archive
Simple Nanofactories vs. Floods of Products
Chris Phoenix, Director of Research, Center for Responsible Nanotechnology
In last month's essay,
I explained why even the earliest meter-scale nanofactories will necessarily
have a high throughput, manufacturing their own mass in just a few hours. I also
explained how a nanofactory can fasten together tiny functional
blocks—nanoblocks—to make a meter-scale product. The next question is what range
of products an early nanofactory would be able to build.
For several reasons, it is important to know the range and functionality of the
products that the nanofactory will produce, and how quickly new products can be
developed. Knowing these factors will help to estimate the economic value of the
nanofactory, as well as its impacts and implications. The larger the projected value, the sooner a nanofactory is likely to be built; the more powerful an early
nanofactory is and the faster new products appear, the more disruptive it can
be.
Because a large nanofactory can be built only by another nanofactory, even the
earliest nanofactories will be able to build other nanofactories. This means
that the working parts of the nanofactory will be available as components for
other product designs. From this reasoning, we can begin to map the lower bound
of nanofactory product capabilities.
This essay is a demonstration of how CRN's thinking and research continue to
evolve. In 2003, I published a peer-reviewed paper called "Design
of a Primitive Nanofactory" in which I described the simplest nanofactory I
could think of. That nanofactory had to perform several basic functions, such as transporting components of various sizes, which implied the need for motors and mechanical components in a corresponding variety of sizes. However, not long after that paper was published, an even simpler approach was proposed by John Burch and Eric Drexler.
Their approach can build large products without ever having to handle large
components; small blocks are attached rapidly, directly to the product.
The planar assembly approach to building products is more flexible than the
convergent assembly approach, and can use a much more compact nanofactory.
Instead of having to transport and join blocks of various sizes within the
nanofactory, it only needs to transport tiny blocks from their point of
fabrication to the area of the product under construction. (The Burch/Drexler
nanofactory does somewhat more than this, but their version could be
simplified.) This means that the existence of a nanofactory does not, as I
formerly thought, imply the existence of centimeter-scale machinery. A planar
nanofactory can probably be scaled to many square centimeters without containing
any moving parts larger than a micron.
Large moving parts need to slide and rotate, but small moving parts can be built
to flex instead. It is theoretically possible that the simplest nanofactory may
not need much in the way of bearings. Large bearings could be simulated by
suspending the moving surface with numerous small load-bearing rollers or
"walkers" that could provide both low-friction motion and power. This might
actually be better than a full-contact surface in some ways; failure of one
load-bearing element would not compromise the bearing's operation.
Another important question is what kind of computers the nanofactory will be
able to build. Unlike my "primitive nanofactory," a simple planar-assembly
nanofactory may not actually need embedded general-purpose computers (CPUs). It
might have few enough different components that the instructions for building
all the components could be fed in several times over during construction, so
that information storage and processing within the nanofactory might be minimal.
But even a planar-assembly nanofactory, as currently conceived, would probably
have to incorporate large amounts of digital logic (computer-like circuitry) to
process the blueprint file and direct the operations of the nanofactory
fabricators. This implies that the nanofactory's products could contain large
numbers of computers. However, the designs for the computers will not
necessarily exist before they are needed for the products.
Any nanofactory will have to perform mechanical motions, and will need a power
source for those motions. However, that power source may not be suitable for all
products. For example, an early nanofactory might use chemicals for power, but it seems more likely to me that it would use electricity: electric motors are simpler than most chemical processing systems, because chemical systems need to deliver chemicals and remove waste products, while electrical systems only need wires. In that case, products could be electrically powered; it should not
be difficult to gang together many nanoscale motors to produce power even for
large products.
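To make "ganging together" concrete, here is a back-of-envelope sketch in Python; the 10^9 W/kg power density is my assumption (roughly a million times a conventional motor, in line with the scaling-law claims made elsewhere in these essays), not a figure from this essay.

```python
# Back-of-envelope: mass of ganged nanoscale motors needed to power a product.
# Assumed figure: ~10^9 W/kg, a million times a conventional ~1 kW/kg motor.
MACRO_POWER_DENSITY = 1e3                        # W/kg, conventional motor
NANO_POWER_DENSITY = MACRO_POWER_DENSITY * 1e6   # W/kg, assumed nanoscale value

product_power = 100e3                            # W, a car-sized drivetrain
motor_mass_g = product_power / NANO_POWER_DENSITY * 1e3
print(f"Motors for a {product_power / 1e3:.0f} kW product: {motor_mass_g:.1f} g")
# -> 0.1 g of motors, easily distributed along axles or surfaces
```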
The ability to fasten nanoscale blocks to selected locations on a growing
product implies the ability to build programmable structures at a variety of
scales. At the current level of analysis, the existence of a large nanofactory
implies the ability to build other large structures. Because the nanofactory
would not have to be extremely strong, the products might also not be extremely
strong. Further analysis must wait for more information about the design of the
nanofactory.
Sensing is an important part of the functionality of many products. An early
nanofactory might not need many different kinds of sensing, because its
operations would all be planned and commands delivered from outside. One of the
benefits of mechanosynthesis
of highly cross-linked covalent solids is that any correctly built structure
will have a very precise and predictable shape, as well as other properties.
Sensing would be needed only for the detection of errors in error-prone
operations. It might be as simple as contact switches that cause operations to
be retried if something is not in the right place. Other types of sensors might
have to be invented for the products they will be used in.
Nanofactories will not need any special appearance, but many products will need
to have useful user interfaces or attractive appearances. This would require
additional R&D beyond what is necessary for the
nanofactory.
The planar assembly approach is a major simplification relative to all previous
nanofactory approaches. It may even be possible to build wet-chemistry
nanofactory-like systems, as described in my
NIAC report
that was completed in spring 2005, and bootstrap incrementally from them to
high-performance nanofactories. Because of this, it seems less certain that the
first large nanofactory will be followed immediately by a flood of products.
A flood of products still could occur if the additional product functionality
were pre-designed. Although pre-designed systems will inevitably have bugs that
will have to be fixed, rapid prototyping will help to reduce turnaround time for
troubleshooting, and using combinations of well-characterized small units should
reduce the need for major redesign. For example, given a well-characterized
digital logic, it should not be more difficult to build a CPU than to write a
software program of equivalent complexity—except that, traditionally, CPUs have required months in the semiconductor fab to build each new version of the hardware.
An incremental approach to developing molecular manufacturing might start with a
wet-chemical self-assembly system, then perhaps build several versions of
mechanosynthetic systems for increasingly higher performance, then start to
develop products. Such an incremental approach could require many years before
the first general-purpose product design system was available. On the other
hand, a targeted development program probably would aim at a dry
mechanosynthetic system right from the start, perhaps bypassing some of the wet
stages. It would also pre-design product capabilities that were not needed for
the basic nanofactory. By planning from the start to take advantage of the
capabilities of advanced nanofactories, a targeted approach could develop a
general-purpose product design capability relatively early, which then would
lead to a potentially disruptive flood of products.
Notes on Nanofactories
Chris Phoenix, Director of Research, Center for Responsible Nanotechnology
This month's science essay is prompted by several questions about nanofactories
that I've received over the past few months. I'll discuss the way in which
nanofactories combine nanoscale components into large integrated products; the
reason why a nanofactory will probably take about an hour to make its weight in
product; and how to cool a nanofactory effectively at such high production
rates.
In current nanofactory designs, sub-micron components are made at individual
workstations and then combined into a product. This requires some engineering
above and beyond what would be needed to build a single workstation. Tom Craver,
on
our blog, suggested that there might be a transitional step, in which
workstations are arranged in a two-dimensional sheet and make a thin sheet of
product. The sheet of manufacturing systems would not have to be flat; it could
be V-folded, and perhaps a solid product could be pushed out of a V-folded
arrangement of sheets. With a narrow folding angle, the product might be
extruded at several times the mechanosynthetic deposition rate.
Although the V-fold idea is clever, I think it's not necessary. Once you can
build mechanosynthetic systems that can build sheets of product, you're most of
the way to a 3D nanofactory. For a simple design, each workstation produces a
sub-micron "nanoblock" of product (each dimension being the thickness of the
product sheet) rather than a connected sheet of product. Then you have the
workstations pass the blocks "hand over hand" to the edge of the workstation
sheet. In a primitive nanofactory design, much of the operational complexity
would be included in the incoming control information rather than the
nanofactory's hardware. This implies that each workstation would have a
general-purpose robot arm or other manipulator capable of passing blocks to the
next workstation.
After the blocks get to the edge of the sheet, they are added to the product.
Instead of the product being built incrementally at the surface of V-folded
sheets, the sheets are stacked fully parallel, just like a ream of paper, and
the product is built at the edge of the ream.
Three things will limit the product 'extrusion' speed:
- The block delivery speed. This would be about 1 meter per
second, a typical speed for mechanisms at all scales. This is not a
significant limitation.
- The speed of fastening a block in place. Even a
100-nanometer block has plenty of room for nanoscale mechanical fasteners that
can basically just snap together as fast as the blocks can be placed.
Fasteners that work by molecular reactions could also be fast.
- The width (or depth, depending on your point of view) of
the sheet: how many workstations are supplying blocks to each
workstation-width edge-of-sheet. The width of the sheet stack is limited by
the ability to circulate cooling fluid, but it turns out that even micron-wide
channels can circulate fluid for several centimeters at moderate pressure. So
you can stack the sheets quite close together, making a centimeter-thick slab.
With 100-nanometer workstations, that will have tens of thousands of workstations supplying each 100-nanometer-square edge-of-stack area. If a workstation takes an hour to make a 100-nanometer block, then you're depositing several millimeters per hour (see the sketch just below this list). That's if you build the product solid; if you provide a way to shuffle blocks around at the product-deposition face, you can include voids in the product and 'extrude' much faster, perhaps a millimeter per second.
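Putting these numbers together (a minimal sketch in Python; the workstation count per column and the void fraction are illustrative assumptions that drive the result):

```python
# Deposition-rate arithmetic for the stacked-sheet nanofactory described above.
block_size = 100e-9            # m, edge of one nanoblock
workstations_per_column = 3e4  # tens of thousands feeding each edge site
block_build_time = 3600.0      # s, one hour per 100-nm block per workstation

blocks_per_second = workstations_per_column / block_build_time
solid_rate = blocks_per_second * block_size            # m/s of solid product
print(f"Solid extrusion: {solid_rate * 3600 * 1e3:.1f} mm/hour")  # ~3 mm/hour

void_fraction = 0.999          # mostly-empty products deposit much faster
print(f"With voids: {solid_rate / (1 - void_fraction) * 1e3:.2f} mm/s")  # ~0.83
```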
Tom pointed out that a nanofactory that built products by
block deposition would require extra engineering in several areas, such as block
handling mechanisms, block fasteners, and software to control it all. All this
is true, but it is the type of problem we have already learned to solve. In some
ways, working with nanoblocks will be easier than working with today's
industrial robots; surface forces will be very convenient, and gravity will be
too weak to cause problems.
On the same blog post, Jamais Cascio
asked why I keep saying that a nanofactory will take about an hour to make
its weight of product. The answer is simple: If the underlying technology is
much slower than that, it won't be able to build a kilogram-scale nanofactory in
any reasonable time. And although advanced nanofactories might be somewhat
faster, a one-hour nanofactory would be revolutionary enough.
A one-kilogram one-hour nanofactory could, if supplied with enough feedstock and
energy, make thousands of tons of nanofactories or products in a single day. It
doesn't much matter if nanofactories are faster than one hour (3600 seconds).
Numbers a lot faster than that start to sound implausible. Some bacteria can
reproduce in 15 minutes (900 seconds). Scaling laws suggest that a 100-nm
scanning probe microscope can build its mass in 100 seconds. (The
non-manufacturing overhead of a nanofactory—walls, computers, and so on—would
probably weigh less than the manufacturing systems, imposing a significant but
not extreme delay on duplicating the whole factory.) More advanced
molecule-processing systems could, in theory, process their mass even more
quickly, but with reduced flexibility.
On the slower side, the first nanofactory can't very well take much longer than
an hour to make its mass, because if it did, it would be obsoleted before it
could be built. It goes like this: A nanofactory can only be built by a smaller
nanofactory. The smallest nanofactory will have to be built by very difficult
lab work. So you'll be starting from maybe a 100-nm manufacturing system (10^-15 grams) and doubling sixty times to build a 10^3 gram nanofactory. Each
doubling takes twice the make-your-own-mass time. So a one-hour nanofactory
would take 120 hours, or five days. A one-day nanofactory would take 120 days,
or four months. If you could double the speed of your 24-hour process in two
months (which gives you sixty day-long "compile times" to build increasingly
better hardware using the hardware you have), then the half-day nanofactory
would be ready before the one-day nanofactory would.
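The bootstrap arithmetic above is easy to check with a few lines of Python (a minimal sketch; the factor of two follows the build-a-separate-larger-factory convention used here, and factor=1 anticipates the incorporation idea discussed next):

```python
# Bootstrap-time arithmetic: from a 10^-15 g seed to a 10^3 g nanofactory.
import math

doublings = math.log2(1e3 / 1e-15)   # ~60 doublings

def bootstrap_days(replication_hours, factor=2.0):
    """Each doubling takes `factor` x the make-your-own-mass time:
    factor=2 when a factory builds a separate factory twice its size,
    factor=1 when the smaller factory is incorporated into the larger."""
    return doublings * factor * replication_hours / 24.0

print(f"{doublings:.0f} doublings")               # 60
print(f"{bootstrap_days(1):.1f} days")            # one-hour factory: ~5 days
print(f"{bootstrap_days(24):.0f} days")           # one-day factory: ~120 days
print(f"{bootstrap_days(1, factor=1):.1f} days")  # incorporated: ~2.5 days
```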
Tom Craver pointed out that if the smaller nanofactory can be incorporated into
the larger nanofactory that it's building, then doubling the nanofactory mass
would take only half as long. So, a one-day nanofactory might take only two
months, and a one-hour nanofactory less than three days. Tom also pointed out
that if a one-day tiny-nanofactory is developed at some point, and its size is
slowly increased, then when the technology for a one-hour nanofactory is
developed, a medium-sized one-hour nanofactory could be built directly by the
largest existing one-day nanofactory, saving part of the growing time.
In my "primitive
nanofactory" paper, which used a somewhat inefficient physical architecture
in which the fabricators were a fraction of the total mass, I computed that a
nanofactory on that plan could build its own mass in a few hours. This was using
the Merkle pressure-controlled fabricator (see "Casing an Assembler"), with a single order-of-magnitude speedup to go from pressure-driven operation to direct drive.
In summary, the one-hour estimate for nanofactory productivity is probably
within an order of magnitude of being right.
The question about cooling a nanofactory was asked at a talk I gave a few weeks
ago, and I don't remember who asked it. To build a kilogram per hour of diamond
requires rearranging on the order of 10^26 covalent bonds in an hour. The carbon-carbon bond energy is approximately 350 kJ/mol; at roughly two bonds per atom, that comes to about 60 MJ/kg. Spread over an hour, that much energy would release 16 kilowatts, about as much as ten plug-in electric heaters.
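For readers who want to check the figures, here is the same estimate step by step in Python; the two-bonds-per-atom count for bulk diamond is the only added assumption:

```python
# The heat estimate above, reproduced step by step.
AVOGADRO = 6.022e23
bond_energy = 350e3   # J/mol, approximate carbon-carbon covalent bond energy
molar_mass = 12e-3    # kg/mol, carbon

# Bulk diamond has four bonds per atom, each shared between two atoms,
# so roughly two bonds per atom:
bonds_per_kg = 2 * AVOGADRO / molar_mass       # ~1e26 bonds
energy_per_kg = 2 * bond_energy / molar_mass   # ~5.8e7 J/kg, i.e. ~60 MJ/kg

power_kw = energy_per_kg / 3600.0 / 1e3        # one kilogram built in one hour
print(f"{bonds_per_kg:.1e} bonds/kg, about {power_kw:.0f} kW of heat")
```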
Of course, you don't want a nanofactory to glow red-hot. And the built-in
computers that control the nanofactory will also generate quite a bit of
heat--perhaps even more than the covalent reactions themselves. So, fluid
cooling looks like a good idea. It turns out that, although the inner features
of a nanofactory will be very small—on the order of one micron—cooling fluid can
be sent for several centimeters down a one-micron channel with only a modest
pressure drop. This means that the physical architecture of the nanofactory will
not need to be adjusted to accommodate variable-sized tree-structured cooling
pipes.
In the years I have spent thinking about nanofactory design, I have not
encountered any problem that could not be addressed with standard engineering.
Of course, engineering in a new domain will present substantial challenges and
require a lot of work. However, it is not safe to assume that some unexpected
problem will arise to delay nanofactory design and development. As work on
enabling technologies progresses, it is becoming increasingly apparent that
nanofactories can be addressed as an integration problem rather than a
fundamental research problem. Although their capabilities seem futuristic, their
technology may be available before most people expect it.
Early Applications
of Molecular Manufacturing
Chris Phoenix, Director of Research, CRN
Molecular manufacturing (MM) will be able to build a wide variety of products --
but only if their designs can be specified.
Recent science essays
have explained some reasons why nanofactory products may be relatively easy to
design in cases where we know what we want, and only need to enter the design
into a CAD program. Extremely dense functionality, strong materials, integrated
computers and sensors, and inexpensive full-product rapid prototyping will
combine to make product design easier.
However, there are several reasons why the design of certain products may be
quite difficult. Backward compatibility, demanding performance requirements,
complex or poorly understood environments, regulations, and lack of imagination
are only a few of the reasons why a broad range of nanofactory products will be
difficult to get right. Some applications will be a lot easier than others.
Products are manufactured for many purposes, including transportation,
recreation, communication, medical care, basic needs, military support, and
environmental monitoring, among others. This essay will consider a few products
in each of these categories, in order to convey a sense of the extent to which
the initial MM revolution, though still profound, may be limited by practical
design problems.
Transportation is simple in concept: merely move objects or people from
one place to another. Efficient and effective transportation is quite a
bit more difficult. Any new transportation system needs to be safe, efficient,
rapid, and compatible with a wide range of existing systems. If it travels on
roads, it will need to comply with a massive pile of regulations. If it uses
installed pathways (future versions of train tracks), space will have to be set
aside for right-of-ways. If it flies, it will have to be extremely safe to
reassure those using it and avoid protest from those underneath.
Despite these problems, MM could produce fairly rapid improvements in
transportation. There would be nothing necessarily difficult about designing a
nanofactory-built automobile that exceeded all existing standards. It would be
very cheap to build, and fairly efficient to operate -- although air resistance
would still require a lot of fuel. Existing airplanes also could be replaced by
nanofactory-built versions, once they were demonstrated to be safe. In both
cases, a great deal of weight could be saved, because the motors would be many
orders of magnitude smaller and lighter, and the materials would be perhaps 100
times as strong. Low-friction skins and other advances would follow shortly.
Molecular manufacturing could revolutionize access to space. Today's rockets can
barely get there; they spend a lot of energy just getting through the
atmosphere, and are not as efficient as they could be. The most efficient rocket
nozzle shape varies as atmospheric pressure decreases, but no one has built a rocket with a variable nozzle. Far more efficient, of course, would be to use an
airplane to climb above most of the atmosphere, as Burt Rutan did to win the X
Prize. But this has never been an option for large rockets. Another problem is
that the cost of building rockets is astronomical: they are basically
hand-built, and they must use advanced technology to minimize weight. This has
caused rocketry to advance very slowly. A single test of a new propulsion
concept may cost hundreds of millions of dollars.
When it becomes possible to build rockets with automated factories, using materials with ten times the strength-to-weight ratio of today's, rockets will become cheap enough to
test by the dozen. Early advances could include disposable airplane components
to reduce fuel requirements; far less weight required to keep a human alive in
space; and far better instrumentation on test flights -- instrumentation built
into the material itself -- making it easier and faster to determine the cause
of failures. It seems likely that the cost of owning and operating a small
orbital rocket might be no more than the cost of owning a light airplane today.
Getting into space easily, cheaply, and efficiently will allow rapid development
of new technologies like high-powered ion drives and solar sails. However, all
this will rely on fairly advanced engineering -- not only for the advanced
propulsion concepts, but also simply for the ability to move through the
atmosphere quickly without burning up.
Recreation is typically an early beneficiary of inventiveness and new
technology. Because many sports involve humans interacting directly with simple
objects, advances in materials can lead to rapid improvements in products. Some
of the earliest products of nanoscale technologies (non-MM nanotechnology)
include tennis rackets and golf balls, and such things will quickly be replaced
by nano-built versions. But there are other forms of recreation as well. Video
games and television absorb a huge percentage of people's time. Better output
devices and faster computers will quickly make it possible to provide users with
a near-reality level of artificial visual and auditory stimulus. However, even
this relatively simple application may be slowed by the need for
interoperability: high-definition television has suffered substantial delays for
this reason.
A third category of recreation is neurotechnology, usually in the form of drugs
such as alcohol and cocaine. The ability to build devices smaller than cells
implies the possibility of more direct forms of neurotechnology. However, safe
and legal uses of this are likely to be quite slow to develop. Even illegal uses
may be slowed by a lack of imagination and understanding of the brain and the
mind. A more mundane problem is that early MM may be able to fabricate only a
very limited set of molecules, which likely will not include neurotransmitters.
Medical care will be a key beneficiary of molecular manufacturing.
Although the human body and brain are awesomely complex, MM will lead to rapid
improvement in the treatment of many diseases, and before long will be able to
treat almost every disease, including most or all causes of aging. The first
aspect of medicine to benefit may be minimally invasive tests. These would carry
little risk, especially if key results were verified by existing tests until the
new technology were proved. Even with a conservative approach, inexpensive
continuous screening for a thousand different biochemicals could give doctors
early indications of disease. (Although early MM may not be able to build a wide
range of chemicals, it will be able to build detectors for many of them.) Such
monitoring also could reduce the consequences of diseases inadvertently caused
by medical treatment by catching the problem earlier.
With full-spectrum continuous monitoring of the body's state of health, doctors
would be able to be simultaneously more aggressive and safer in applying
treatments. Individual, even experimental approaches could be applied to
diseases. Being able to trace the chemical workings of a disease would also help
in developing more efficient treatments for it. Of course, surgical tools could
become far more delicate and precise; for example, a scalpel could be designed
to monitor the type and state of tissue it was cutting through. Today, in
advanced arthroscopic surgery, simple surgical tools are inserted through holes
the size of a finger; a nano-built surgical robot with far more functionality
could be built into a device the width of an acupuncture needle.
In the United States today, medical care is highly regulated, and useful
treatments are often delayed by many years. Once the technology becomes
available to perform continuous monitoring and safe experimental treatments,
either this regulatory system will change, or the U.S. will fall hopelessly
behind other countries. Medical technologies that will be hugely popular with
individuals but may be opposed by some policy makers, including anti-aging,
pro-pleasure, and reproductive technologies, will probably be developed and
commercialized elsewhere.
Basic needs, in the sense of food, water, clothing, shelter, and so on,
will be easy to provide with even minimal effort. All of these necessities,
except food, can be supplied with simple equipment and structures that require
little innovation to develop. Although directly manufacturing food will not be
so simple, it will be easy to design and create greenhouses, tanks, and
machinery for growing food with high efficiency and relatively little labor. The
main limitation here is that, without clever application of existing background knowledge, system development will be delayed by having to wait through many growing cycles. For this reason, systems that incubate separated cells (whether
plant, animal, or algae) may be developed more quickly than systems that grow
whole plants.
The environment already is being impacted as a byproduct of human
activities, but molecular manufacturing will provide opportunities to affect it
deliberately in positive ways. As with medicine, improving the environment will
have to be done with careful respect for the complexity of its systems. However,
also as with medicine, increased ability to monitor large areas or volumes of
the environment in detail will allow the effects of interventions to be known
far more quickly and reliably. This alone will help to reduce accidental damage.
Existing damage that requires urgent remediation will in many cases be able to
be corrected with far fewer side effects.
Perhaps the main benefit of molecular manufacturing for environmental cleanup is
the sheer scale of manufacturing that will be possible when the supply of
nanofactories is effectively unlimited. To deal with invasive species, for
example, it may be sufficient to design a robot that physically collects and/or
destroys the organisms. Once designed and tested, as many copies as required
could be built, then deployed across the entire invaded range, allowed to work
in parallel for a few days or weeks, and then collected. Such systems could be
sized to their task, and contain monitoring apparatus to minimize unplanned
impacts. Because robots would be lighter than humans and have better sensors,
they could be designed to do significantly less damage and require far fewer
resources than direct human intervention. However, robotic navigation software
is not yet fully developed, and it will not be trivial even with million-times
better computers. Furthermore, the mobility and power supply of small robots
will be limited. Cleanup of chemical contamination in soil or groundwater also
may be less amenable to this approach without significant disruption.
Advanced military technology may have an immense impact on our future. It
seems clear that even a modest effort at developing nano-built weapon systems
will create systems that will be able to totally overwhelm today's systems and
soldiers. Even something as simple as multi-scale semi-automated aircraft could
be utterly lethal to exposed soldiers and devastating to most equipment. With
the ability to build as many weapons as desired, and with motors, sensors, and
materials that far outclass biological equivalents, there would be no need to
put soldiers on the battlefield at all. Any military operation that required
humans to accompany its machines would quickly be overcome. Conventional
aircraft could also be out-flown and destroyed with ease. In addition to
offensive weapons, sensing and communications networks with millions if not
billions of distributed components could be built and deployed. Software design
for such things would be far from trivial, however.
It is less clear that a modest military development effort would be able to
create an effective defense against today's high-tech attack systems. Nuclear
explosives would have to be stopped before the explosion, and intercepting or
destroying missiles in flight is not easy even with large quantities of
excellent equipment. Hypersonic aircraft and battle lasers are only now being
developed, and may be difficult to counter or to develop independently without
expert physics knowledge and experience. However, even a near parity of
technology level would give the side with molecular manufacturing a decisive
edge in a non-nuclear exchange, because they could quickly build so many more
weapons.
It is also uncertain what would happen in an arms race between opponents that
both possessed molecular manufacturing. Weapons would be developed very rapidly
up to a certain point. Beyond that, new classes of weapons would have to be
invented. It is not yet known whether offensive weapons will in general be able
to penetrate shields, especially if the weapons of both sides are unfamiliar to
their opponents. If shields win, then development of defensive technologies may
proceed rapidly until all sides feel secure. If offense wins, then a balance of
terror may result. However, because sufficient information may allow any
particular weapon system to be shielded against, there may be an incentive to
continually develop new weapons.
This essay has focused on the earliest applications of molecular manufacturing.
Later developments will benefit from previous experience, as well as from new
software tools such as genetic algorithms and partially automated design. But
even a cursory review of the things we can plan for today and the problems that
will be most limiting early in the technology's history shows that molecular
manufacturing will rapidly revolutionize many important areas of human endeavor.
Molecular
Manufacturing Design Software
Chris Phoenix, Director of Research, CRN
Nanofactories, controlled by computerized blueprints, will be able to build a
vast range of high performance products. However, efficient product design will
require advanced software.
Different
kinds of products will require different approaches to design. Some, such as
high-performance supercomputers and advanced medical devices, will be packed
with functionality and will require large amounts of research and invention. For
these products, the hardest part of design will be knowing what you want to
build in the first place. The ability to build test hardware rapidly and
inexpensively will make it easier to do the necessary research, but that is not
the focus of this essay.
There are
many products that we easily could imagine and that a nanofactory easily could
build if told exactly how. But as any computer programmer knows, it's not easy
to tell a computer what you want it to do — it's more or less like trying to
direct a blind person to cook a meal in an unfamiliar kitchen. One mistake, and
the food is spilled or the stove catches fire.
Computer
users have an easier time of it. To continue the analogy, if the blind person
had become familiar with the kitchen, instructions could be given on the level
of "Get the onions from the left-hand vegetable drawer" rather than "Move your
hand two inches to your right... a bit more... pull the handle... bend down and
reach forward... farther... open the drawer... feel the round things?" It is the
job of the programmer to write the low-level instructions that create appliances
from obstacles.
Another
advantage of modern computers, from the user's point of view, is their input
devices. Instead of typing a number, a user can simply move a mouse, and a
relatively simple routine can translate its motion into the desired number, and
the number into the desired operation such as moving a pointer or a scroll bar.
Suppose I
wanted to design a motorcycle. Today, I would have to do engineering to
determine stresses and strains, and design a structure to support them. The
engineering would have to take into account the materials and fasteners, which
in turn would have to be designed for inexpensive assembly. But these choices
would limit the material properties, perhaps requiring several iterations of
design. And that's just for the frame.
Next, I
would have to choose components for a suspension system, configure an engine,
add an electrical system and a braking system, and mount a fuel tank. Then, I
would have to design each element of the user interface, from the seat to the
handgrips to the lights behind the dials on the instrument panel. Each thing the
user would see or touch would have to be made attractive, and simultaneously
specified in a way that could be molded or shaped. And each component would have
to stay out of the way of the others: the engine would have to fit inside the
frame, the fuel tank might have to be molded to avoid the cylinder heads or the
battery, and the brake lines would have to be routed from the handlebars and
along the frame, adding expense to the manufacturing process and complexity to
the design process.
As I
described in last month's essay, most nanofactory-built human-scale products
will be mostly empty space due to the awesomely high performance of both active
and passive components. It will not be necessary to worry much about keeping
components out of each other's way, because the components will be so small that
they can be put almost anywhere. This means that, for example, the frame can be
designed without worrying where the motor will be, because the motor will be a few-micron-thick layer of nanoscale motors lining the axles. Rather than routing large
hydraulic brake lines, it will be possible to run highly redundant microscopic
signal lines controlling the calipers — or more likely, the regenerative braking
functionality built into the motors.
It will not
be necessary to worry about design for manufacturability. With a planar-assembly
nanofactory, almost any shape can be made as easily as any other, because the
shapes are made by adding sub-micron nanoblocks to selected locations in a
supported plane of the growing product. There will be less constraint on form
than there is in sand casting of metals, and of course far more precision. This
also means that what is built can contain functional components incorporated in
the structure. Rather than building a frame and mounting other pieces later, the
frame can be built with all components installed, forming a complete product.
This does require functional joints between nanoblocks, but this is a small
price to pay for such flexibility.
To specify
functionality of a product, in many cases it will be sufficient to describe the
desired functionality in the abstract without worrying about its physical
implementation. If every cubic millimeter of the product contains a networked
computer — which is quite possible, and may be the default — then to send a
signal from point A to point B requires no more than specifying the points.
Distributing energy or even transporting materials may not require much more
attention: a rapidly rotating diamond shaft can transport more than a watt per
square micron, and would be small enough to route automatically through almost
any structure; pipes can be made significantly smaller if they are configured
with continually inverting liners to reduce drag.
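The shaft figure can be sanity-checked with the usual scaling estimate, power per unit cross-section of roughly the transmitted shear stress times the surface speed; both numbers in this Python sketch are my illustrative assumptions, not values from this essay:

```python
# Sanity check of the watt-per-square-micron drive-shaft claim.
# Power through a shaft cross-section scales as shear stress x surface speed.
shear_stress = 1e10    # Pa (~10 GPa), assumed; well below diamond's strength
surface_speed = 100.0  # m/s, assumed shaft surface velocity

power_density = shear_stress * surface_speed        # W/m^2
print(f"{power_density * 1e-12:.1f} W per square micron")   # -> 1.0
```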
Thus, to
design the acceleration and braking behavior of the motorcycle, it might be
enough to specify the desired torque on the wheels as a function of speed, tire
skidding, and brake and throttle position. A spreadsheet-like interface could
calculate the necessary power and force for the motors, and from that derive the
necessary axle thickness. The battery would be fairly massive, so the user would
position it, but might not have to worry about the motor-battery connection, and
certainly should not have to design the motor controller.
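Here is a minimal Python sketch of what such a spreadsheet-like step might compute; every number, and the standard solid-shaft shear formula, are illustrative assumptions rather than anything specified in this essay:

```python
# Hypothetical "spreadsheet" step: size motor power and axle from a torque spec.
import math

wheel_torque = 200.0   # N*m, desired peak torque at the rear wheel
wheel_radius = 0.3     # m
top_speed = 40.0       # m/s, about 145 km/h

omega = top_speed / wheel_radius    # wheel angular speed, rad/s
peak_power = wheel_torque * omega   # W the motor array must supply

# Solid-shaft sizing from the max-shear formula tau = 16*T / (pi * d^3):
tau_allow = 50e6                    # Pa, a deliberately conservative allowable
diameter = (16 * wheel_torque / (math.pi * tau_allow)) ** (1 / 3)

print(f"peak power: {peak_power / 1e3:.1f} kW")   # ~26.7 kW
print(f"axle diameter: {diameter * 1e3:.0f} mm")  # ~27 mm at this low stress
```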
In order to
include high-functionality materials such as motor arrays or stress-reporting
materials, it would be necessary to start with a library of well-characterized
"virtual materials" with standard functionality. This approach could
significantly reduce the functional density of the virtual material compared to
what would be possible with a custom-designed solution, but this would be
acceptable for many applications, because functional density of nano-built
equipment may be anywhere from six to eighteen orders of magnitude better than
today's equipment. Virtual materials could also be used to specify material
properties such as density and elasticity over a wide range, or implement active
materials that changed attributes such as color or shape under software control.
Prototypes
as well as consumer products could be heavily instrumented, warning of
unexpected operating conditions such as excessive stress or wear on any part.
Rather than careful calculations to determine the tradeoff between weight and
strength, it might be better to build a first-guess model, try it on
increasingly rough roads at increasingly high speeds, and measure rather than
calculate the required strength. Once some parameters had been determined, a new
version could be spreadsheeted and built in an hour or so at low cost. It would
be unnecessary to trade time for money by doing careful calculations to minimize
the number of prototypes. Then, for a low-performance application like a
motorcycle, the final product could be built ten times stronger than was thought
to be necessary without sacrificing much mass or cost.
There are
only a few sources of shape requirements. One is geometrical: round things roll,
flat things stack, and triangles make good trusses. These shapes tend to be
simple to specify, though some applications like fluid handling can require
intricate curves. The second source of shape is compatibility with other shapes,
as in a piece that must fit snugly to another piece. These shapes can frequently
be input from existing databases or scanned from an existing object. A third
source of shape is user preference. A look at the shapes of pen barrels, door
handles, and eyeglasses shows that users are pleased by some pretty
idiosyncratic shapes.
To input
arbitrary shapes into the blueprint, it may be useful to have some kind of
interface that implements or simulates a moldable material like clay or taffy. A
blob could simply be molded or stretched into a pleasing shape. Another useful
technique could be to present the designer or user with several variations on a
theme, let them select the best one, and build new variations on that until a
sufficiently pleasing version is produced.
Although there is
more to product design than the inputs described here, this should give some
flavor of how much more convenient it could be with computer-controlled rapid
prototyping of complete products. Elegant computer-input devices, pervasive
instrumentation and signal processing, virtual material libraries, inexpensive
creation of one-off spreadsheeted prototypes, and several other techniques could
make product design more like a combination of graphic arts and computer
programming than the complex, slow, and expensive process it is today.
Fast Development of
Nano-Manufactured Products
Chris Phoenix, Director of Research, Center for Responsible Nanotechnology
The extremely high performance of the products of
molecular manufacturing will make the technology transformative—but it is
the potential for fast development that will make it truly disruptive. If
it took decades of research to produce breakthrough products, we would have time
to adjust. But if breakthrough products can be developed quickly, their
effects can pile up too quickly to allow wise policymaking or adjustment. As if
that weren't bad enough, the anticipation of rapid development could cause
additional problems.
How quick is "quickly"? Given a programmable factory that can make a product
from its design file in a few hours, a designer could create a newly improved
version every day. Today, building prototypes of a product can take weeks, so
designers have to take extra time to double-check their work. If building a
prototype takes less than a day, it will often be more efficient to build and
test the product rather than taking time to double-check the theoretical design.
(Of course, if taken to extremes, this can encourage sloppy work that costs more
time to fix in the long run.)
In addition to being faster, prototyping also would be far cheaper. A
nanofactory would go
through the same automated operations for a single prototype copy as for a
production run, so the prototype should cost no more per unit than the final
product. That's quite a contrast with today, where rapid prototyping can cost
thousands of dollars per component. And it means that destructive testing will
be far less painful. Let's take an example. Today, a research rocket might cost
hundreds of dollars to fuel, but hundreds of thousands to build. At that rate,
tests must be held to a minimum number, and expensive and time-consuming efforts
must be made to eliminate all possible sources of failure and gather as much
data as possible from each test. But if the rocket cost only hundreds of dollars
to build—if a test flight cost less than $1000, not counting support infrastructure—then tests could be run as often as convenient, and far less of that infrastructure would be needed, saving costs there as well. The savings ripple out:
with less at stake in every test, designers could use more advanced and less
well-proved technologies, some of which would fail but others of which would
increase performance. Not only would the product be developed faster, but it
also would be more advanced, and have a lot more testing.
The equivalence between prototype and production manufacturing has an additional
benefit. Today, products must be designed for two different manufacturing
processes—prototyping and scaled-up production. Ramping up production has its
own costs, such as rearranging production lines and training workers. But with
direct-from-blueprint building, there would be no need to keep two designs in
mind, and also no need to expend time and money ramping up production. When a
design was finalized, it could immediately be shipped to as many nanofactories
as desired, to be built efficiently and almost immediately. (For those just
joining us, the reason nanofactories won't be scarce is that a nanofactory would
be able to build another nanofactory on command, needing only data and supplies
of a few refined chemicals.) A product design isn't really proved until people
buy it, and rolling out a new product is expensive and risky today—after
manufacture, the product must be shipped and stored in quantity, waiting for
people to buy it. With last-minute nanofactory manufacturing, the product
rollout cost could be much lower, reducing the overhead and risk of
market-testing new ideas.
There are several other technical reasons why products could be easier to
design. Today's products are often crammed full of functionality, causing severe
headaches for designers trying to make one more thing fit inside the package.
Anyone who's looked under the hood of a 1960 station wagon and compared it with
a modern car's engine, or studied the way chips and wires are packed into every
last nook and cranny of a cell phone, knows how crowded products can get. But
molecular manufactured products will be many orders of magnitude more compact;
this is true for sensors, actuators, data processing, energy transformation, and
even physical structure. What this means is that any human-scale product will be
almost entirely empty space. Designers will be able to include functions without
worrying much about where they will physically fit into the product. This
ability to focus on function will simplify the designer's task.
The high performance of molecularly precise nanosystems also means that
designers can afford to waste a fair amount of performance in order to simplify
the design. For example, instead of using a different size of motor for every
different-sized task, designers might choose from only two or three standard
sizes that might differ from each other by an order of magnitude or more. In
today's products, using a thousand-watt motor to do a hundred-watt motor's job
would be costly, heavy, bulky, and probably an inefficient use of energy
besides. But nano-built motors have been calculated to have at least a million times the power density of today's. That thousand-watt motor would shrink to the size of a grain
of sand. Running it at low power would not hurt its efficiency, and it wouldn't
be in danger of overheating. It wouldn't cost significantly more to build than a
carefully-sized hundred-watt motor. And at that size, it could be placed
wherever in the product was most convenient for the designer.
Another potential advantage of having more performance than needed is that
design can be performed in stages. Instead of planning an entire product at
once, integrated from top to bottom, designers could cobble together a product
from a menu of lower-level solutions that were already designed and understood.
For example, instead of a complicated system with lots of custom hardware to be
individually specified, designers could find off-the-shelf modules that had more
features than required, string them together, and tweak their specifications or
programming to configure their functionality to the needed product—leaving a lot
of other functionality unused. Like the larger-than-necessary motor, this
approach would include a lot of extra stuff that was put in simply to save the
designer's time; however, including all that extra stuff would cost almost
nothing. This approach is used today in computers. A modern computer spends at
least 99% of its time and energy on retroactively saving time for its designers.
In other words, the design is horrendously inefficient, but because computer
hardware is so extremely fast, it's better to use trillions of extra
calculations than to pay the designer even $10 to spend time on making the
program more efficient. A modern personal computer does trillions of
calculations in a fraction of an hour.
Modular design depends on predictable modules—things that work exactly as
expected, at least within the range of conditions they are used in. This is
certainly true in computers. It will also be true in molecular manufacturing,
thanks to the digital nature of covalent bonds. Each copy of a design that has
the same bond patterns between the atoms will have identical behavior. What this
means is that once a modular design is characterized, designers can be quite
confident that all subsequent copies of the design will be identical and
predictable. (Advanced readers will note that isotopes can make a difference in
a few cases, but isotope number is also discrete and isotopes can be sorted
fairly easily as necessary to build sensitive designs. Also, although radiation
damage can wipe out a module, straightforward redundancy algorithms will take
care of that problem.)
With all these advantages, development of nano-built products, at least to the
point of competing with today's products, appears to be easier in some important
ways than was development of today's products. It's worth spending some thought
on the implications of that. What if the military could test-fire a new missile
or rocket every day until they got it right? How fast would the strategic
balance of power shift, and what is the chance that the mere possibility of such
a shift could lead to pre-emptive military strikes? What if doctors could build
new implanted sensor arrays as fast as they could find things to monitor, and
then use the results to track the effects of experimental treatments (also
nano-built rapid-prototyped technology) before they had a chance to cause
serious injury? Would this enable doctors to be more aggressive—and
simultaneously safer—in developing new lifesaving treatments? If new versions of
popular consumer products came out every month—or even every week—and consumers
were urged to trade up at every opportunity, what are the environmental
implications? What if an arms race developed between nations, or between police
and criminals? What if products of high personal desirability and low social
desirability were being created right and left, too quickly for society to
respond? A technical essay is not the best place to get into these questions,
but these issues and more are directly raised by the possibility that molecular
manufacturing nanofactories will open the door to true rapid prototyping.
Sudden Development of
Molecular Manufacturing
Chris Phoenix, Director of Research, Center for Responsible Nanotechnology
Development of molecular manufacturing technology probably will not be gradual,
and will not allow time to react to incremental improvements. It is often
assumed that development must be gradual, but there are several points at which
minor improvements to the technology will cause massive advances in capability.
In other words, at some points, the capability of the technology can advance
substantially without breakthroughs or even much R&D.
These jumps in capability could happen quite close together, given the
pre-design that a well-planned development program would certainly do. Advancing
from laboratory demos all the way to megatons of easily designed, highly
advanced products in a matter of months appears possible. Any policy that will
be needed to deal with the implications of such products must be in place before
the advances start.
The first jump in capability is exponential manufacturing. If a manufacturing
system can build an identical copy, then the number of systems, and their mass
and productivity, can grow quite rapidly. However, the starting point is quite
small; the first device may be one million-billionth of a gram (about 100 nanometers across).
It will take time for even exponential growth to produce a gram of manufacturing
systems. If a copy can be built in a week, then it will take about a year to
make the first gram. A better strategy will be to spend the next ten months in R&D
to reduce the manufacturing time to one day, at which point it will take less
than two months to make the first gram. And at that point, expanding from the
first gram to the first ton will take only another three weeks.
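The growth arithmetic in this paragraph can be verified directly (a minimal Python sketch of the doubling counts):

```python
# Growth-timeline arithmetic from the paragraph above.
import math

doublings_to_gram = math.log2(1.0 / 1e-15)  # ~50, from 10^-15 g to 1 g
doublings_to_ton = math.log2(1e6)           # ~20 more, from 1 g to 10^6 g

print(f"weekly copies: {doublings_to_gram * 7 / 365:.2f} years to a gram")
print(f"daily copies: {doublings_to_gram:.0f} days to a gram")
print(f"then another {doublings_to_ton:.0f} days from a gram to a ton")
```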
It's worth pointing out here that nanoscale machinery is vastly more powerful
than larger machinery. When a machine shrinks, its power density and functional
density improve. Motors could be a million times more powerful than today's;
computers could be billions of times more compact. So a ton of nano-built stuff
is a lot more powerful than a ton of conventional product. Even though the
products of tiny manufacturing systems will themselves be small, they will
include computers and medical devices. A single kilogram of nanoscale computers
would be far more powerful than the sum of all computers in existence today.
The second jump in capability is nanofactories—integrated manufacturing systems
that can make large products with all the advantages of precise nanoscale
machinery. It turns out that nanofactory design can be quite simple and
scalable, meaning that it works the same regardless of the size. Given a
manufacturing system that can make sub-micron blocks ("nanoblocks"), it doesn't
take a lot of additional work to fasten those blocks together into a product. In
fact, a product of any size can be assembled in a single plane, directly from
blocks small enough to be built by single nanoscale manufacturing systems,
because assembly speed increases as block size decreases. Essentially, a
nanofactory is just a thin sheet of manufacturing systems fastened side by side.
That sheet can be as large as desired without needing a re-design, and the low
overhead means that a nanofactory can build its own mass almost as fast as a
single manufacturing system. Once the smallest nanofactory has been built,
kilogram-scale and ton-scale nanofactories can follow in a few weeks.
The third jump in capability is product design. If it required a triple Ph.D. in
chemistry, physics, and engineering to design a nanofactory product, then the
effects of nanofactories would be slow to develop. But if it required a triple
Ph.D. in semiconductor physics, digital logic, and operating systems to write a
computer program, the software industry would not exist. Computer programming is
relatively easy because most of the complexity is hidden—encapsulated and
abstracted within simple, elegant high-level commands. A computer programmer can
invoke billions of operations with a single line of text. In the case of
nanofactory product design, a good place to hide complexity is within the
nanoblocks that are fastened together to make the product. A nanoblock designer
might indeed need a triple Ph.D. However, a nanoblock can contain many millions
of features—enough for motors, a CPU, programmable networking and connections,
sensors, mechanical systems, and other high-level components.
Fastening a few types of nanoblocks together in various combinations could make
a huge range of products. The product designer would not need to know how the
nanoblocks worked—only what they did. A nanoblock is quite a bit smaller than a
single human cell, and a planar-assembly nanofactory would impose few limits on
how they were fastened together. Design of a product could be as simple as
working with a CAD program to specify volumes to be filled and areas to be
covered with different types of nanoblocks.
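As a purely hypothetical illustration of how simple that specification could be, here is a Python sketch of a product described as volumes filled with named nanoblock types; the format and the block names are invented for this example:

```python
# Hypothetical sketch of nanoblock-level CAD: a product as a list of volumes,
# each filled with a named block type. Format and names are invented here.
from dataclasses import dataclass

@dataclass
class Region:
    corner_min: tuple  # (x, y, z), millimeters
    corner_max: tuple
    block_type: str    # name drawn from a pre-designed nanoblock library

product = [
    Region((0, 0, 0), (100, 100, 2), "structural_truss"),  # stiff base plate
    Region((0, 0, 2), (100, 100, 3), "motor_array"),       # actuation layer
    Region((40, 40, 3), (60, 60, 4), "cpu_cluster"),       # embedded computing
]

for r in product:
    dims = [hi - lo for lo, hi in zip(r.corner_min, r.corner_max)]
    print(f"{r.block_type}: {dims[0] * dims[1] * dims[2]:.0f} mm^3")
```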
Because the internal design of nanoblocks would be hidden from the product
designer, nanoblock designs could be changed or improved without requiring
product designers to be retrained. Nanoblocks could be designed at a functional
level even before the first nanofactory could be built, allowing product
designers to be trained in advance. Similarly, a nanofactory could be designed
in advance at the nanoblock level. Although simple design strategies will cost
performance,
scaling laws indicate that molecular-manufactured machinery will have
performance to burn. Products that are revolutionary by today's standards,
including the nanofactory itself, could be significantly less complex than
either the software or the hardware that makes up a computer—even a 1970's-era
computer.
The design of an exponential molecular manufacturing system will include many of
the components of a nanofactory. The design of a nanofactory likewise will
include components of a wide range of products. A project to achieve exponential
molecular manufacturing would not need much additional effort to prepare for
rapid creation of nanofactories and their highly advanced products.
Sudden availability of advanced products of all sizes in large quantity could be
highly disruptive. It would confer a large military advantage on whoever got it
first, even if only a few months ahead of the competition. This implies that
molecular manufacturing technology could be the focus of a high-stakes arms
race. Rapid design and production of products would upset traditional
manufacturing and distribution. Nanofactories would be simple enough to be
completely automated—and with components small enough that this would be
necessary. Complete automation implies that they will be self-contained and easy
to use. Nanofactory-built products, including nanofactories themselves, could be
as hard to regulate as Internet file-sharing. These and other problems imply
that wise policy, likely including some global-scale policy, will be needed to
deal with molecular manufacturing. But if it takes only months to advance from
100-nanometer manufacturing systems to self-contained nanofactories and
easily-designed revolutionary products, there will not be time to make wise
policy once exponential manufacturing is achieved. We will have to start ahead
of time.
Molecular Manufacturing vs. Tiny
Nanobots
Chris Phoenix, Director of Research, Center for Responsible Nanotechnology
A few days ago, a high-ranking official of the National Nanotechnology
Initiative told me that statements against "nanobots" on their website had been
intended to argue against three-nanometer devices that could build anything.
This is frustrating, because no one has proposed such devices.
A three-nanometer cube would contain a few thousand atoms. This is about the
right size for a single component, such as a switch or gear. No one has
suggested building an entire robot in such a tiny volume. Even ribosomes, the
protein-constructing machinery of cells, are more like 30 nanometers. A
mechanical molecular fabrication system might be closer to 100 or 200
nanometers. That's still small enough to be built molecule-by-molecule in a few
seconds, but large enough to contain thousands or millions of components.
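That atom count is easy to check. A back-of-envelope estimate, assuming
diamond's atomic density of about 1.76x10^23 atoms per cubic centimeter:

    # Atoms in a 3 nm cube of a dense covalent solid (diamond assumed;
    # the density figure is a standard textbook value).
    atoms_per_cm3 = 1.76e23
    side_cm = 3e-7                  # 3 nm expressed in centimeters
    volume_cm3 = side_cm ** 3       # 2.7e-20 cm^3
    print(f"{atoms_per_cm3 * volume_cm3:.0f} atoms")   # about 4800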
Nanosystems a few hundred nanometers in size are convenient for several other
reasons. They are small enough to be built error-free, and remain error-free for
months or years despite background radiation. They are large enough to be
handled mechanically with high efficiency and speed. They are smaller than a
human cell. They are large enough to contain a complete CPU or other useful
package of equipment. So it seems likely that designs for molecular
manufacturing products and nanofactories will be based on components of this
size.
So much for size. Let's look at the other half of that straw man, the part about
"could build anything." There has been a persistent idea that molecular
manufacturing proposes, and depends on, devices that can build any desired
molecule. In fact, such devices have never been proposed. The idea probably
comes from a misinterpretation of a section heading in Drexler's early book
Engines of Creation.
The
section in question talked about designing and building a variety of
special-purpose devices to build special molecular structures: "Able to tolerate
acid or vacuum, freezing or baking, depending on design, enzyme-like
second-generation machines will be able to use as 'tools' almost any of the
reactive molecules used by chemists -- but they will wield them with the
precision of programmed machines. They will be able to bond atoms together in
virtually any stable pattern, adding a few at a time to the surface of a
workpiece until a complex structure is complete. Think of such nanomachines as
assemblers."
Unfortunately, the section was titled "Universal Assemblers." This was misread
as referring to a single "universal" assembler, rather than a collective
capability of a large number of special-purpose machines. But there is not, and
never was, any proposal for a single universal assembler. The phrase has always
been plural.
The development of molecular manufacturing theory has in fact moved in the
opposite direction. Instead of planning for systems that can do a very broad
range of molecular fabrication, the latest designs aim to do just a few
reactions. This will make it easier to develop the reactions and analyze the
resulting structures.
Another persistent but incorrect idea that has attached itself to molecular
manufacturing is the concept of "disassemblers." According to popular belief,
tiny nanomachines will be able to take apart anything and turn it into raw
materials. In fact, disassemblers, as
described in Engines, have a far more mundane purpose: "Assemblers
will help engineers synthesize things; their relatives, disassemblers, will help
scientists and engineers analyze things." In other words, disassemblers are a
research tool, not a source of feedstock.
Without universal assemblers and disassemblers, molecular manufacturing is
actually pretty simple. Manufacturing systems built on a 100-nanometer scale
would convert simple molecular feedstock into machine parts with fairly simple
molecular structure—but, just as simple bricks can be used to build a wide
variety of buildings, the simple molecular structure could serve as a backbone
for rather intricate shapes. The manufacturing systems as well as their products
would be built out of modules a few hundred nanometers in size. These modules
would be fastened together to make large systems.
As I explained in my recent 50-page paper, "Molecular
Manufacturing: What, Why, and How," recent advances in theory have shown
that a planar layout for a nanofactory system can be scaled to any size,
producing about a kilogram per square meter per hour. Since the factory would
weigh about a kilogram per square meter, and could build a larger factory by
extruding it edgewise, manufacturing capacity can be doubled and redoubled as
often as desired. The implications of non-scarce and portable manufacturing
capacity, as well as the high performance, rapid fabrication, and low cost of
the products, are far beyond the scope of this essay. In fact, studying and
preparing for these implications is the reason that CRN exists.
Protein
Springs and Tattoo Needles—Work in progress at CRN
Chris Phoenix, Director of Research, Center for Responsible Nanotechnology
This month's science essay will be a little different. Rather than explaining
how a known aspect of the nanoscale works, I'll provide a description of my
recent research activities and scientific thinking. I'll explain what the ideas
are, where the inspirations came from, and what they might mean. This is a view
"behind the scenes" of CRN. As always, I welcome comments and questions.
=========
I'm currently investigating two topics. One is how to make the simplest possible
nanoscale molecular manufacturing system. I think I've devised a version that
can be developed with today's technology, but can be improved incrementally to
approach the tabletop diamondoid nanofactory that is the major milestone of
molecular manufacturing. The other topic is how proteins work. I think I've had
an insight that solves a major mystery: how protein machines can be so
efficient. And if I'm right, it means that natural protein machines have
inherent performance limitations relative to artificial machines.
I'll talk about the proteins first. Natural proteins can do things that we can't
yet even begin to design into artificial proteins. And although we can imagine
and even design machines that do equivalent functions using other materials, we
can't build them yet. Although I personally don't expect proteins to be on the
critical path to molecular manufacturing, some very smart people do, both within
and outside the molecular manufacturing community. And in any case, I want to
know how everything at the nanoscale works.
One of the major questions about protein machines is how they can be so
efficient. Some of them, like ATP synthase, are nearly 100% efficient. ATP
synthase has a fairly complex job: it has to move protons through a membrane,
while simultaneously converting molecules of ADP to ATP. That's a pump and an
enzyme-style chemical reaction--very different kinds of operation--linked
together through a knobby floppy molecule, yet the system wastes almost no
energy as it transfers forces and manipulates chemicals. A puzzle, to be sure:
how can something like a twisted-up necklace of different-sized soft rubber
balls be the building material for a highly sophisticated machine?
I've been thinking about that in the back of my mind for a few months. I do that
a lot: file some interesting problem, and wait for some other random idea to
come along and provide a seed of insight. This time, it worked. I have been
thinking recently about entropy and springiness, and I've also been thinking
about what makes a nanoscale machine efficient. And suddenly it all came
together.
A nanoscale machine is efficient if its energy is balanced at each point in its
action. In other words, if a motion is "downhill" (the machine has less energy
at the end of the motion) then that energy must be transferred to something that
can store it, or else it will be lost as heat. If a motion is "uphill" (requires
energy) then that energy must be supplied from outside the machine. So a machine
with large uphills and downhills in its energy-vs.-position trajectory will
require a lot of power for the uphills, and will waste it on the downhills. A
machine with sufficiently small uphills and downhills can be moved back and
forth by random thermal motion, and in fact, many protein machines are moved
this way.
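A minimal numerical sketch of that bookkeeping (the energy profile is made up):
every uphill step must be supplied from outside, and every downhill step that
is not captured is dissipated as heat.

    # Energy bookkeeping along a machine's trajectory. The profile below
    # is an arbitrary made-up example, in units of kT.
    profile = [0.0, 0.8, 0.3, 1.1, 0.2, 0.9, 0.0]   # energy vs. position

    supplied = sum(max(b - a, 0) for a, b in zip(profile, profile[1:]))
    dissipated = sum(max(a - b, 0) for a, b in zip(profile, profile[1:]))
    print(f"energy supplied on uphills: {supplied:.1f} kT")
    print(f"energy lost on downhills:   {dissipated:.1f} kT")
    # A flat profile (all steps near zero) needs almost no input and wastes
    # almost nothing -- and can even be driven by thermal motion alone.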
A month or so ago, I read an article on ATP synthase in which the researchers
claimed that the force must be constant over the trajectory, or the machine
couldn't be efficient. I thought about it until I realized why this was true. So
the question to be answered was, how was the force so perfectly balanced? I knew
that proteins wiggled and rearranged quite a bit as they worked. How could such
a seemingly ad-hoc system be perfectly balanced at each point along its
trajectory?
As I said, I have been thinking recently about entropic springs. Entropy, in
this application, means that nanoscale objects (including molecular fragments)
like to have freedom to wiggle. A stringy molecule that is stretched straight
will not be able to wiggle. Conversely, given some slack, the molecule will coil
and twist. The more slack it has, the more different ways it can twist, and the
happier it will be. Constraining these entropic wiggles, by stretching a string
or squashing a blob, costs energy. At the molecular scale, this effect is large;
it turns out that entropic springiness, and not covalent bond forces, is the
main reason why latex rubber is springy. This means that any nanoscale wiggly
thing can function as an entropic spring. I sometimes picture it as a tumbleweed
with springy branches--except that there is only one object (for example, a
stringy molecule) that wiggles randomly into all the different branch positions.
Sometimes I compare it to a springy cotton ball.
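For a concrete number, the standard ideal-chain model gives a floppy strand of
N segments of length b a spring constant of 3kT/(N b^2)--directly proportional
to temperature. Evaluating it with illustrative segment values:

    # Entropic spring constant of an ideal (freely jointed) chain:
    #   k_spring = 3 * kB * T / (N * b**2)
    # N and b below are illustrative values for a short floppy side chain.
    kB = 1.38e-23        # Boltzmann constant, J/K
    T = 300.0            # room temperature, K
    N = 20               # number of freely jointed segments
    b = 0.5e-9           # segment length, m
    k_spring = 3 * kB * T / (N * b**2)
    print(f"k = {k_spring:.2e} N/m")   # ~2.5e-3 N/m, and proportional to T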
One Saturday morning I happened to be thinking simultaneously about writhing
proteins, entropic springs, and efficient machines. I suddenly realized, as I
thought about the innards of a protein rearranging themselves like a nest of
snakes, that installing lots of entropic springs in the middle of that complex
environment would provide lots of adjustable parameters to balance whatever
force the machine's function generated. Because of the complex structural
rearrangement of the protein, each spring would affect a different fraction of
the range of motion. Any uphills and downhills in its energy could be smoothed
out.
Natural protein machines are covered and filled with floppy bits that have no
obvious structural purpose. However, each of those bits is an entropic spring.
As the machine twists and deforms, its various springs are compressed or allowed
to expand. An entropic spring only has to be attached at one point; it will
press against any surface that happens to come into its range. Compressing the
spring takes energy and requires force; releasing the spring will recover the
energy, driving the machine forward.
As soon as I had that picture, I realized that each entropic spring could be
changed independently, by blind evolution. By simply changing the size of the
molecule, its springiness would be modified. If a change in a spring increased
the efficiency of the machine, it would be kept. The interior reconfiguration of
proteins would provide plenty of different environments for the springs--plenty
of different variables for evolution to tweak.
Always before, when I had thought about trying to design a protein for
efficiency and effectiveness, I had thought about its backbone--the molecular
chain that folds up to form the structure. This is large, clumsy, and soft--not
suitable for implementing subtle energy balancing. It would be very hard (no pun
intended) to design a system of trusses, using protein backbones and their
folded structure, that could implement the right stiffness and springiness to
balance the energy in a complex trajectory. But the protein's backbone has lots
of dangling bits attached. The realization that each of those was an entropic
spring, and each could be individually tuned to adjust the protein's energy at a
different position, made the design task suddenly seem easy.
The task could be approached as: 1) Build a structure to perform the protein's
function without worrying about efficiency and energy balance. Make it a large
structure with a fair amount of internal reconfiguration (different parts having
different relative orientations at different points in the machine's motion); 2)
Attach lots of entropic springs all over the structure; 3) Tune the springs by
trial and error until the machine is efficient--until the energy stored by
pressure on the myriad springs exactly balances the energy fluctuations that
result from the machine's functioning.
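Step 3 is exactly the kind of optimization that blind trial and error does
well. A toy sketch, using a made-up model in which each spring contributes a
different random weight at each point of the trajectory:

    # Toy version of step 3: tune spring strengths by random trial and
    # error until the machine's energy profile is flat. The "machine" is
    # a made-up model: spring i contributes weights[i][x] of its strength
    # to the energy at trajectory position x.
    import random
    random.seed(0)

    positions = 8
    base = [0.0, 0.8, 0.3, 1.1, 0.2, 0.9, 0.5, 0.0]   # unbalanced profile, kT
    weights = [[random.random() for _ in range(positions)] for _ in range(12)]
    springs = [0.0] * 12                               # tunable strengths

    def roughness(springs):
        energy = [base[x] + sum(s * w[x] for s, w in zip(springs, weights))
                  for x in range(positions)]
        return max(energy) - min(energy)               # uphill/downhill spread

    for _ in range(20000):                             # blind evolution
        trial = springs[:]
        i = random.randrange(12)
        trial[i] = max(0.0, trial[i] + random.uniform(-0.05, 0.05))
        if roughness(trial) < roughness(springs):      # keep improvements
            springs = trial

    print(f"residual roughness: {roughness(springs):.3f} kT")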
I proposed this idea to a couple of expert nanoscale scientists--a molecular
manufacturing theorist and a physicist. And I learned a lot. One of the experts
said that he had not previously seen the observation that adding lots of springs
made it easier to fine-tune the energy accurately. That was pretty exciting. I
learned that proteins do not usually disfigure themselves wildly during their
operation--interior parts usually just slip past each other a bit. I watched
some movies of proteins in action, and saw that they still seemed to have enough
internal structural variation to cause different springs to affect different
regions of the motion trajectory. So, that part of the idea still seems right.
I had originally been thinking in terms of the need to balance forces; I learned
that energy is a slightly more general way to think about the problem. But in
systems like these, force is just the derivative of energy with respect to
position, so my theory translated perfectly well into a viewpoint in terms of
energy. It turned out
that one of my experts had studied genetic algorithms, and he warned that there
is no benefit to increasing the number of evolvable variables in the system if
the number of constraints increases by the same number. I hadn't expected that,
and it will take more theoretical work to verify that adding extra structures in
order to stick more entropic springs on them is not a zero-sum game. But my
preliminary thinking says that one piece of structure can have lots of springs,
so adding extra structures is still a win.
The other expert, the physicist, asked me how much of the effect comes from
entropic springiness vs. mechanical springiness. That's a very good question. I
realized that there is a measurable difference between entropic springs and
mechanical (covalent bond) springs: the energy stored by an entropic spring is
directly proportional to the temperature. If a machine's efficiency depends on
fine-tuning of entropic springs, then changing the temperature should change all
the spring constants and destroy the delicate energy balance that makes it
efficient. I made the prediction, therefore, that protein machines would have a
narrow temperature range in which they would be efficient. Then I thought a bit
more and modified this. A machine could use a big entropic spring as a
thermostat, forcing itself into different internal configurations at each
temperature, and fine-tuning each configuration separately. This means that a
machine with temperature-sensitive springs could evolve to be insensitive to
temperature. But a machine that evolved at a constant temperature, without this
evolutionary pressure, should be quite sensitive to temperature.
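The logic can be made quantitative with a one-line model (numbers made up):
springs tuned to cancel an energy bump at T0 will miss the cancellation in
proportion to the temperature shift, because their stored energy scales with T.

    # Springs tuned to cancel a 1.0 kT energy bump at T0. Since entropic
    # spring energy scales linearly with temperature, the cancellation
    # fails as T moves away from T0 (illustrative numbers only).
    T0 = 277.0                        # tuned near 4 C, a constant-temperature habitat
    bump = 1.0                        # machine's intrinsic energy bump, kT at T0
    for T in (271, 277, 283, 295):
        spring = -bump * (T / T0)     # spring contribution scales with T
        imbalance = abs(bump + spring)
        print(f"T = {T} K: residual barrier {imbalance:.3f} kT")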
After thinking this through, I did a quick web search for the effect of
temperature on protein activity. I quickly found
a page containing a sketch of enzyme activity vs. temperature for various
enzymes. Guess what--the enzyme from Arctic shrimp has maximum activity around
4 degrees C, and mostly stops working just a few degrees higher. That looks like
confirmation of my theory.
That web page, as well as another one, says that enzymes stop working at
elevated temperatures due to denaturation--change in three-dimensional
structure brought on by breaking of weak bonds in the protein. The
other web page also asserts that the rate of enzyme activity, "like all
reactions," is governed by the Arrhenius equation, at least up to the point
where the enzyme starts to denature. The Arrhenius equation says that if an
action requires thermal motion to jump across an energy barrier, the rate of the
action increases as a simple exponential function of temperature. But this
assumes that the height of the barrier is not dependent on temperature. If the
maintenance of a constant energy level (low barriers) over the range of the
enzyme's motion requires finely tuned, temperature dependent mechanisms, then
spoiling the tuning--by a temperature change in either direction--will decrease
the enzyme's rate.
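In symbols, the Arrhenius rate is k = A exp(-Ea/kBT), which rises smoothly with
temperature only if the barrier Ea is fixed. The sketch below contrasts that
with a barrier that grows as the tuning is spoiled in either direction; the
quadratic mistuning penalty is my own illustrative assumption, not a measured
quantity:

    # Arrhenius rate k = A * exp(-Ea / (kB * T)) with (a) a fixed barrier
    # and (b) a barrier that grows as temperature moves away from the
    # tuned point T0 (the quadratic penalty is an illustrative guess).
    import math
    kB = 1.0            # work in units where Ea is expressed in kelvin (Ea/kB)
    A = 1.0
    T0 = 277.0

    def rate(Ea, T):
        return A * math.exp(-Ea / (kB * T))

    for T in (271, 277, 283, 295):
        fixed = rate(Ea=1500.0, T=T)                   # plain Arrhenius: rises with T
        mistuned = 1500.0 + 2.0 * (T - T0) ** 2        # detuned entropic springs
        tuned = rate(mistuned, T)
        print(f"T={T}K  fixed-barrier={fixed:.2e}  tuned-enzyme={tuned:.2e}")
    # The fixed barrier gives a monotonic rise; the tuned enzyme peaks at
    # T0 and slows in both directions, matching the prediction above.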
I'll go out on a limb and make a testable prediction. I predict that many
enzymes that are evolved for operation in constant or nearly constant
temperature will have rapid decrease of activity at higher and lower
temperatures, even without structural changes. When the physical structure of
some of these supposedly denatured enzymes is examined, it will be found that
the enzyme is not in fact denatured: its physical structure will be largely
unchanged. What will be changed is the springiness of its entropic springs.
If I am right about this, there are several consequences. First, it appears that
the design of efficient protein machines may be easier than is currently
believed. There's no need to design a finely-tuned structure (backbone). Design
a structure that barely works, fill it with entropic springs, and fine-tune the
springs by simple evolution. Analysis of existing proteins may also become
easier. The Arrhenius equation should not apply to a protein that uses entropic
springs for energy balancing. If Arrhenius is being misapplied, then abandoning
it--along with the fudged numbers used to force data to fit it--should make
protein function easier to analyze. (The fact that 'everyone knows' Arrhenius applies
indicates that, if I am right about entropic springs being used to balance
energy, I've probably discovered something new.)
Second, it may imply that much of the size and intricate reconfiguration of
protein machines exists simply to provide space for enough entropic springs to
allow evolutionary fine-tuning of the system. An engineered system made of stiff
materials could perform an equivalent function with equivalent efficiency by
using a much simpler method of force/energy compensation. For example, linking
an unbalanced system to an engineered cam that moves relative to a mechanical
spring will work just fine. The compression of the spring, and the height of the
cam, will correspond directly to the energy being stored, so the energy required
to balance the machine will directly specify the physical parameters of the cam.
The third consequence, if it turns out that protein machines depend on entropic
springs, is that their speed will be limited. To be properly springy, an
entropic spring has to equalize with its space; it has to have time to spread
out and explore its range of motion. If the machine is moved too quickly, its
springs will lose their springiness and will no longer compensate for the
forces; the machine will become rapidly less efficient. Stiff mechanical
springs, having fewer low-frequency degrees of freedom, can equilibrate much
faster. If I understand correctly, my physics expert says that a typical small
entropic spring can equilibrate in fractions of a microsecond. But stiff
mechanical nanoscale springs can equilibrate in fractions of a nanosecond.
I will continue researching this. If my idea turns out to be wrong, then I will
post a correction notice in our newsletter archive at the top of this article,
and a retraction in the next newsletter. But if my idea is right, then it
appears that natural protein machines must have substantially lower speeds than
engineered nanoscale machines can achieve with the same efficiency. "Soft" and
"hard" machines do indeed work differently, and the "hard" machines are simply
better.
=========
The second thing I am investigating is the design of a nanoscale molecular
manufacturing system that is simple enough to be developed today, but functional
enough to build rapidly improving versions and large-throughput arrays.
It may seem odd, given the ominous things CRN has said about the
dangers of advanced
molecular manufacturing, that I am working on something that could accelerate
it. But there's a method to my madness. Our overall goal is not to retard
molecular manufacturing; rather, it is to maximize the amount of thought and
preparation that is done before it is developed. Currently, many people
think molecular manufacturing is impossible, or at least extremely difficult,
and will not even start being developed for many years. But we believe that this
is not true--we’re concerned that a small group of smart people could figure out
ways to develop basic
capabilities fairly quickly.
The primary insights of molecular manufacturing--that stiff molecules make good
building blocks, that nanoscale machines can have extremely high performance,
and that general-purpose manufacturing enables rapid development of better
manufacturing systems--have been published for decades. Once even a few people
understand what can be done with even basic capabilities, we think they will
start working to develop them. If most people do not understand the
implications, they will be unprepared. By developing and publishing ways to
develop molecular manufacturing more easily, I may hasten its development, but I
also expect to improve general awareness that such development is possible and
may happen surprisingly
soon. This is a necessary precondition for
preparedness. That's why
I spend a lot of my time trying to identify ways to develop molecular
manufacturing more easily.
An early goal of molecular manufacturing is to build a nanoscale machine that
can be used to build more copies and better versions. This would answer nagging
worries about the ability of molecular manufacturing systems to make large
amounts of product, and would also enable rapid development of molecular
manufacturing technologies leading to advanced nanofactories.
I've been looking for ways to simplify the Burch/Drexler
planar assembly nanofactory. This method of "working backward" can be useful
for planning a development pathway. If you set a plausible goal pretty far out,
and then break it down into simpler steps until you get to something you can do
today, then the sequence of plans forms a roadmap for how to get from today's
capabilities to the end goal.
The first simplification I thought of was to have the factory place blocks that
were built externally, rather than requiring it to manufacture the blocks
internally. If the blocks can be prefabricated, then all the factory has to do
is grab them and place them into the product in specified locations.
I went looking for ways to join prefabricated molecular blocks and found a
possible solution. A couple of amino acids, cysteine and histidine, like to bind
to zinc. If two of them are hooked to each block, with a zinc ion in the middle,
they'll form a bond quite a bit stronger than a hydrogen bond. That seems
useful, as long as you can keep the blocks from joining prematurely into a
random lump. But you can do that simply by keeping zinc away.
So, mix up a feedstock with lots of molecular zinc-binding building blocks, but
no zinc. Build a smart membrane with precisely spaced actuators in it that can
transport blocks through the membrane. On one side of the membrane, put the
feedstock solution. On the other side of the membrane, put a solution of zinc,
and the product. As the blocks come through the membrane one at a time, they
join up with the zinc and become "sticky"--but the mechanism can be used to
retain them and force them into the right place in the product. It shouldn't
require a very complex mechanism to "grab" blocks from feedstock (via Brownian
assembly) through a hole in a membrane, move them a few nanometers to face the
product, and stick them in place. In fact, it should be possible to do this with
just one molecular actuator per position. A larger actuator can be used to move
the whole network around.
Then I thought back to some stuff I knew about how to keep blocks from clumping
together in solution. If you put a charge on the blocks, they will attract a
"screen" of counterions, and will not easily bump each other. So, it might be
possible to keep blocks apart even if they would stick if they ever bumped into
each other. In fact, it might be very simple. A zinc-binding attachment has four
amino acids per zinc, two on each side. Zinc has a +2 charge. If the rest of the
block has a -1 charge for every pair of amino acids, then when the block is
bound with zinc into a product, all the charges will match up. But if it's
floating in solution with zinc, then the zinc will still be attracted to the two
amino acids; in this case, the block should have a positive charge, since each
block will have twice as much zinc-charge associated with it in solution as when
it's fastened into the product. This might be enough to keep blocks from getting
close enough to bind together. But if blocks were physically pushed together,
then the extra zinc would be squeezed out, and the blocks would bind into a very
stable structure.
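The bookkeeping from the paragraph above can be written out explicitly (the
per-pair numbers come straight from the text):

    # Charge bookkeeping for zinc-binding blocks, per pair of amino acids.
    # From the text: zinc is +2, each block carries -1 per amino-acid pair,
    # and a finished joint shares one zinc between two pairs (one per block).
    ZINC = +2
    PER_PAIR = -1

    # Block floating in zinc solution: each pair grabs its own zinc ion.
    floating = PER_PAIR + ZINC
    print(f"floating block, per pair: {floating:+d}")   # +1 -> blocks repel

    # Two blocks joined in the product: two pairs share a single zinc.
    joined = 2 * PER_PAIR + ZINC
    print(f"joined blocks, per joint: {joined:+d}")     # 0 -> charges match up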
That's the theory, at this point. It implies that you don't need a membrane,
just something like a tattoo needle that attaches blocks from solution and
physically pushes them into the product. I do not know yet whether this will
work. I will be proposing to investigate this as part of a Phase 2 NIAC project.
If the theory doesn't work, there are several other ways to fasten blocks, some
triggered by light, some by pressure, and some simply by being held in place for
a long enough period of time.
It appears, then, that the simplest way to build a molecular manufacturing
system may be to develop a set of molecular blocks that will float separately in
solution but fasten together when pushed. At first, use a single kind of block,
containing a fluorescent particle. Use a scanning probe microscope to push the
blocks together. (You can scan the structure with the scanning probe microscope,
or see the cluster of fluorescence with an ordinary light microscope.) Once you
can build structures this way, build a structure that will perform the same
function of grabbing blocks and holding them to be pushed into a product. Attach
that structure to a nano-manipulator and use it to build more structures. You'd
have a hard time finding the second-level structures with a scanning probe
microscope, but again the cluster of fluorescence should show up just fine in a
light microscope.
Once you know you can build a passive structure that builds structures when
poked at a surface, the next step is to build an active structure--including an
externally controlled nanoscale actuator--that builds structures. Use your
scanning probe microscope with multiple block types to build an actuator that
pushes its block forward. Build several of those in an array. Let them be
controlled independently. You still need a large manipulator to move the array
over the surface, but you can already start to increase your manufacturing
throughput. By designing new block types, and new patterns of attaching the
blocks together, better construction machines could be built. Sensors could be
added to detect whether a block has been placed correctly. Nanoscale digital
logic could be added to reduce the number of wires required to control the
system. And if anyone can get this far, there should be no shortage of ideas and
interest directed at getting farther.
=========
That's an inside look at how my thinking process works, how I develop ideas and
check them with other experts, and how what I'm working on fits in with CRN's
vision and mission.
Please
contact me if you have any feedback.
Chris
Information Delivery for
Nanoscale Construction
Chris Phoenix, Director of Research, CRN
A widely acknowledged goal of
nanotechnology is to build intricate, useful nanoscale structures. What
usually goes unstated is how the structures will be specified. Simple structures
can be created easily: a crystal is an atomically precise structure that can be
created from simple molecules and conditions. But complex nano-products will
require some way to deliver large quantities of information to the nanoscale.
A key indicator of a technology's usefulness is how fast it can deliver
information. A kilobyte is not very much information—less than a page of text or
a thumbnail image. A dialup modem connection can transfer several kilobytes per
second. Today's nanoscale manufacturing techniques can transfer at most a few
kilobytes per second. This will not be enough to make advanced products—only
simple materials or specialized components.
The amount of information needed to specify a product is not directly related to
the size of the product. A product containing repetitive structures only needs
enough information to specify one of the structures and control the placement of
the rest. The amount of information that needs to be delivered also depends on
whether the receiving machine must receive an individual instruction for every
operation, or whether it can carry out a sequence of operations based on stored
instructions. Thus, a primitive fabrication system may require a gigabyte of
information to place a million atoms, while a gigabyte may be sufficient to
specify a fairly simple kilogram-scale product built with an advanced
nanofactory.
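The contrast is stark when reduced to bytes per atom; the kilogram product's
atom count below assumes a carbon-like material at about 2x10^-26 kg per atom:

    # Bytes of control information per atom, for the two cases in the text.
    # The kilogram product's atom count assumes carbon (~2e-26 kg per atom).
    primitive = 1e9 / 1e6                    # 1 GB to place a million atoms
    atoms_per_kg = 1.0 / 2e-26               # ~5e25 atoms in a kilogram
    advanced = 1e9 / atoms_per_kg            # 1 GB specifies the whole product
    print(f"primitive system: {primitive:.0f} bytes per atom")    # ~1000
    print(f"advanced factory: {advanced:.1e} bytes per atom")     # ~2e-17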
There are several ways to deliver information to the nanoscale so as to
construct things. Information can either be encoded materially, in a stable
pattern of atoms or electrons, or it can be in an ephemeral form such as an
electric field, a pattern of light, a beam of charged particles, the position of
a scanning probe, or an environmental condition like temperature. The goal of
manufacturing is to embody the information, however it is delivered, into a
material product. As we will see, different forms of delivery have different
advantages and limitations.
Today's Techniques
To create a material pattern, it is tempting to start with materially encoded
information. This is what self-assembly does. A molecule can be made so that it
folds on itself or joins with others in quite intricate patterns. An example of
this that is well understood, and has already been used to make nanoscale
machines, is DNA. (See our previous science essay, "Nucleic
Acid Engineering.") Biology uses DNA mainly to store information, but in the
lab it has been used to make polyhedra, grid structures, and even a programmable
machine that can synthesize DNA strands.
One problem with self-assembly is that all the information in the final
structure must be encoded in the components. In order to make a complicated
structure, a lot of information must be programmed into the component molecules.
There are only a few ways to get information into molecules. One is to make the
molecules a piece at a time. In a long linear chain like DNA, this can be done
by repeating a few operations many times—specifically, by changing the chemical
environment in a way that adds one selected block to the chain in each
operation. (This can be viewed either as chemistry or as manufacturing.)
Automated machines exist that will do this by cycling chemicals through a
reactor, but they are relatively slow, and the process is expensive. The
information rate can be greatly increased by controlling the process with light;
by shining light in programmed sequence on different regions of a surface, DNA
can be grown in many different patterns in parallel. This can create a large
“library” of different DNA molecules with programmed sequences.
Another problem with self-assembly is that when the building blocks are mixed
together, it is hard to impose long-range order and to build heterogeneous
engineered structures. This limitation may be partially alleviated by providing
a large-scale template, either a material structure or an ephemeral spatial
pattern. Adding building blocks in a programmed sequence, rather than mixing
them all together at once, may also help. A combination of massively parallel
programmable molecule synthesis and templated or sequenced self-assembly may be
able to deliver kilobytes per second of information to the nanoscale.
A theoretical possibility should be mentioned here. Information can be created
by starting with a lot of random codes, throwing away all the ones that don't
work, and duplicating the ones that do. One problem with this is that for all
but the simplest criteria, it will be too difficult and time-consuming to
implement tests for the desired functionality. Another problem is that evolved
solutions will require extra work to characterize, and unless characterized,
they will be hard to integrate into engineered systems. Although evolution can
produce systems of great subtlety and complexity, it is probably not suitable
for producing easily characterized general-purpose functional modules. Specific
molecular bio-designs such as molecular motors may be worth characterizing and
using, but this will not help with the problem of controlling the construction
of large, heterogeneous, information-rich products.
Optical lithography of semiconductors now has the capability to generate
nanoscale structures. This technique creates a pattern of light using a mask.
The light causes chemical changes in a thin surface layer; these changes can
then be used to pattern a substrate by controlling the deposition or removal of
material. One drawback of this approach is that it is not atomically precise,
since the pattern of light is far too coarse to resolve individual atoms.
Another drawback is that the masks are pre-built in a slow and very expensive
process. A computer chip may embody billions of bytes of information, but the
masks may take weeks to make and use; again, this limits the data rate to
kilobytes per second. There has been recent talk of using MEMS (micro electro
mechanical systems) technology to build programmable masks; if this works out,
it could greatly increase the data rate.
Several tools can modify single points in serial fashion with atomic or
near-atomic resolution. These include scanning probe microscopes and beams of
charged particles. A scanning probe microscope uses a large but sensitive
positioning and feedback system to bring a nanoscale point into controlled
physical contact with the surface. Several thousand pixels can be imaged per
second, so in theory an automated system could deliver kilobytes per second of
changes to the surface. An electron beam or ion beam can be steered
electronically, so it can be relatively fast. But the beam is not as precise as
a scanning probe can be, and must work in vacuum. The beam can be used to
remove material, to chemically transform it, or to deposit any of several
materials from low-pressure gas. It takes a fraction of a millisecond to make a
shallow feature at a chosen point. Again, the information delivery rate is
kilobytes per second.
Nanoscale Tools
To deliver information at a higher rate and use the information for more precise
construction, new technology will be required. In most of the techniques
surveyed above, the nanoscale matter is inert and is acted on by outside forces
(ephemeral information) created by large machines. In self-assembly, the
construction material itself encodes static patterns of information—which
probably were created by large machines doing chemistry. By contrast, nanoscale
tools, converting ephemeral information to concrete operations, could
substantially improve the delivery rate of information for nanoscale
construction. Large tools acting on inert nanoscale objects could never come
close to the data rates that are theoretically possible with nanoscale tools.
One reason why nanoscale tools are better is that they can move faster. To a
first approximation, the operating frequency of a tool increases in direct
proportion as its linear size shrinks. A 100-nm tool should be about a million
times faster than a 10-cm tool.
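Since the scaling is linear in size, the stated speedup follows directly:

    # Operating frequency scales roughly as 1/size: a tool a million times
    # smaller should cycle about a million times faster.
    large_tool = 0.1        # 10 cm, in meters
    small_tool = 100e-9     # 100 nm
    speedup = large_tool / small_tool
    print(f"speedup: {speedup:.0e}x")   # 1e+06 -- a million times faster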
The next question is how the information will be delivered. There are several
candidates for really fast information delivery. Light can be switched on and
off very rapidly, but is difficult to focus tightly. Another problem is that
absorption of light is probabilistic, so a lot of light would have to be used
for reliable information delivery. Perhaps surprisingly, mechanical signals may
be useful; megahertz vibrations and pressure waves can be sent over useful
distances. Electrical signals can be sent along nanoscale wires so that multiple
independent signals could be delivered to each tool. In principle, the
mechanical and electrical portions of the system could be synchronized for high
efficiency.
Nanoscale computing elements can help with information handling in two ways.
First, they can split up a broadcast signal, allowing several machines receiving
the same signal to operate independently. This can reduce the complexity of the
macro-to-nano interface. Second, nanoscale computation can be used to implement
some kinds of error handling at a local level.
A final advantage of nanoscale tools, at least the subset of tools built from
molecules, is that they can be very precise. Precision is a serious problem in
micron-sized tools. A structure built by lithography looks like it has been
whittled with a pocket knife—the edges are quite ragged. This has made it very
difficult to build complex, useful mechanical devices at the micron scale using
lithography. Fortunately, things get precise again at the very bottom, because
atoms are discrete and identical. Small and simple molecular tools have been
built, and work is ongoing to build larger and more integrated systems. The
structural precision of molecular tools promises several advantages, including
predictable properties and low-friction interfaces.
Several approaches could be used, perhaps in combination, to build a nanoscale
fabrication system. If a simple and repetitive system can be useful, then
self-assembly might be used to build it. A repetitive system, once fabricated,
might be made less repetitive (programmed heterogeneously) by spatial patterns
such as an array of light. If it contains certain kinds of electronics, then
signals could be sent in to uniquely reconfigure the circuitry in each repeating
sub-pattern.
Of course, the point of the fabrication system is to build stuff, and a
particularly interesting kind of system is one that can build larger or better
fabrication systems. With information supplied from outside, a manufacturing
system of this sort could build a larger and more complex version of itself.
This approach is one of the goals of molecular manufacturing. It would allow the
first tiny system to be built by a very expensive or non-scalable method, and
then that tiny system can build larger ones, rapidly scaling upward and
drastically reducing cost. Or if the initial system was built by self-assembly,
then subsequent systems could be more complex than self-assembly could easily
achieve.
The design of even a tabletop general-purpose manufacturing system could be
relatively simple, heterogeneous but hierarchical and repetitive. Once the basic
capabilities of nanoscale actuation, computation, and fabrication are achieved
in a way that can be engineered and recombined, it may not take too long to
start developing nanoscale tools that can do this in parallel, using
computer-supplied blueprints to build larger manufacturing systems and a broad
range of products.
What Is Molecular Manufacturing?
Chris Phoenix, Director of Research, CRN
The term "molecular manufacturing" has been associated with all sorts of
futuristic stuff, from bloodstream robots to
grey goo to tabletop
factories that can make a new factory in a few hours. This can make it hard for
people who want to understand the field to know exactly
what's being claimed and
studied. This essay explains what the term originally meant, why the
approach is thought to be powerful enough to create a field around, why so many
futuristic ideas are associated with it, and why some of those ideas are more
plausible than they may seem.
Original Definition
Eric Drexler defined the term "molecular manufacturing" in his 1992 technical
work Nanosystems.
His definition used some other terms that need to be considered first.
Mechanochemistry: In this volume, the chemistry
of processes in which mechanical systems operating with atomic-scale precision
either guide, drive, or are driven by chemical transformations.
In other words, mechanochemistry is the direct, mechanical
control of molecular structure formation and manipulation to form atomically
precise products. (It can also mean the use of reactions to directly drive
mechanical systems—a process that can be nearly 100% efficient, since the energy
is never thermalized.) Mechanochemistry
has already been demonstrated:
Oyabu has used atomic force microscopes, acting purely mechanically, to
remove single silicon atoms from a covalent lattice and put them back in the
same spot.
Mechanosynthesis: Chemical synthesis controlled
by mechanical systems operating with atomic-scale precision, enabling direct
positional selection of reaction sites; synthetic applications of
mechanochemistry. Suitable mechanical systems include AFM mechanisms,
molecular manipulators, and molecular mill systems.
In other words, mechanosynthesis is the use of mechanically
guided molecular reactions to build stuff. This does not require that every
reaction be directly controlled. Molecular building blocks might be produced by
ordinary chemistry; products might be strengthened after manufacture by
crosslinking; molecular manufactured components might be joined into products by
self-assembly; and building blocks similar to those used in self-assembly might
be guided into chosen locations and away from alternate possibilities. Drexler’s
definition continues:
Processes that fall outside the intended scope of this
definition include reactions guided by the incorporation of reactive moieties
into a shared covalent framework (i.e., conventional intramolecular
reactions), or by the binding of reagents to enzymes or enzyme-like catalysts.
The point of this is to exclude chemistry that happens by pure
self-assembly and cannot be controlled from outside. As we will see, external
control of the reactions is the key to successful molecular manufacturing. It is
also the main thing that distinguishes molecular manufacturing from other kinds
of nanotechnology.
The principle of mechanosynthesis—direct positional
control—can be useful with or without covalent bonding. Building blocks like
those used in self-assembly, held together by hydrogen bonding or other
non-covalent interactions, could also be joined under mechanical control. This
would give direct control of the patterns formed by assembly, rather than
requiring that the building blocks themselves encode the final structure and
implement the assembly process.
Molecular manufacturing: The production of
complex structures via nonbiological mechanosynthesis (and subsequent assembly
operations).
There is some wiggle room here, because "complex structures"
is not defined. Joining two molecules to make one probably doesn't count. But
joining selected monomers to make a polymer chain that folds into a
predetermined shape probably does.
Machine-phase chemistry: The chemistry of
systems in which all potentially reactive moieties follow controlled
trajectories (e.g., guided by molecular machines working in vacuum).
This definition reinforces the point that machine-phase
chemistry is a narrow subset of mechanochemistry. Mechanochemistry does not
require that all molecules be controlled; it only requires that reactions
between the molecules must be controlled. Mechanochemistry is quite compatible
with "wet" chemistry, as long as the reactants are chosen so that they will only
react in the desired locations. A ribosome appears to fit the requirement;
Drexler specified that molecular manufacturing be done by nonbiological
mechanosynthesis, because otherwise biology would be covered by the definition.
Although it has not been well explored, machine-phase chemistry has some
theoretical advantages that make it worth further study. But molecular
manufacturing does not depend on a workable machine-phase chemistry being
developed. Controversies about whether diamond can be built in vacuum do not
need to be settled in order to assess the usefulness of molecular manufacturing.
Extending Molecular Manufacturing
As explained in the first section, the core of molecular manufacturing is the
mechanical control of reactions so as to build complex structures. This simple
idea opens up a lot of possibilities at the nanoscale. Perhaps the three most
important capabilities are engineering, blueprint delivery, and the creation of
manufacturing tools. These capabilities reinforce each other, each facilitating
the others.
It is often thought that the nanoscale is intractably complex, impossible to
analyze. Nearly intractable complexity certainly can be found at the nanoscale,
for example in the prediction of protein folding. But not everything at the nanoscale is complex.
DNA folding, for example, is much simpler, and the engineering of folded
structures is now pretty straightforward. Crystals and self-assembled monolayers
also have simple aspects: they are more or less identical at a wide range of
positions. The mechanical properties of nanoscale structures change as they get
extremely small, but even single-nanometer covalent solids (diamond, alumina,
etc) can be said to have a well-defined shape.
The ability to carry out predictable synthesis reactions at chosen sites or in
chosen sequences should allow the construction of structures that are intricate
and functional, but not intractably complex. This kind of approach is a good fit
for engineering. If a structure is the wrong shape or stiffness, simply changing
the sequence of reactions used to build it will change its structure—and at
least some of its properties—in a predictable way.
It is not always easy to control things at the nanoscale. Most of our tools are
orders of magnitude larger, and more or less clumsy; it's like trying to handle
toothpicks with telephone poles. Despite this, a few techniques and approaches
have been developed that can handle individual molecules and atoms, and move
larger objects by fractions of nanometers. A separate approach is to handle huge
numbers of molecules at once, and set up the conditions just right so that they
all do the same thing, something predictable and useful. Chemistry is an example
of this; the formation of self-assembled monolayers is another example. The
trouble with all of these approaches is that they are limited in the amount of
information that can be delivered to the nanoscale. After a technique is used to
produce an intermediate product, a new technique must be applied to perform the
next step. Each of these steps is hard to develop. They also tend to be slow to
use, for two reasons: big tools move slowly, and switching between techniques
and tools can take a lot of time.
Molecular manufacturing has a big advantage over other nanoscale construction
techniques: it can usefully apply the same step over and over again. This is
because each step takes place at a selected location and with selected building
blocks. Moving to a different location, or selecting a different building block
from a predefined set, need not insert enough variation into the process to
count as a new step that must be developed and characterized separately.
A set of molecular manufacturing operations, once worked out, could be
recombined like letters of an alphabet to make a wide variety of predictable
products. (This benefit is enhanced because mechanically guided chemistry can
play useful games with reaction barriers to speed up reactions by many orders of
magnitude; this allows a wider range of reactants to be used, and can reduce the
probability of unwanted side reactions.) The use of computer-controlled tools
and computer-aided translation from structure to operation sequence should allow
blueprints to be delivered directly to the nanoscale.
Although it is not part of the original definition of molecular manufacturing,
the ability to build a class of product structures that includes
manufacturing the tools used to build them may be very useful. If the tools can be engineered
by the same skill set that produces useful products, then research and
development may be accelerated. If new versions of tools can be constructed and
put into service within the nanoscale workspace, that may be more efficient than
building new macro-scale tools each time a new design is to be tested. Finally,
if a set of tools can be used to build a second equivalent set of tools, then
scaleup becomes possible.
The idea of a tool that can build an improved copy of itself may seem
counterintuitive: how can something build something else that's more complex
than itself? But the inputs to the process include not just the structure of the
first tool, but the information used to control it. Because of the sequential,
repetitive nature of molecular manufacturing, the amount of information that can
be fed to the process is essentially unlimited. A tool of finite complexity,
controlled from the outside, can build things far more physically complex than
itself; the complexity is limited by the quality of the design. If engineering
can be applied, then the design can be quite complex indeed; computer chips are
being designed with a billion transistors.
From the mechanical engineering side, the idea of tools building tools may be
suspect because it seems like precision will be lost at each step. However, the
use of covalent chemistry restores precision. Covalent reactions are inherently
digital: in general, either a bond is formed which holds the atoms together, or
the bond is missing and the atoms repel each other. This means that as long as
the molecules can be manipulated with enough precision to form bonds in the
desired places, the product will be exactly as it was designed, with no loss of
precision whatsoever. The precision required to form bonds reliably is a
significant engineering challenge that will demand careful design of tools,
but it is far from being a showstopper.
Scaleup
The main limitation of molecular manufacturing is that molecules are so small.
Controlling one reaction at a time with a single tool will produce astonishingly
small masses of product. At first sight, it may appear that there is no way to
build anything useful with this approach. However, there is a way around this
problem, and it’s the same way used by ribosomes to build an elephant: use a lot
of them in parallel. Of course, this requires that the tools must be very small,
and it must be possible to build a lot of them and then control them all.
Engineering, direct blueprint injection, and the use of molecular manufacturing
tools to build more tools can be combined to achieve this.
The key question is: How rapidly can a molecular manufacturing tool create its
own mass of product? This value, which I'll call "relative productivity,"
depends on the mass of the tool; roughly speaking, its mass will be about the
cube of its size. For each factor of ten shrinkage, the mass of the tool will
decrease by 1,000. In addition, small things move faster than large things, and
the relationship is roughly linear. This means that each factor of ten shrinkage
of the tool will increase its relative productivity by 10,000 times; relative
productivity increases as the inverse fourth power of the size.
A typical scanning probe microscope might weigh two kilograms, have a size of
about 10 cm, and carry out ten automated operations per second. If each
operation deposits one carbon atom, which masses about 2x10^-26 kg, then it
would take 10^25 seconds--over three hundred million billion years--for that
scanning probe microscope to fabricate its own mass. But if the tool could be
shrunk by a factor of a million, to 100 nm, then its relative productivity would
increase by 10^24, and it would take only about ten seconds to fabricate its
own mass. This assumes an operation speed of 10 million per second, which is
about ten times faster than the fastest known enzymes (carbonic anhydrase and
superoxide dismutase). But a relative productivity of 1,000 or even 10,000
seconds would be sufficient for a very worthwhile manufacturing technology. (An
inkjet printer takes about 10,000 seconds to print its weight in ink.) Also,
there is no requirement that a fabrication operation deposit only one atom at a
time; a variety of molecular fragments may be suitable.
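A quick recomputation of that example, using the figures from the paragraph
above:

    # Relative productivity of a fabrication tool: time to build its own mass.
    # Figures from the text: 2 kg SPM, 10 cm, 10 ops/s, 2e-26 kg per carbon atom.
    mass_kg, size_m, ops_per_s, atom_kg = 2.0, 0.1, 10.0, 2e-26

    t_large = mass_kg / (ops_per_s * atom_kg)
    print(f"10 cm tool: {t_large:.0e} s ({t_large / 3.15e7:.1e} years)")
    # -> 1e25 s, about 3e17 years

    shrink = size_m / 100e-9                 # factor of a million
    # mass drops as shrink**3, speed rises as shrink: gain of shrink**4
    t_small = t_large / shrink**4
    print(f"100 nm tool: {t_small:.0f} s")   # -> 10 s, at 1e7 ops/s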
To produce a gram of product will take on the order of a gram of nanoscale
tools. This means that huge numbers of the tools must be controlled in parallel:
information and power must be fed to each one. There are several possible ways
to do this, including light and pressure. If the tools can be fastened to a
framework, it may be easier to control them, especially if they can build the
framework and include nanoscale structures in it. This is the basic concept of a
nanofactory.
Nanofactories and Their Products
A nanofactory is (will be) an integrated manufacturing system containing large
numbers of nanoscale molecular manufacturing workstations (tool systems). This
appears to be the most efficient and engineerable way to make nanoscale
productive systems produce large products. With the workstations fastened down
in known positions, their nanoscale products can more easily be joined. Also,
power and control signals can be delivered through hardwired connections.
The only practical way to build a large nanofactory is with another, smaller
nanofactory. However, the product of a nanofactory may be larger than the
nanofactory itself; it does not appear
conceptually or practically difficult to build a small nanofactory with a single
molecular manufacturing tool, and build from there to a kilogram-scale
nanofactory. The architecture of a nanofactory must take several problems into
account, in addition to the design of the individual fabrication workstations.
The mass and organization of the mounting structure must be included in the
construction plans. A small fraction (but large number) of the nanoscale
equipment in the nanofactory will be damaged by background radiation, and the
control algorithms will have to compensate for this in making functional
products. To make heterogeneous products, the workstations and/or the
nanoproduct assembly apparatus must be individually controlled; this probably
requires control logic to be integrated into the nanofactory.
It may seem premature to be thinking about nanofactory design before the first
nanoscale molecular manufacturing system has been built. But it is important to
know what will be possible, and how difficult it will be, in order to estimate
the ultimate payoff of a technology and the time and effort required to achieve
it. If nanofactories were impossible, then molecular manufacturing would be
significantly less useful; it would be very difficult to make large products.
But preliminary studies seem to show that nanofactories are actually not very
difficult to design, at least in broad outline. I have written an
80-page paper that covers error handling, mass and layout, transport of
feedstock, control of fabricators, and assembly and design of products for a
very primitive nanofactory design. My best estimate is that this design could
produce a duplicate nanofactory in less than a day. Nanofactory designs have
been proposed that appear to be much more flexible in how the products are
formed, but they have not yet been worked out in as much detail.
If there is a straightforward path from molecular manufacturing to
nanofactories, then useful
products will not be far behind. The ability to specify every cubic
nanometer of an integrated kilogram product, filling the product with engineered
machinery, will at least allow the construction of extremely powerful computers.
If the construction material is strong, then mechanical performance may also be
extremely good; scaling laws predict that power density increases as the inverse
of machine size, and nanostructured materials may be able to take advantage of
almost the full theoretical strength of covalent bonds rather than being limited
by propagating defects.
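That scaling claim can be illustrated with a short sketch. If material stress and operating speed are held fixed while a machine shrinks, force scales with cross-sectional area (L squared), power is force times speed, and volume scales as L cubed, so power per unit volume grows as 1/L. The stress and speed below are placeholder values, not predictions:

    # Power density vs. machine size under fixed stress and speed.
    def power_density(L, stress=1e8, speed=1.0):
        """Rough power density (W/m^3) for a motor of characteristic size L (m)."""
        force = stress * L**2   # newtons: force scales with cross-section
        power = force * speed   # watts
        return power / L**3     # watts per cubic meter: net 1/L scaling

    for L in (1.0, 1e-3, 1e-6, 1e-9):
        print(f"L = {L:g} m: {power_density(L):.0e} W/m^3")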
Many products have been imagined for this technology. A few have been designed
in sufficient detail that they might work as claimed. Robert Freitas's
Nanomedicine Vol. I contains analyses of many kinds of nanoscale machinery.
However, this only scratches the surface. In the absence of more detailed
analysis identifying quantitative limits, there has been a tendency for
futurists to assume that nano-built products will achieve performance close to
the limits of physical law. Motors three to six orders of magnitude more
powerful than today's; computers six to nine orders of magnitude more compact
and efficient; materials at least two orders of magnitude stronger—all built by
manufacturing systems many orders of magnitude cheaper—it's not hard to see why
futurists would fall in love with this field, and skeptics would dismiss it. The
solution is threefold: 1) open-minded but quantitative investigation of the
theories and proposals that have already been made; 2) constructive attempts
to fill in missing details; and 3) critical efforts to uncover
as-yet-unrecognized problems with applying the theories.
Based on a decade and a half of study, I am satisfied that some kind of
nanofactory can be made to work efficiently enough to be more than competitive
with today's manufacturing systems, at least for some products. In addition, I
am satisfied that molecular manufacturing can be used to build simple,
high-performance nanoscale devices that can be combined into useful, gram-scale,
high-performance products via straightforward engineering design. This is enough
to make molecular manufacturing seem very interesting, well worth
further study; and in
the absence of evidence to the contrary, worth a measure of preliminary concern
over how some of its possible products might be used.
Advantages of Engineered Nanosystems
Chris Phoenix, Director of Research, CRN
Today, biology implements by far the most advanced nanomachines on the planet.
It is tempting to think that biology must be efficient, and that we can't hope
to design nanomachines with higher performance. But we already know some
techniques that biology has never been able to try. This essay discusses several
of them and explains why biology could not use them, but manufactured
nanomachines will be able to.
Low Friction Via Superlubricity
Imagine you're pulling a toy wagon with square wheels. Each time a wheel turns
past a corner, the wagon lurches forward with a thump. This would waste
substantial amounts of energy. It's as though you're continually pulling the
wagon up tiny hills, which it then falls off of. There's no way to avoid the
waste of energy.
At the molecular scale, static friction is like that. Forces between the
molecules cause them to stretch out of position, then snap into a new
configuration. The snap, or clunk, requires energy—which is immediately
dissipated as heat.
In order for a sliding interface to have low friction, there must be an
extremely small difference in energy between all adjacent positions or
configurations. But between most surfaces, that is not the case. The molecular
fragments at the surface are springy and adhesive enough that they grab hold,
get pulled, and then snap back, wasting energy.
There are several ways in which a molecule can be pulled or pushed out of
position. If the interface is rough or dirty, the surfaces can be torn apart as
they move. This of course takes a lot of energy, producing very high friction.
Even apparently smooth surfaces can be sources of friction. If the surface is
coated with molecular bumps, the bumps may push each other sideways as they go
past, and then spring back, wasting energy. Even if the bumps are too short and
stiff to be pushed sideways very far, they can still interlock, like stacking
egg cartons or ice cube trays. (Thanks to
Wikipedia for this analogy.) If the bumps interlock strongly, then it may
take a lot of force to move them past each other—and just as they pass the
halfway point, they will snap into the next interlocking position, again wasting
energy.
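A back-of-envelope sketch shows why the snapping matters; both numbers below are assumptions chosen for illustration. If each bump dissipates its stored elastic energy once per lattice spacing, the average friction force per bump is roughly that energy divided by the spacing:

    # Average friction force from one snapping bump (assumed values).
    EV = 1.6e-19            # joules per electron-volt
    snap_energy = 0.1 * EV  # assumed elastic energy dissipated per snap
    spacing = 0.25e-9       # assumed lattice spacing, meters

    force_per_bump = snap_energy / spacing
    print(f"{force_per_bump:.1e} N per bump")   # about 6e-11 N, large at this scale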
One way to reduce this kind of friction is to separate the surfaces. A film of
water or oil can make surfaces quite slippery. But another way to reduce
friction is to use stiff surfaces that don't line up with each other. Think back
to the egg-carton image. If you turn one of the cartons so that the bumps don't
line up, then they can't interlock; they will simply skim past each other. In
fact, friction too low to measure has been observed with graphite sheets that
were turned so as to be out of alignment. Another way to prevent alignment is to
make the bumps have different spacing, by choosing different materials with
different atoms on their surfaces.
This low-friction trick, called superlubricity, is difficult to achieve in
practice. Remember that the surfaces must be very smooth, so they can slip past
each other; and very stiff, so the bumps don't push each other sideways and
spring back; and the bumps must not line up, or they will interlock. Biological
molecules are not stiff enough to use the superlubricity trick. Superlubricity
may be counterintuitive to people who are accustomed to the high friction of
most hard dry surfaces. But experiments have shown
that superlubricity works. A variety of materials that have been proposed for
molecular manufacturing should be stiff enough to take advantage of
superlubricity.
Electric Currents
The kind of electricity that we channel in wires is made up of vast quantities
of electrons moving through the wire. Electrons can be made to move by a
magnetic field, as in a generator, or by a chemical reaction, as in a battery.
Either way, the moving electrons can be sent for long distances, and can do
useful work along the way. Electricity is extremely convenient and powerful, a
foundation of modern technology.
With only a few exceptions like electric eels, biological organisms do not use
this kind of electricity. You may know that our nerve cells use electricity. But
instead of moving electrons, biology uses ions—the "charged" atoms that result
when one or more electrons are removed or added. Ions can move from place to place, and
can do work just like electrons. Bacteria use ions to power their flagella
"tails." Ions moving suddenly through a nerve cell membrane cause a change that
allows more ions, further along the cell, to be able to move, creating a domino
effect that ripples from one end of the cell to the other.
Ions are convenient for cells to handle. An ion is much larger than an electron,
and is therefore easier to contain. But ions have to move slowly, bumping
through the water they are dissolved in. Over long distances, electrons in a
wire can deliver energy far more rapidly than ions in a liquid. But wires
require insulation.
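A rough sketch makes the speed gap concrete. Using a typical small-ion diffusion coefficient in water of about 1e-9 m^2/s (the distances are assumed examples), the characteristic time to diffuse a distance L grows as L squared, while an electrical signal covers any of these distances almost instantly:

    # Characteristic one-dimensional diffusion times, t ~ L**2 / (2 * D).
    D = 1e-9   # m^2/s, typical small ion in water

    for L in (1e-6, 1e-3, 1.0):   # a micron, a millimeter, a meter
        t = L**2 / (2 * D)
        print(f"{L:g} m: {t:.0e} s")   # ~5e-4 s, ~5e2 s, ~5e8 s (~16 years)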
It is perhaps not surprising that biology hasn't used electron currents. At
cellular scales, ions diffuse fast enough to do the job. And the same membranes
that keep chemicals properly in (or out of) the cell can also keep ions
contained where they can do useful work. But if we actually had "nerves of
steel", we could react far more quickly than we do.
To use electron currents, all that's needed is a good conductor and a good
insulator. Carbon nanotubes can be both conductors and insulators, depending on
how they are constructed. Many organic molecules are insulating, and some are
conductive. There is a lot of potential for molecular manufacturing to build
useful circuits, both for signaling and for power transmission.
Deterministic Machines
Cells have to reconfigure themselves constantly in response to changing
conditions. They are built out of individual molecules, loosely associated. And
the only connection between many of the molecular systems is other molecules
diffusing randomly through the cell's interior. This means that the processes of
the cell will happen unpredictably, from molecules bumping into each other after
a random length of time. Such processes are not deterministic: there's no way to
know exactly when a reaction or process will happen. This lack of tight
connection between events makes the cell's processes more adaptable to change,
but more difficult to engineer.
Engineered nanosystems can be designed, and then built and used, without needing
to be reconfigured. That makes it easier to specify mechanical or signal
linkages to connect them and make them work in step, while a constantly changing
configuration would be difficult to accommodate. Of course, no linkage is
absolutely precise, but it will be possible to ensure that, for example, an
intermediate stage in a manufacturing process always has its input ready at the
time it begins a cycle. This will make design quite a bit easier, since complex
feedback loops will not be required to keep everything running at the right
relative speed. This also makes it possible to use standard digital logic
circuits.
Digital Logic
Digital logic is general-purpose and easy to engineer, which makes it great for
controlling almost any process. But it requires symbolic codes and rapid,
reliable computation. There is no way that the diffuse statistical chemical
signaling of biology could implement a high-speed microprocessor (CPU). But
rapid, lock-stepped signals make it easy. Biology, of course, doesn't need
digital logic, because it has complex control loops. But complex things are very
difficult to engineer. Using digital logic instead of complexity will allow
products to be designed much more quickly.
Rapid Transport and Motion
Everything in a cell is flooded with water. This means that everything that
moves experiences high drag. If a nanomachine can be run dry, its parts can move
more efficiently and/or at higher speeds.
Things that move by diffusion are not exempt from drag: it takes as much energy
to make an object diffuse from point A to point B in a given time as it does to
drag it there. Although diffusion seems to happen "by itself", to work as a
transportation system it requires maintaining a higher concentration of
particles (e.g. molecules) at the source than at the destination. This requires
an input of work.
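The size of that required work can be estimated from the standard ideal-solution formula: maintaining a concentration ratio between source and destination costs at least kT times the natural log of the ratio for each molecule delivered. A sketch with an assumed 10:1 ratio:

    import math

    # Minimum work per molecule to sustain a diffusion-driving gradient.
    K_B = 1.38e-23   # Boltzmann constant, J/K
    T = 300          # kelvin, roughly room temperature

    work = K_B * T * math.log(10)   # assumed 10:1 concentration ratio
    print(f"{work:.1e} J per molecule")   # about 1e-20 J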
In a machine without solvent, diffusion can't work, so particles would have to
be transported mechanically. (In theory, certain small molecules could be
released into vacuum and bounce around to their destination, but practical
difficulties would probably make that approach unworkable.) Mechanical
transportation sounds inefficient, but in fact it can be more efficient than
diffusion. Because the particle is never released, energy is not required to
find and recapture it. Because nothing has to move through fluid, frictional
forces can be lower for the same speed, or speeds can be higher for the same
energy consumption. The use of machinery to move nanoparticles and molecules may
seem wasteful, but it replaces the need to maintain a pathway of solvent
molecules; it may actually require less mass and volume. The increased design
complexity of the transport machinery will be more or less balanced by the
reduced design complexity of the receiving stations for particles.
It is not only transport that can benefit from running without solvent. Any
motion will be subject to drag, which will be much higher in liquid than in gas
or vacuum. For slow motions, this is not so important. But to obtain high power
density and processing throughput, machines will have to move quickly. Drying
out the machines will allow greater efficiency than biology can attain. Biology
has never developed the ability to work without water. Engineered machines can
do so.