Ray Kurzweil – Conducted December 2005
I have had the idea that I wanted to be an inventor since I was five. I first got involved with computers when I was twelve, programming some early machines such as the IBM 1401 and the IBM 1620. I also built computers out of telephone relays. I began seriously modeling technology trends around 1980. I quickly realized that timing is the critical factor in the success of inventions. Most technology projects fail not because the technology doesn’t work, but because the timing is wrong – not all of the enabling factors are in place when they are needed. So I began to study these trends in order to anticipate what the world would be like in three, five, or ten years and make realistic assessments. That has continued to be the primary application of this study. I used these methodologies to guide the development plans of my projects – in particular, when to launch a particular project – so that the software would be ready when the underlying hardware was available, when the market needed it, and so on. These methodologies had the side benefit of allowing us to project developments 20 or 30 years into the future.

There is a strong common wisdom that you can’t predict the future, but that wisdom is incorrect. Some key measures of information technology – price-performance, capacity, bandwidth – follow very smooth exponential trends. I have been making predictions going back to the 1980s, when I wrote The Age of Intelligent Machines. That book had hundreds of predictions about the 1990s and the 21st century based on these models, and they have turned out to be quite accurate. If we know how much a million instructions per second (MIPS) of computing will cost at future points in time, or how much it will cost to sequence a base pair of DNA or to model a protein, or any other measure of information technology at different points in time, we can build scenarios of what will be feasible. The capability of these technologies grows exponentially, essentially doubling every year (depending on what you measure). There is even a second, slower level of exponential growth: the rate of exponential growth is itself growing. We will increase the price-performance of computing, which is already formidable and deeply influential, by a factor of a billion in 25 years, and we will also shrink the technology at a predictable pace – by a factor of over one hundred in 3D volume per decade. So these technologies will be very small and widely distributed, inexpensive, and extremely powerful. Look at what we can do already, and multiply that by a billion.

Question 2: When did you first become aware of the term “singularity”? Did you use that term in your first book, The Age of Intelligent Machines?

No. I first became familiar with it probably around the late 1990s. In my latest book, The Singularity is Near, I have really focused on the point in time when these technologies become quite explosive and profoundly transformative. In my earlier book, The Age of Spiritual Machines, I touched on that, and wrote about computers achieving human levels of intelligence and what that would mean. My main focus in this new book is on the merger of biological humanity with the technology that we are creating. Once nonbiological intelligence gets a foothold in our bodies and brains – which we have arguably already done in some people, and will do significantly in the 2020s – it will grow exponentially. We have about 10^26 calculations per second (cps), at most 10^29, in biological humanity, and that figure won’t change much in the next fifty years.
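[As a rough, back-of-the-envelope illustration of the arithmetic behind the figures quoted above – a billionfold improvement in price-performance over 25 years, and a hundredfold shrink in 3D volume per decade – the following sketch computes the implied doubling time and linear shrink rate. The Python framing is purely an editorial illustration and is not part of the interview.]

```python
import math

# Figures taken from the interview: a billionfold improvement in the
# price-performance of computing over 25 years, and a shrink of over
# one hundredfold in 3D volume per decade.
improvement_factor = 1e9
years = 25

# A billionfold improvement requires log2(1e9) ~= 30 doublings.
doublings = math.log2(improvement_factor)

# Implied doubling time: roughly 0.84 years, i.e. about ten months,
# consistent with "essentially doubling every year".
doubling_time_years = years / doublings

# A 100x reduction in 3D volume per decade corresponds to a linear
# scale reduction of 100**(1/3), a bit under 5x per decade.
linear_shrink_per_decade = 100 ** (1 / 3)

print(f"doublings needed: {doublings:.1f}")
print(f"implied doubling time: {doubling_time_years:.2f} years")
print(f"implied linear shrink per decade: {linear_shrink_per_decade:.1f}x")
```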
Our brains use a form of electrochemical signaling that travels at a few hundred feet per second, which is a million times slower than electronics. The interneuronal connections in our brains compute at about 200 calculations per second, which is also about a million times slower than electronics. We communicate our knowledge and skills using language, which is similarly a million times slower than computers can transmit information. So biological intelligence, while it could be better educated and better organized, is not going to change significantly. Nonbiological intelligence, however, is multiplying by a factor of over 1,000 in less than a decade. So once we achieve the software of intelligence, which we will achieve through reverse-engineering the human brain, nonbiological intelligence will soar past biological intelligence. But this isn’t an alien invasion; it is something that will literally be deeply integrated in our bodies and brains. By the 2040s, the nonbiological intelligence that we create in that year will be a billion times more powerful than the 10^26 cps that all of biological humanity represents.

The word “singularity” is a metaphor, and the metaphor we are using isn’t really infinity, because these exponentials are finite. The real meaning of “singularity” is closer to the concept of the “event horizon” in physics. A black hole, as physicists envision it, has an event horizon around it, and you can’t easily see past it. Similarly, it is difficult to see beyond this technological event horizon, because it is so profoundly transformative.

Question 3: Has there been one writer or researcher, such as Marvin Minsky or Vernor Vinge, who has had a predominant influence on your thinking?

Both of those individuals have been influential. Vernor Vinge had some really key insights into the singularity very early on. There were others, such as John von Neumann, who talked about a singular event occurring; he had the idea of technological acceleration and a singularity half a century ago. But that was simply a casual comment, and Vinge worked out some of the key ideas. Marvin Minsky was actually my mentor, and I corresponded with him and visited him when I was in high school. We remain close friends and colleagues, and many of his writings on artificial intelligence, such as The Society of Mind and some of his more technical work, have deeply influenced me.

Question 4: Many semiconductor analysts are predicting that the field of robotics will become the next major growth industry. When do you predict that the robotics industry will become a major, thriving industry?

In the GNR revolutions I write about, R nominally stands for robotics, but the real reference is to strong AI. By strong AI, I mean artificial intelligence at human levels, some of which will be manifested in robots, and some of which will be manifested in virtual bodies and virtual reality. We will go into virtual reality environments and have nanobots in our brains that will shut down the signals coming from our nerves and sense organs and replace them with the signals we would be receiving if we were in the virtual environment. We can be actors in this virtual environment and have a virtual body, but this virtual body doesn’t need to be the same as our real body. We will encounter other people in similar situations in this virtual reality. There will also be forms of AI that perform specific tasks, like the narrow AI programs that operate today throughout our economic infrastructure.
Our economic infrastructure would collapse if all of these current narrow AI programs stopped functioning, and that wasn’t true 25 years ago. These task-specific AI programs will become very intelligent in the coming decades, so strong AI won’t just be robots; that is only one manifestation. The R revolution really is the strong AI revolution. Billions of dollars in financial transactions are handled every day by intelligent algorithms, which also automatically detect credit card fraud, and so forth. Every time you send an email or make a telephone call, intelligent algorithms route the information. Algorithms automatically diagnose electrocardiograms and blood cell images, fly airplanes, guide “smart” weapons, and so forth. I give dozens of examples in the book. These applications will become increasingly intelligent in the decades ahead. Machines are already performing tasks that previously could only be done by humans, and the range of tasks this covers will increase in the coming years.

In order to achieve strong AI, we need to understand how the human brain works, and there are two fundamental requirements. One is the hardware requirement, which you mentioned. It is relatively uncontroversial today that we will achieve computer hardware equivalent to the human brain’s computing capacity – just look at the semiconductor industry’s own roadmap, into which the industry has put enormous effort. By 2020, a single chip will provide 10^16 instructions per second, sufficient to emulate a single human brain. We will go to the third dimension, effectively superseding the limits of Moore’s law, which deals only with two-dimensional integrated circuits. These were controversial notions when my last book, The Age of Spiritual Machines, was published in 1999, but they are relatively uncontroversial today.

The more controversial issue is whether we will have the software, because it is not sufficient simply to have powerful computers; we need to actually understand how human intelligence works. That doesn’t necessarily mean copying every single pattern of every dendrite and ion channel. It really means understanding the basic principles of how the human brain performs certain tasks, such as remembering, reasoning, and recognizing patterns. That is a grand project, which I refer to as reverse-engineering the human brain, and it is much further along than many people realize. We see exponential growth in every aspect of it. For instance, the spatial resolution of brain scanning is doubling every year in 3D volume. For the first time we can actually see individual interneuronal connections in living brains, and see them signaling in real time. This capability was not feasible a few years ago. The amount of data that we are obtaining on the brain is doubling every year, and we are showing that we can turn this data into working models; in the book I highlight a couple of dozen simulations of different regions of the brain. For example, there is now a simulation of the cerebellum, an important region of the brain devoted to skill formation, which comprises over half of the neurons in the brain. I make the case that we will have the principles of operation understood well within twenty years. By the end of the 2020s, we will have both the hardware and the software to create human levels of intelligence in a machine, including emotional intelligence, which is really the cutting edge of intelligence.
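[A minimal sketch of where a figure like 10^16 calculations per second for one brain can come from: the 200 calculations per second per interneuronal connection is quoted earlier in the interview, while the counts of roughly 10^11 neurons and roughly 10^3 connections per neuron, and the 2005 world population figure, are commonly cited approximations assumed here for illustration only.]

```python
# Rough estimate of the human brain's raw computing capacity.
# From the interview: interneuronal connections compute ~200 calculations/second.
# Assumed (not from the interview): ~1e11 neurons, ~1e3 connections per neuron,
# and a world population of ~6.5e9 people in 2005.
neurons = 1e11
connections_per_neuron = 1e3
calcs_per_second_per_connection = 200
population = 6.5e9

brain_cps = neurons * connections_per_neuron * calcs_per_second_per_connection
humanity_cps = brain_cps * population

print(f"single brain: ~{brain_cps:.0e} cps")                 # ~2e16 cps
print(f"all biological humanity: ~{humanity_cps:.0e} cps")   # ~1e26 cps
```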
Given that machines are already superior to humans in certain respects, the combination of human and machine intelligence will be quite formidable, and it will continue to grow exponentially. Nonbiological intelligence will be able to examine its own source code and improve it in an iterative design cycle. We are doing something like that now with biotechnology, by reading our genes. So in the GNR revolutions I write about, R really stands for intelligence, which is the most powerful force in the universe. It is therefore the most influential of the revolutions.

Question 5: Nanotechnology plays a key role in your forecasts. What advice would you give to someone wanting to invest today in nanotechnology corporations?

Nanotechnology developments are currently in their formative stages. There are early applications of nanotechnology, but these do not represent the full vision of nanotechnology – the vision that Eric Drexler articulated in 1986. No one was willing to supervise Drexler’s radical and interdisciplinary thesis except my own mentor, Marvin Minsky. We have shown the feasibility of manipulating matter at the molecular level, which is what biology does. One of the ways to create nanotechnology is to start with biological mechanisms and modify them to extend the biological paradigm – to go beyond proteins. That vision of molecular nanotechnology assembly – of using massively parallel, fully programmable processes to grow objects with remarkable properties – is about twenty years away. There will be a smooth progression, with early-adopter applications, many of which I discuss in the book. There are early applications in terms of nanoparticles, which have unique features due to their nanoscale components, but this is a slightly different concept. We are using the special properties of nanoscale objects, but we are not actually building objects molecule by molecule. So the real revolutionary aspect of nanotechnology is a couple of decades away, and it is too early to say which companies will be the leaders of that. Intel sees that the future of electronics is nanotechnology, and by some definitions today’s electronics are already nanotechnology. Undoubtedly, there will be small corporations that will come to dominate. When search engines were in their formative stage, it would have been difficult to foresee that two Stanford graduate students would dominate that field. Nanotechnology is already a multi-billion-dollar industry, and it will continue growing as we get closer to molecular manufacturing. When we actually have molecular manufacturing, it will be transformative – we will be able to inexpensively manufacture almost anything we need from feedstock materials and these information processes.

Question 6: You write in The Singularity is Near of feeling somewhat alone in your beliefs. How has the mainstream scientific community responded to your prognostications?

Actually quite well. The book has been very well received; it has gotten very positive reviews in mainstream publications such as the New York Times and the Wall Street Journal. It has done very well: it has been #1 on the science list at Amazon, and it ended up the fourth best-selling science book of 2005 despite coming out at the end of the year. The New York Times cited it as the 13th most blogged-about book of 2005. In terms of serious intellectual debate, I believe it has gotten a lot of respect and has been well received. There are individuals who don’t read the arguments and just read the conclusions.
For some of these individuals, the conclusions are so far from the conventional wisdom on these topics that they reject them out of hand. But among those who carefully read the arguments, the response is generally positive. This is not to say that everyone agrees with everything, but the book has gotten a lot of serious response and respect. I do believe that these ideas are becoming more widely distributed and accepted; I am obviously not the only person articulating these concepts. Nevertheless, the common wisdom is quite strong. Even among friends and associates, the conventional view of the life cycle – the notion that life won’t be much different in the future than it is today – still permeates people’s thinking. Thoughts and statements regarding life’s brevity and senescence are still quite influential. The deathist meme (that death gives meaning to life) is alive and well.

The biggest issue, which I lay out at the beginning of The Singularity is Near, is linear versus exponential thinking. It is remarkable how many thoughtful people, including leading scientists, think linearly. This is just wrong, and I make that case with dozens of examples. But the fact that someone is an expert in one aspect of technology or science doesn’t mean that they have studied technology forecasting. Relatively few futurists and prognosticators really have well-grounded methodologies. The common wisdom is to think linearly – to assume that the current pace of change will continue indefinitely. But this attitude is gradually changing, as more and more people understand the exponential perspective and how explosive an exponential can be. That is the true nature of these technology trends.

Question 7: What about other technologies and industries, such as the textile, aerospace, or automotive industries? Are all technology fields experiencing exponential growth?

The key issue is that information technology and information processes progress at an exponential pace. Biological evolution itself was an information process – the backbone is the genetic code, which is a digital code. I show in my book how that has accelerated very smoothly, in terms of the growth of complexity. The same is true of technological evolution when it has to do with information. If we can measure the information content, which we can readily do with things like computation and communication, then we can discern that it progresses in this exponential fashion and is subject to the law of accelerating returns. An information technology needs to reach a point where it is capable of transforming an industry, and biology is a good example. Biology was not an information technology until recently – it was basically hit or miss. Drug development was called drug discovery, which meant that we didn’t know why a drug worked and had no theory of its operation. These drugs and tools were relatively crude and had many negative side effects. 99.9% of the drugs on the market were designed in this haphazard, pre-information-era fashion. The new paradigm in biology is to understand these processes as information processes, and to develop the tools to reprogram them and actually change our genes. We still have genetic programs that are obsolete. The fat insulin receptor gene tells the body to hold on to every calorie, since it is programmed to anticipate that the next hunting season may be a failure. That was a good program 10,000 years ago, but it is not a good program today.
We have shown in experimental studies with mice that we can change those programs. There are many genes we would like to turn off, and there is also new genetic information we would like to insert. New gene therapy techniques are now beginning to work. We can turn enzymes – the workhorses of biology – on and off, and there are many examples of that. Most current drug development proceeds through this kind of rational drug design. So biology is becoming an information technology, and we can see the clear exponential growth. The amount of genetic data we sequence is doubling every year, the speed with which we can sequence DNA is doubling every year, and the cost has been coming down by half every year. It took 15 years to sequence the HIV virus, but we sequenced the SARS virus in 31 days. AIDS drugs cost $30,000 per patient per year fifteen years ago and didn’t work very well. Now they are down to $100 per patient per year in poor countries and work much better.

Fields such as energy are still not information technologies, but that is going to change as well. For instance, in The Singularity is Near I describe how we could meet 100% of our energy needs through renewable energy within twenty years, using nanoengineered solar panels and fuel cells, by capturing only 3% of 1% of the sunlight that hits the Earth. That will happen within twenty years, and it will be related to information technology, since we will be able to meet our energy needs in a highly distributed, renewable, clean fashion with nanoengineered devices. We will ultimately transform transportation in a similar way, with nanoengineered devices that can provide personal flying vehicles at very low cost. The transportation and energy industries are currently pre-information fields. Ultimately, however, information technologies will comprise almost everything of value, because we will be able to build anything at extremely low cost using nanoengineered materials and processes. We will have new methods of doing things like flying and creating energy.

Question 8: You have emphasized the superior mechanical and electronic properties of carbon nanotubes. When do you anticipate nanotubes being embedded in materials? When will we see the first computers with nanotube components?

There is actually a nanotube-based memory that may hit the market next year. This is a dense, two-dimensional device that has attractive properties, but three-dimensional devices are still about a decade and a half away. There are alternatives to nanotubes, such as DNA itself. DNA has potential uses outside of biology because of its affinity for linking to itself, and it could also be used structurally. But the full potential of three-dimensional structures based on either carbon nanotubes or DNA is a circa-2020 technology.

Question 9: Most predictions of future technological developments have been inaccurate. What techniques do you use to improve the accuracy of your prognostications?

I have a team of people who gather data on many different industries and phenomena, and we build mathematical models. More and more areas of science and technology are now measurable in information terms. I use a data-driven approach, and I endeavor to build theoretical models of why these technologies progress. I have this theory of the law of accelerating returns, which is a theory of evolution, and I then try to build mathematical models of how it applies to different phenomena and industries. Most futurists don’t use this type of methodology, and some just make guesses.
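[To illustrate the kind of data-driven trend modeling described in this answer – fitting exponential rather than linear models to measured data – here is a minimal sketch that fits an exponential to a short series of yearly measurements via a least-squares fit on the logarithms, then reports the implied annual growth factor and doubling time. The data values are invented purely for illustration; only the method, not the numbers, reflects the methodology discussed above.]

```python
import math

# Hypothetical yearly measurements of some capability (for example, base pairs
# of DNA sequenced per dollar). These values are invented for illustration.
years = [2000, 2001, 2002, 2003, 2004, 2005]
values = [1.0, 2.1, 3.9, 8.2, 15.8, 33.0]

# Fit log(value) = a + b * year by ordinary least squares; an exponential
# trend appears as a straight line on a logarithmic scale.
n = len(years)
logs = [math.log(v) for v in values]
mean_x = sum(years) / n
mean_y = sum(logs) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, logs)) / \
    sum((x - mean_x) ** 2 for x in years)

annual_growth_factor = math.exp(b)      # multiplicative growth per year (~2x here)
doubling_time_years = math.log(2) / b   # years per doubling (~1 year here)

print(f"annual growth factor: {annual_growth_factor:.2f}x")
print(f"doubling time: {doubling_time_years:.2f} years")
```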
Many futurists are simply unaware of these trends – they make linear models. It is often said that we overestimate what can be done in the short term, because developing technologies turns out to be more difficult than we expect, but dramatically underestimate what can be achieved in the long term, because people think linearly.

Question 10: The government has traditionally played a pivotal role in developing new technologies. Is the U.S. government doing enough to support the nascent nanotechnology or AI industries? Do these industries require government support at this point?

These industries will both be propelled forward by enormous economic incentives. Nanotechnology will be able to create almost any physical product we need at very low cost, and these products will be quite powerful because they will have electronics and communications embedded throughout. So there is tremendous economic incentive to develop nanotechnology, and the same is true of artificial intelligence. Basic research has an important role to play – the Internet, for instance, came out of the ARPANET. The new worldwide mesh concept – of having every device not simply connected to the net but actually becoming a node on the net, sending and receiving both its own and other people’s messages – arose out of a Department of Defense concept and is now being adopted by civilian, commercial corporations. DARPA is actually playing a forward-looking role in such technologies as speech recognition and other AI fields.

In terms of national competitiveness, the key issue is that we are not graduating enough scientists and engineers. The numbers of individuals receiving advanced technical degrees are growing dramatically in China, Japan, Korea, and India – the figures actually resemble exponential curves. China in particular is greatly outpacing the U.S. in producing scientists and engineers, at both the undergraduate and doctoral levels, in every scientific field. Although this is a real concern, there is now one integrated world economy, so we shouldn’t see this problem as simply the U.S. versus China. I am glad to see China and India economically engaged, and this isn’t a zero-sum game – Chinese engineers are creating value. But to the extent that we care about issues such as national competitiveness, this is a concern. In the end, however, it comes down to which fields teenagers choose to enter. The U.S. does lead in the application of these technologies. I speak at many conferences each year – music conferences, graphic arts conferences, library conferences, and so on – yet every conference I attend reads like a computer conference, because those fields are so heavily engaged with computer technology. The level of computer technology used across a great diversity of fields is quite impressive.

Question 11: How do you envision the world in 2015? What economic and technological predictions would you make for that year?

By 2015, computers will be largely invisible, and they will be very small. We will be dealing with a mesh of computing and communications embedded in the environment and in our clothing. People in 2005 face a dilemma: on the one hand, they want large, high-resolution displays, which they can obtain by buying expensive 72-inch flat-panel plasma monitors; on the other hand, they want portable devices, which have limited display capabilities. By 2015, we will have images projected directly onto our retinas.
This allows for a very high-resolution display that encompasses the entire visual field of view yet is physically tiny. These devices exist in 2005 and are used in high-performance applications, such as putting a soldier or a surgeon into a virtual reality environment. So in 2015, if we want a large, high-resolution computer image, it will just appear virtually in the air. We will have augmented reality, including pop-up displays explaining what is happening in the real world. We will be able to go into full-immersion, visual-auditory virtual reality environments. We will have usable language technologies; these are beginning to emerge, and by 2015 they will be quite effective. In this visual field of view, we will have virtual personalities with which you can interact. Computers will have virtual assistants with sufficient command of speech recognition that you can discuss subjects with them. Search engines won’t wait to be asked – they will track your conversation and attempt to anticipate your needs and help you with routine transactions. These virtual assistants won’t be at the human level – that won’t happen until we have strong AI – but they will be useful, and many transactions will be mediated by them. Computing will be very powerful, and it will be a mesh of computing: individuals who need the power of a million computers for 25 milliseconds will be able to obtain that as needed. By 2015, we will have real traction with nanotechnology. I believe that we will be well on the way to overcoming major diseases, such as cancer, heart disease, and diabetes, through the biotechnology revolution that we talked about above. We will also make progress in learning how to stop and even reverse the aging process.

This interview was conducted by Sander Olson. The opinions expressed do not necessarily represent those of CRN.