In this century, humanity is predicted to undergo a transformative experience, the likes of which have not been seen since we first began to speak, fashion tools, and plant crops. This experience goes by various names – “Intelligence Explosion,” “Accelerando,” “Technological Singularity” – but they all have one thing in common.
They all come down to the same hypothesis: that accelerating technological change and the growth of knowledge will radically transform humanity. In its various forms, this theory cites concepts like the iterative nature of technology, advances in computing, and historical instances where major innovations led to explosive growth in human societies.
Many proponents believe that this “explosion” or “acceleration” will take place sometime during the 21st century. While the specifics are subject to debate, there is general consensus among proponents that it will come down to developments in the fields of computing and artificial intelligence (AI), robotics, nanotechnology, and biotechnology.
In addition, there are differences in opinion as to how it will take place, whether it will be the result of ever-accelerating change, a runaway acceleration triggered by self-replicating and self-upgrading machines, an “intelligence explosion” caused by the birth of an advanced and independent AI, or the result of biotechnological augmentation and enhancement.
Opinions also differ on whether or not this will be felt as a sudden switch-like event or a gradual process spread out over time which might not have a definable beginning or inflection point. But either way, it is agreed that once the Singularity does occur, life will never be the same again. In this respect, the term “singularity” – which is usually used in the context of black holes – is quite apt because it too has an event horizon, a point in time where our capacity to understand its implications breaks down.
Definition
The use of the term “singularity” in this context first appeared in a 1958 tribute written by mathematician Stanislaw Ulam about the life and accomplishments of John von Neumann. While recounting his late friend’s views, Ulam described how the two had once talked about accelerating change:
“One conversation centered on the ever-accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”
However, the idea that humanity may one day achieve an “intelligence explosion” has some precedent that predates Ulam’s description. Mahendra Prasad of UC Berkeley, for example, credits 18th-century mathematician Nicolas de Condorcet with making the first recorded prediction, as well as creating the first model for it.
In his essay, Sketch for a Historical Picture of the Progress of the Human Mind: Tenth Epoch (1794), de Condorcet expressed how knowledge acquisition, technological development, and human moral progress were subject to acceleration:
“How much greater would be the certainty, how much more vast the scheme of our hopes if… these natural [human] faculties themselves and this [human body] organization could also be improved?… The improvement of medical practice… will become more efficacious with the progress of reason…
“[W]e are bound to believe that the average length of human life will forever increase… May we not extend [our] hopes [of perfectibility] to the intellectual and moral faculties?… Is it not probable that education, in perfecting these qualities, will at the same time influence, modify, and perfect the [physical] organization?”
Another forerunner was British mathematician Irving John Good, who worked at Bletchley Park with Alan Turing during World War II. In 1965, he wrote an essay titled “Speculations Concerning the First Ultraintelligent Machine,” in which he contended that a machine smarter than any human could design even better machines, setting off what he called an “intelligence explosion.”
In 1965, American engineer Gordon Moore noted that the number of transistors on an integrated circuit (IC) can be expected to double every year (later updated to roughly every two years). This has come to be known as “Moore’s Law” and is used to describe the exponential nature of computing in the latter half of the 20th century. It is also referenced in relation to the Singularity and why an “intelligence explosion” is inevitable.
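As a rough illustration of what that doubling implies, the short sketch below projects transistor counts forward under a fixed two-year doubling period. The starting point (the roughly 2,300-transistor Intel 4004 of 1971) is an illustrative assumption, not a figure from the article.

```python
# A back-of-the-envelope sketch of Moore's Law as described above: transistor
# counts doubling roughly every two years from an illustrative 1971 baseline.
def transistors(year, base_year=1971, base_count=2300, doubling_period=2):
    """Projected transistor count assuming a fixed doubling period in years."""
    return base_count * 2 ** ((year - base_year) / doubling_period)

for year in (1971, 1991, 2011, 2021):
    print(year, f"{transistors(year):,.0f}")
# 2021 works out to ~7.7e10 transistors, the same order of magnitude as
# today's largest commercial chips.
```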
In 1983, Vernor Vinge popularized the theory in an op-ed piece for Omni magazine where he contended that rapidly self-improving AI would eventually reach a “kind of singularity,” beyond which reality would be difficult to predict. It was also here that the first comparison to a black hole was made:
“We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between … so that the world remains intelligible.”
How and when?
Vinge popularized the Technological Singularity further in a 1993 essay titled “The Coming Technological Singularity: How to Survive in the Post-Human Era.” In addition to reiterating the nature of the concept, Vinge also laid out four possible scenarios for how this event could take place. They included:
Superintelligent Computers: This scenario is based on the idea that human beings may eventually develop computers that are “conscious.” If such a thing is possible, said Vinge, an artificial intelligence far more advanced than humanity would almost certainly follow.
Networking: In this scenario, large networks of computers and their respective users would come together to constitute superhuman intelligence.
Mind-Machine Interface: Vinge also proposed a scenario where human minds merge with computers, augmenting natural intellect to the point of superhuman intelligence.
Guided Evolution: It is also possible, said Vinge, that biological science could advance to the point where it would provide a means to improve natural human intellect.
But perhaps the most famous proponent of the concept is noted inventor and futurist Ray Kurzweil. His 2005 book, The Singularity Is Near: When Humans Transcend Biology, is his best-known work and expands on ideas presented in earlier books, most notably his “Law of Accelerating Returns.”
This law is essentially a generalization of Moore’s Law and states that the rate of growth in technological systems increases exponentially over time. He further cited how an exponential increase in technologies like computing, genetics, nanotechnology, and artificial intelligence would culminate and lead to a new era of superintelligence.
“The Singularity will allow us to transcend these limitations of our biological bodies and brains,” wrote Kurzweil. “There will be no distinction, post-Singularity, between human and machine.” He further predicted that the Singularity would take place by 2045, since this was the earliest point at which computerized intelligence would significantly exceed the sum total of human brainpower.
To see these trends at work, futurists and speculative thinkers generally point to examples of major innovations from human history, oftentimes focusing on technologies that have made the way we convey and consume information exponentially faster. In all cases, the purpose is to show how the time lag between innovations keeps getting shorter.
Accelerating change
One key school of thought has to do with the way data is shared, also known as “Message Compression.” The basic idea is that the growing amount of data humans create and share can be measured as a ratio of time to the number of people a given medium allows us to reach.
For instance, cave paintings are the earliest known means of indirect (i.e., non-verbal) communication, with some of the earliest dated to ca. 40,000 years ago. These paintings – which could have been historical records, ancestral tales, or the earliest catalogs of then-known constellations – were likely witnessed only by members of the extended-family communities that crafted them.
The next major innovation emerged during Neolithic times, ca. 9,000 BCE, in the form of symbols that resemble physical objects (aka. pictograms). Around 5,500 years ago, these gave way to ideograms, written symbols that convey concepts rather than objects.
Then came the first alphabets, such as Phoenician script, roughly 3,000 years ago. What followed were mass-printing techniques, which began with woodblock printing (ca. 3rd century), followed by moveable type in the 11th century and the printing press in the 15th century. The optical telegraph appeared in 1792, and the electrical telegraph of the 1830s and 1840s enabled near-instant messages across vast distances.
Then came Alexander Graham Bell’s telephone in 1876, which allowed for auditory messaging over vast distances. Radio communications followed by the turn of the century, which took audio communications even farther. This was accompanied in short order by the transmission of moving pictures and television (combining audio and visual messaging) by the mid-1920s.
By 1931, H.L. Hazen and Vannevar Bush of MIT had built the Differential Analyzer, the most sophisticated analog computer of its day. By 1939, the first electromechanical digital computers were introduced. During the 1940s (and World War II), computers based on vacuum tubes, digital electronic circuits, and stored programs were created, with transistorized machines following in the 1950s.
During the 1950s, the first integrated circuits were invented, and by the late 1960s, the first desktop-sized computers began to emerge. In 1975, IBM released one of the first portable computers (the IBM 5100), and the first laptop-style machines followed in the early 1980s. In the new millennium, smartphone use and mobile computing became prolific, as did the information technology (IT) sector.
To put this in perspective, analysts often compare modern smartphones to the computers of the Apollo era. The NASA guidance computers that took astronauts to the Moon six times between 1969 and 1972 had the equivalent of 73,728 bytes of memory, whereas a modern smartphone with 32 GB of memory has roughly 430,000 times as much.
NASA has also weighed in on this progress, noting that the Voyager 1 and 2 spacecraft, which explored the outer planets and became the first human-made objects to reach interstellar space, each carry just 69.63 kilobytes of memory. By comparison, Apple’s iPhone 5 (released in 2012) has up to 16 gigabytes of memory, about 240,000 times greater.
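For readers who want to check the arithmetic, the quick sketch below reproduces these comparisons using the byte counts quoted above; the small differences from the rounded figures in the text come down to decimal versus binary definitions of “kilobyte” and “gigabyte.”

```python
# Rough check of the memory comparisons above, treating KB and GB as decimal
# units (10**3 and 10**9 bytes) for simplicity.
apollo_bytes = 73_728            # Apollo guidance computer (as quoted)
voyager_bytes = 69_630           # ~69.63 KB per Voyager probe (as quoted)
smartphone_bytes = 32 * 10**9    # a 32 GB modern smartphone
iphone5_bytes = 16 * 10**9       # a 16 GB iPhone 5

print(f"{smartphone_bytes / apollo_bytes:,.0f}")   # ~434,000 (article: ~430,000)
print(f"{iphone5_bytes / voyager_bytes:,.0f}")     # ~230,000 (article: ~240,000)
```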
In short, people today consume and produce amounts of data that would absolutely astound people who were alive just two generations ago. At this rate, adults just a single generation from now may be living in a world that is virtually unfathomable to us today.
The “Information Age” and “Big Data”
Another key indicator that a Singularity is on the horizon is the way information technology and information production have vastly increased over time. With advances like computing, networking, the internet, and wireless technology, the number of people connected to countless others has grown exponentially in a very short time.
Between 1990 and 2016, the number of people worldwide with internet access grew from 2.6 million to roughly 3.4 billion, an increase by a factor of about 1,300.
According to a 2018 report by the UN’s International Telecommunication Union (ITU), 90% of the global population will have access to broadband internet services by 2050, thanks to the growth of mobile devices and satellite internet services. That’s 8.76 billion people, more than double the roughly 4 billion people (about half of the global population) who have access right now.
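These connectivity figures can be sanity-checked in a couple of lines; the numbers below are simply the ones quoted in the text.

```python
# Connectivity figures as quoted above.
users_1990 = 2.6e6      # internet users in 1990
users_2016 = 3.408e9    # internet users in 2016
users_now = 4.0e9       # roughly half the global population today
users_2050 = 8.76e9     # ITU projection for 2050

print(f"{users_2016 / users_1990:,.0f}x")                     # ~1,311-fold growth since 1990
print(f"{(users_2050 / users_now - 1) * 100:.0f}% increase")  # ~119%, i.e. a bit more than double
```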
Another key metric is the amount of data generated over time. During the 2010 Techonomy Conference, Google CEO Eric Schmidt claimed that humanity created as much information every two days as it had between the dawn of civilization (ca. 6000 years ago) and 2003. This, he estimated, was in the vicinity of five exabytes (EB) of data, or five quintillion (10¹⁸) bytes.
By the 2010s, humanity had entered what is known as the “Zettabyte Era,” in which the amount of data generated each year exceeds one sextillion (10²¹) bytes. According to Statista, the volume of data created annually grew from 2 ZB in 2010 to 64.2 ZB in 2020, an average increase of roughly 40% per year, and is projected to reach 181 ZB by 2025, a further increase of roughly 23% per year.
Similarly, the amount of data stored has increased at a prodigious rate. Between 2005 and 2020, storage capacity worldwide grew from 200 EB to 6.7 ZB, an average increase of roughly 26% a year. At an estimated compound annual growth rate of 19.2%, global storage capacity is expected to reach 16.12 ZB by 2025.
What will come beyond that? Given the current rate of progress, humanity is likely to be entering into the “Yottabyte Era” (10²⁴ bytes) before 2050 rolls around. But given that the rate itself is subject to acceleration, it’s not out of the question for this milestone to come much sooner than mid-century.
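The growth rates above follow from the standard compound annual growth rate (CAGR) formula, and the same formula gives a rough sense of when annual data creation might pass the yottabyte mark. The sketch below simply re-derives the figures quoted in the text.

```python
import math

# Compound annual growth rate: (end / start) ** (1 / years) - 1
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

print(f"{cagr(2, 64.2, 10):.0%}")     # data created, 2010-2020: ~41% per year
print(f"{cagr(64.2, 181, 5):.0%}")    # projected 2020-2025: ~23% per year
print(f"{cagr(0.2, 6.7, 15):.0%}")    # storage, 2005-2020 (200 EB = 0.2 ZB): ~26% per year

# Extrapolating from 181 ZB in 2025: one yottabyte is 1,000 ZB.
for rate in (0.23, 0.41):
    years_needed = math.log(1000 / 181) / math.log(1 + rate)
    print(f"{rate:.0%} growth -> ~{2025 + years_needed:.0f}")   # roughly 2030-2033
```

On these numbers, even without any further acceleration, annual data creation would reach the yottabyte scale in the early 2030s, comfortably ahead of mid-century.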
All of this data forms the basis for human knowledge, and as more and more people gain high-speed internet access and find this staggering amount of information essentially at their fingertips (or possibly directly interfaced with their brains), this collective library could serve as a launchpad of sorts for a Technological Singularity.
Artificial Intelligence
Another anticipated pathway to the Singularity is the development of advanced artificial intelligence (AI). The concept was popularized by famed mathematician and codebreaker Alan Turing, who raised the question “Can machines think?” in his 1950 paper, “Computing Machinery and Intelligence.” It was also in this paper that he devised his famous “Imitation Game” (aka. the “Turing Test”).
The game, wrote Turing, would consist of a human interrogator attempting to distinguish between a computer and a human based solely on their typed responses to a series of questions. As Turing explained:
“We now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think?'”
In 1995, Prof. Stuart Russell (UC Berkeley) and Peter Norvig (Director of Research at Google) published a leading textbook on the study of AI, titled Artificial Intelligence: A Modern Approach. In it, they drew a distinction between computer systems that think and act like humans versus those that think and act rationally.
In recent decades, this distinction has become more evident thanks to supercomputers, machine intelligence, deep learning, and other applications that are capable of processing information and discerning patterns. The progress towards “machines that think” has kept pace with improvements in computing and led to programs capable of far surpassing human intelligence in some respects.
In 1959, efforts to develop AI began in earnest with the invention of the General Problem Solver (GPS), a computer program created by economist and cognitive psychologist Herbert A. Simon, together with J.C. Shaw and Allen Newell of the RAND Corporation. This program, they hoped, would lead to the development of a “universal problem-solver machine.”
A couple of years earlier, in 1957, American psychologist Frank Rosenblatt had built the first computer designed to mimic a neural network (the Mark 1 Perceptron). The machine demonstrated the capacity to learn through trial and error, earning Rosenblatt the unofficial title of “Father of Deep Learning.”
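To give a flavor of the trial-and-error learning the Perceptron pioneered, here is a minimal sketch of the classic perceptron learning rule, trained on the logical AND function. The inputs, learning rate, and epoch count are illustrative choices, not a description of the original Mark 1 hardware.

```python
# A minimal perceptron learning AND by trial and error: make a guess, compare
# it to the target, and nudge the weights in the direction of the error.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # inputs
y = np.array([0, 0, 0, 1])                       # AND labels
w = np.zeros(2)                                  # weights
b = 0.0                                          # bias
lr = 0.1                                         # learning rate

for epoch in range(20):
    for xi, target in zip(X, y):
        prediction = 1 if xi @ w + b > 0 else 0
        error = target - prediction              # +1, 0, or -1
        w += lr * error * xi                     # nudge weights toward the target
        b += lr * error

print(w, b)
print([1 if xi @ w + b > 0 else 0 for xi in X])  # should reproduce AND: [0, 0, 0, 1]
```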
In the 1980s, “backward propagation of errors” algorithms (backpropagation for short) were integrated with neural networks, allowing them to train faster and solve problems previously thought to be unsolvable. These became a mainstay of virtually all subsequent neural networks and AI applications.
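The sketch below shows, under simplifying assumptions, why backpropagation mattered: a tiny two-layer network trained with it can learn XOR, a problem a single-layer perceptron provably cannot solve. The layer sizes, learning rate, and iteration count are arbitrary illustrative choices.

```python
# Backpropagation on XOR with a tiny two-layer network (not the original 1980s code).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)    # hidden layer (4 units)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)    # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 1.0

for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back through each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # typically converges to roughly [0, 1, 1, 0]
```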
In 1996, IBM unveiled Deep Blue, a chess-playing computer that went on to defeat world chess champion Garry Kasparov in a 1997 rematch. IBM’s DeepQA project later built Watson, a question-answering supercomputer that went on to compete on Jeopardy! in 2011, defeating champions Ken Jennings and Brad Rutter.
In 2014, Google acquired the British tech company DeepMind, which combined machine learning and neuroscience to create general-purpose learning algorithms. In 2016, the company’s AlphaGo program beat world-class Go champion Lee Sedol four games to one in a five-game match.
In 2015, the Chinese company Baidu released a paper explaining how its Minwa supercomputer set a new record for image recognition, beating a previous record set by Google. This was made possible by a type of deep learning known as a convolutional neural network, which allowed the system to identify and categorize images with greater accuracy than the average human.
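A convolutional network’s core trick is to slide small filters across an image to pick out local patterns. The toy sketch below applies a hand-written vertical-edge filter to a tiny synthetic image; real CNNs such as the one described above learn their filters from data, so this is only an illustration of the underlying operation.

```python
# Core operation of a convolutional layer: slide a small kernel over an image
# and record how strongly each patch matches the pattern the kernel encodes.
import numpy as np

def convolve2d(image, kernel):
    """Valid (no padding) 2D convolution/cross-correlation."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# Toy 6x6 "image": dark on the left, bright on the right.
image = np.array([[0, 0, 0, 9, 9, 9]] * 6, dtype=float)
vertical_edge = np.array([[1, 0, -1],
                          [1, 0, -1],
                          [1, 0, -1]], dtype=float)

print(convolve2d(image, vertical_edge))  # large-magnitude responses where the edge falls
```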
Today, supercomputers and machine learning are often used by governments, research institutes, and the private sector to conduct “data mining” – finding anomalies, patterns, and correlations within large data sets. This is necessary in order to deal with the growing volume of information that is created on a daily basis and to predict outcomes.
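As a minimal illustration of the anomaly-finding side of data mining, the sketch below flags statistical outliers in synthetic sensor readings using z-scores. It is a generic textbook technique, not a description of any particular government or commercial system.

```python
# Flag anomalies as readings that sit far from the mean, measured in standard
# deviations (z-scores). The data and injected outliers are synthetic.
import numpy as np

rng = np.random.default_rng(42)
readings = rng.normal(loc=100.0, scale=5.0, size=10_000)   # synthetic sensor data
readings[[123, 4567, 8910]] = [160.0, 35.0, 250.0]         # inject three anomalies

z_scores = (readings - readings.mean()) / readings.std()
anomalies = np.where(np.abs(z_scores) > 5)[0]              # flag extreme outliers
print(anomalies)                                           # -> [ 123 4567 8910]
```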
In 1985, Prof. Ray J. Solomonoff – inventor of algorithmic information theory – wrote an essay detailing what he saw as the seven developmental milestones that needed to be achieved before AI could be fully realized. They were:
- The creation of AI as a field, the study of human problem solving (aka. “cognitive psychology”), and the development of large parallel computers (similar to the human brain).
- A general theory of problem-solving that consists of machine learning, information processing and storage, methods of implementation, and other novel concepts.
- The development of a machine that is capable of self-improvement.
- A computer that can read almost any collection of data and incorporate most of the material into its database.
- A machine that has a general problem-solving capacity near that of a human in the areas for which it has been designed (i.e., mathematics, science, industrial applications, etc.)
- A machine with a capacity near that of the computer science community.
- A machine with a capacity many times that of the computer science community.
In short, Solomonoff believed the development of AI would consist of building machines that could mimic human brain functions (learning, information retention, problem-solving, self-improvement, etc.) and eventually surpass them. At the time of writing, he asserted that all but the first of these milestones still needed to be accomplished.
Based on this roadmap, we are drawing closer to realizing true artificial intelligence: modern supercomputers can already outperform human beings in many respects, though not yet in areas like abstract or intuitive reasoning. Nevertheless, we are edging ever closer to the day when machine intelligence could very well surpass humanity.
When that happens, scientific research and development will accelerate, leading to bold new possibilities. If these machines are tasked with creating more advanced versions of themselves, they may have no reason to stop once they achieve human-level general intelligence. They could simply continue improving themselves until reaching what Kurzweil called an artificial superintelligence “lift-off,” a definitive inflection point marking the Technological Singularity.
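That “lift-off” dynamic can be caricatured with a deliberately simple toy model in which each generation of a system builds a slightly better successor and, crucially, gets better at improving itself as it goes. The starting values and rates below are arbitrary and carry no predictive weight.

```python
# A toy model (not a claim about real AI) of runaway self-improvement: each
# generation boosts capability, and the size of the boost itself keeps growing.
capability = 1.0          # arbitrary units; 1.0 = "human-level" in this toy
improvement_rate = 0.05   # how much better each generation makes its successor

for generation in range(1, 31):
    capability *= (1 + improvement_rate)
    improvement_rate *= 1.1          # better designers improve faster
    if generation % 10 == 0:
        print(generation, round(capability, 1), round(improvement_rate, 3))
# Capability grows slowly at first, then explosively - the qualitative shape
# proponents have in mind when they talk about a "lift-off."
```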
—
But of course, computing, information production, and the way advancements always appear to be coming faster are just a few of the pathways that could be leading us to the so-called Singularity. In part II, we will examine how advances in nanotechnology and medical technology are also leading us towards a point in time beyond which the future will be difficult to predict.
We will also look at whether this predicted revolution would arrive suddenly or unfold gradually, and what the implications could be. Last, but not least, we’ll look at what critics and doubters have had to say, and how the Singularity stacks up against other predictions that never seem to come true.