Types of Singularities

Jim: You have defined “The Singularity” as that time when technology “goes to infinity.” Then things are supposed to become awesome, but also unpredictable. Most of the singularity writers go on to tell nice stories about what will happen. If the singularity is unpredictable, how can they tell stories about it?

James: Writing attracts an audience mostly because of its persuasiveness, not its consistency. To give singularity proponents credit, I think they use unpredictability as rhetoric to evoke the wonders of the singularity, not as a precise description. Then they use stories to tout the enticements of the heaven they are promoting. I am not a “singularitarian,” by which I mean I don’t worship “the singularity.” The “the” indicates that there is only one. I see many potential types. Some are wonderful; some are awful. There are several ways to measure where they stand on the scale from wonderfulness to awfulness.

Puck (Socrates): How do you do that?

James: For rhetorical simplicity, I like to denominate in human lives enabled. Some singularities could enable trillions of human lives. Others work their wonders in other ways, without necessarily increasing the population of humans. Then we have to consider something like QALYs, Quality Adjusted Life Years. These were developed by medical people to rate the value of medical intervention. Consider as an example a patient with a slowly developing brain tumor that will kill him in a few years. An operation could extend his life but reduce his ability to speak. Is the extension of life worth its reduction in quality? Some singularities involve not more but longer life, or various changes in humans or substitutes for humans that may or may not enhance quality of life. For example, if we extend a human mind by interfacing a human brain with a supercomputer, does this improve the quality of life of the resulting transhuman?
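The QALY idea above is just years of life weighted by quality. A minimal sketch of the arithmetic, with entirely made-up quality weights for the brain-tumor example (real QALY weights come from clinical instruments, not from this dialogue):

```python
# Hypothetical illustration of Quality Adjusted Life Years (QALYs):
# QALYs = (years of life) * (quality weight, where 0.0 = dead, 1.0 = full health).

def qalys(years, quality_weight):
    """Return quality-adjusted life years for a span of life."""
    return years * quality_weight

# The brain-tumor patient, with invented numbers: 3 remaining years at
# full quality without surgery, versus 15 years with impaired speech.
no_operation = qalys(3, 1.0)     # 3.0 QALYs
with_operation = qalys(15, 0.7)  # 10.5 QALYs

print(no_operation, with_operation)
```

Under these assumed weights the operation comes out ahead, but the whole question is whether 0.7 is the right weight for losing the ability to speak.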

Jim: I suppose we could ask him.

Puck (Socrates): Is there some basic principle, so we can know in advance?

James: It would be nice to have a philosophical principle for determining what is good in life. That is somewhat of a digression right now.

Puck (Socrates): I respect your strategy for developing your thesis, but you know that it will detract from that thesis if you can't answer my questions.

James: Agreed. We will answer that question later.

Puck (Socrates): What types of lives might result from different types of singularities?

James: Space resources could enable trillions of human lives. “Uploading” could as well, if you consider an uploaded mind to be equivalent to a human mind. Uploading involves recording the entire contents of a human mind and then running it on a supercomputer, so the mind is put into the computer. The computer could simulate an environment, or the mind could control a robot body in our world. A mind running on a computer could live forever, as long as the computer could be kept running. If it lives in a simulated environment, that environment could have marvelous properties. Multitudes of minds could communicate, interact, and merge in marvelous ways. They could improve themselves with few limits. One of the biggest limits is knowing whether a proposed improvement is truly an improvement.

Uploading is one of the most popular singularity predictions. I personally think that a high-resolution upload, one that I would consider to really be me, is probably impossible. It would seem necessary to scan the state of trillions of synapses and their interconnections. I doubt that this can be done remotely. We might invade the brain with trillions of nanobots, but I don’t see how they could trace out the interconnections. We might freeze, slice, and scan the brain. This might work, but it requires slicing with a precision and a lack of trauma that may not be available. Everything we can imagine is not automatically possible. But this is a digression. Another prediction is the creation of artificial, robot minds. How do we rate the QALYs of a robot mind?

Jim: I suppose that depends on its design. Robot servants would increase the quality of life of the humans that they serve.

Puck (Socrates): Does being served increase one’s quality of life?

James: Then there is life extension, giving us longer life. That automatically increases QALYs since they are denominated in years, as long as the quality of life remains consistent. There is also the potential of genetic or artificial enhancements to improve current humans.

Jim: How do we know that it is an improvement?

James: We will have to develop some standards.

Puck (Socrates): So you do see the value of philosophy.

James: Yes. I hope that standards can be developed by diverse consensus, by lots of people reacting to developing possibilities. But right away I propose a simple standard, that of Utilitarianism, the greatest good for the greatest number. That automatically endorses the value of large numbers of humans.

Jim: Some people would not like that, fearing overpopulation.

James: Overpopulation is a problem in a limited world. A singularity removes most of the limits.

Jim: It does not remove all of the limits. The universe is ultimately limited. I saw somewhere that exponential growth of population would be limited at least when it resulted in a sphere of humanity expanding at the speed of light. It couldn’t expand any faster.
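Jim's point has a simple mathematical core: a population doubling at any fixed rate grows exponentially, while a sphere expanding at light speed grows only as the cube of time, so the exponential must eventually outrun it. A sketch of that comparison, with an arbitrary doubling time and a crude assumption of fixed capacity per cubic light-year:

```python
import math

def population(t_years, doubling_time=100):
    """Population growing exponentially, doubling every `doubling_time` years."""
    return 2 ** (t_years / doubling_time)

def light_sphere_volume(t_years):
    """Volume (cubic light-years) of a sphere expanding at light speed for t years."""
    return (4 / 3) * math.pi * t_years ** 3

# Exponential growth exceeds any polynomial bound eventually. Step forward in
# millennia until population outgrows the available volume (assuming, purely
# for illustration, a capacity of one person per cubic light-year).
t = 1000
while population(t) <= light_sphere_volume(t):
    t += 1000
print(t)  # crossover time under these made-up parameters
```

With a 100-year doubling time the crossover arrives within a few thousand years of expansion, which is why the light-speed limit only bites far beyond the singularities discussed here.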

James: Perhaps, but that is way beyond even most of the singularities we have to consider.
