The Synthetic Republic

16 Jan

This topic will not make sense without at least a basic understanding of the Technological Singularity. The Technological Singularity is best described by I. J. Good:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

I am a firm believer in the Singularity as described by Good. However, since there are numerous ways that the Singularity could come about, I have developed a general definition that covers every contingency:

The Technological Singularity is a hypothetical future event wherein effective available intelligence combined with understanding of said intelligence initiates a rapid intelligence amplification feedback loop.

It is plausible that the Singularity will come at an unknown time by unknown means and from an unknown source. However, there is some reason to believe that it may come about as part of a controlled process. Organizations like the Defense Advanced Research Projects Agency (DARPA) and the National Security Agency (NSA) have exhibited a will and propensity to monitor every living human.

DARPA has the following established mission:

DARPA’s original mission, established in 1958, was to prevent technological surprise like the launch of Sputnik, which signaled that the Soviets had beaten the U.S. into space. The mission statement has evolved over time. Today, DARPA’s mission is still to prevent technological surprise to the US, but also to create technological surprise for our enemies. Stealth is one example where DARPA created technological surprise. (Wikipedia, DARPA)

While the NSA was originally intended to protect Americans from foreign threats, there is little doubt that it has taken it upon itself to protect Americans from everything, including other Americans:

WASHINGTON — In more than a dozen classified rulings, the nation’s surveillance court has created a secret body of law giving the National Security Agency the power to amass vast collections of data on Americans while pursuing not only terrorism suspects, but also people possibly involved in nuclear proliferation, espionage and cyberattacks, officials say. (The New York Times, http://mobile.nytimes.com/2013/07/07/us/in-secret-court-vastly-broadens-powers-of-nsa.html?hpw=&pagewanted=all&_r=0)

The existence and behavior of organizations like DARPA and the NSA suggest that the United States fully intends to remain in control of an unfolding Singularity. The NSA has established a vast data center in Utah, possibly to help carry out this mission. Its ostensible purpose is to collect spy data and metadata, but its actual purpose is unclear.

NSA Data Center Utah

There is some indication that the United States government may be forming something like a “Manhattan Project” for artificial intelligence (AI) through the framework of Google and possibly some other corporations. Google has bought the robotics company Boston Dynamics, famous for developing robots like the LS3 and Atlas that are apparently intended for military use; Boston Dynamics has also received a great deal of funding from DARPA. It seems extremely unlikely that the United States government has suddenly turned over important work to Google with no strings attached. Google has also exhibited a good deal of strange and suspicious behavior, including the acquisition of four floating barges, built between 2010 and 2012, whose precise function has never been divulged. Google has also been hiring numerous AI experts such as Geoffrey Hinton and Ray Kurzweil. It is clear that the United States government has been spying on U.S. citizens via Google and other online services; it is not entirely clear how complicit Google and the others are in these operations.

If the United States government intends to control the Technological Singularity, it must intend to set up some kind of system based on this control. It would be unnatural, and indeed un-American, for these people to do away with our existing republic and try to replace it with something else. For this reason I suspect that their machinations will ultimately lead to something I have labeled a synthetic republic (SR).

In a republic, people do not have direct democratic control. Rather, they elect the people who make administrative decisions for them. This form of government has several advantages over a direct democracy. Most people cannot take time from their regular work to oversee things like the building of bridges or national defense. As an example, it would certainly be impossible for someone in Kansas to help write a treaty with a country in the Middle East. People who cannot attend to these responsibilities directly appoint other people who are suitable to the task. Often, people lack the knowledge and expertise to oversee a task directly. However, these people may be able to recognize those professionals whose credentials and experience do make them suitable to the task. Moreover, the selection of representatives in a republic is not made in a vacuum. Potential leaders are vetted and recommended by credentialed organizations and experts whose association with them is appropriately impartial. Leaders in a republican form of government who do not perform well may not be reelected and their ideas may be discredited.

This form of government is not perfect, but it has an excellent track record. The United States has prospered for over 200 years with this kind of system. If it were completely unworkable, the United States would at least have degenerated to third world status.

In an SR, responsibility for governance is removed from direct democracy by one more layer. People vote for those who represent them, but the people they vote for merely oversee the behavior of AI systems and accompanying robots. The AI systems make the administrative decisions and the robots carry them out. The human government is reduced to the role of monitoring these activities and overriding them if they seem to be moving in an undesired direction. Hence, in an SR, AI assumes a role very similar to that of the United States executive branch. Simpler systems comparable to this are already extant. For example, online services like Amazon and eBay are largely automatic: humans decide what to buy and sell, but auctions, sales, and shipping arrangements are monitored and controlled automatically.

The SR would consist of several redundant systems that constantly confer and double-check each other. The human representation in an SR could be quite simple. It might consist of a thousand jurors, representing a thousand separate districts, who require a two-thirds majority to override any decision made by the SR. They could stay in contact through some kind of network where overrides can be proposed and voted on immediately.
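
To make the override mechanism concrete, here is a minimal sketch of how such a vote might be tallied. The thousand districts and the two-thirds threshold come from the description above; everything else, including the class and decision names, is purely an illustrative assumption.

```python
# A minimal sketch of the juror override vote described above.
# The district count and threshold follow the essay; names are illustrative.

from dataclasses import dataclass

DISTRICTS = 1000                 # one juror per district
OVERRIDE_THRESHOLD = 2 / 3       # supermajority needed to overrule the SR

@dataclass
class OverrideProposal:
    decision_id: str             # the SR decision being challenged
    votes_for: int = 0
    votes_against: int = 0

    def cast(self, in_favor: bool) -> None:
        if in_favor:
            self.votes_for += 1
        else:
            self.votes_against += 1

    def passes(self) -> bool:
        # An override succeeds only if at least two-thirds of all seated
        # jurors vote for it; abstentions therefore count against it.
        return self.votes_for >= DISTRICTS * OVERRIDE_THRESHOLD

# Example: 700 of 1000 jurors vote to override -> the SR decision is blocked.
proposal = OverrideProposal("kuiper-belt-quota")
for _ in range(700):
    proposal.cast(True)
print(proposal.passes())  # True
```

Counting abstentions against an override keeps the default firmly in the SR’s favor, which is the point: human intervention should be deliberate and rare.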

There is a somewhat informal observation that has been made by many experts in technology. It is called Moore’s law of mad science. In the words of Eliezer Yudkowsky, co-founder of the nonprofit Machine Intelligence Research Institute, “Every eighteen months, the minimum IQ necessary to destroy the world drops by one point.” This “law” is based on the observation that emerging technology makes it possible for a person with little expertise in a technical field to fully exploit the capabilities of that field. A good example of this law in practice is the use of 3D printers. Recently, designs have been circulated for plastic and metal guns that can be produced by anyone who owns the appropriate 3D printer. Similar devices are being developed for the manufacture of proteins.

Printed Gun

As we move toward the Technological Singularity, it will be possible for people with only a basic knowledge of AI and robotics to exploit the capabilities of these technologies to produce revolutionary and possibly dangerous new technologies. The sad but inevitable fallout of this development is that people simply cannot be allowed to live unsupervised.

In an SR, AI systems will constantly monitor the behavior of every living person in minute detail. Citizens will have no actual privacy, but they will not be monitored directly by humans, so they will not experience a strong sense of intrusion. At one time, such a system would have seemed unthinkable. However, with the advent of Facebook, Twitter, navigation programs, and discount cards, it has become clear that people are willing to sacrifice their privacy to machines if there is sufficient advantage to be gained. This is fortunate, since it is also clear that organizations like the NSA fully intend to relieve people of whatever privacy they have left through the collection of metadata. There is little doubt that the NSA’s metadata will ultimately amount to complete, detailed monitoring.

Many people have expressed fears that an AI system like the one I have described will see people as unnecessary or even as a threat and eliminate them. This fear, however, is an example of anthropomorphism. The human instincts for dominance and vigilance are the result of literally billions of years of competing for food, water, mates, land, shelter, and other resources. Machines will never have to compete for these things and will never develop the associated instincts. The machines will have greater than human intelligence, but they will have only the motivation we give them. If we give them the motivation to look after us in the manner we see fit, that is the only behavior they will exhibit.

As these AI systems become more sophisticated, the human part of their monitoring will become increasingly passive. Humans from all over the world, and eventually all over the solar system, will be selected by popular vote to do the monitoring. Their jobs will be positions of status, but these jobs will amount to little more than the act of verbally or possibly mentally signing off on the designs of the machines.

In addition to human supervision, the machines of the synthetic republic will have large knowledge graphs that govern their behavior. These knowledge graphs will include substantial ethical and moral components, which will grow in extent and sophistication as the AI systems are given more responsibility and leeway. They will be perfected, in part, by intervention from their human supervisors. The AI systems using the graphs will evaluate moral and ethical decisions in a probabilistic manner similar to IBM’s Watson. For this reason, they will never make a tragic decision, though they may occasionally make decisions that are questioned or even overridden by their human supervisors.
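
As a rough illustration of what such probabilistic evaluation might look like, here is a small sketch. The thresholds, the scoring rule, and the candidate actions are all invented for the example; they are not how Watson works or how a real SR would be built.

```python
# A toy sketch of probabilistic evaluation of candidate actions, loosely in
# the spirit of evidence scoring. All numbers and names are illustrative.

CONFIDENCE_FLOOR = 0.90   # below this, the SR defers to human supervisors
HARM_CEILING = 0.01       # maximum tolerated estimated probability of serious harm

def choose_action(candidates):
    """Pick the best-scoring action that clears both thresholds,
    or return None to signal that human review is required."""
    viable = [
        c for c in candidates
        if c["confidence"] >= CONFIDENCE_FLOOR and c["p_harm"] <= HARM_CEILING
    ]
    if not viable:
        return None  # escalate to the human jurors
    return max(viable, key=lambda c: c["confidence"] * (1 - c["p_harm"]))

candidates = [
    {"action": "approve station license", "confidence": 0.97, "p_harm": 0.002},
    {"action": "deny station license",    "confidence": 0.88, "p_harm": 0.001},
]
print(choose_action(candidates))  # -> the "approve" candidate
```

The essential feature is the escalation path: when no candidate clears the thresholds, the system does not guess; it hands the decision to its human supervisors.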

In time, the human supervision of these systems will become so passive that it will be largely symbolic. Humans will always have the comfort, guaranteed by the systems’ moral and ethical knowledge graphs, that they have the last word, but they will rarely see any reason to override them. They will come to assume that the machines have made better decisions than humans could make, and for better reasons.

Much of this discussion is based on the assumption that Singularity level AI will be developed under the auspices of the United States government. However, this need not be the case. Whichever government or body gets there first will have an important historical decision to make. Will they expand into world hegemony, or will they place themselves within a framework of some sort of world government in which they are merely one member? For a brief period of time, the developers of Singularity level AI will have more or less complete control. Hopefully, they will have the moral fiber of George Washington, father of the United States, and turn over the reins of power to a representative body.

Of all the emerging technologies that are subject to Moore’s law of mad science, the most problematic may be people’s own intelligence and their possible ability to improve that intelligence through technological means. People who are too clever may find ways to outsmart the system. Therefore, in addition to being constantly monitored, humans must never be allowed to possess direct, unsupervised control of intelligence that rivals that of the SR. However, the intelligence of the republic is likely to increase at an astonishing rate, so it will be possible for humans to possess private intelligence that increases in proportion.

To prevent mischievous humans from tricking the human representatives who supervise the republic, individuals will also not be allowed to have direct, unsupervised control of intelligence that is significantly greater than that of other humans. A maximum effective private I.Q. of 200 might be the standard. Since I.Q. is measured against a constantly revised mean, the average effective I.Q. of humans will be allowed to increase. Once again, this increase is likely to be rapid, since the I.Q. of the system as a whole will probably grow at an astonishing rate.
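
Because I.Q. is a relative measure, the cap of 200 is itself a moving target. The toy calculation below shows why, using the conventional scale with a mean of 100 and a standard deviation of 15; the raw scores and the population figures are made up for the example.

```python
# An illustrative calculation of an "effective I.Q." measured against a
# constantly revised mean. The 100-point mean and 15-point standard deviation
# are the ordinary I.Q. conventions; the raw-score numbers are invented.

def effective_iq(raw_score: float, population_mean: float, population_sd: float) -> float:
    """Convert a raw ability score into an I.Q. relative to the current population."""
    return 100 + 15 * (raw_score - population_mean) / population_sd

# Today: a raw score of 180 against a population mean of 100 (sd 12)
# sits exactly at the hypothetical cap of 200.
print(effective_iq(180, 100, 12))   # 200.0

# Later, once the population mean itself has risen to 180, the same raw
# score is merely average, so the cap now permits far greater absolute ability.
print(effective_iq(180, 180, 12))   # 100.0
```

In other words, the ceiling limits how far any individual may run ahead of the pack, not how intelligent the pack as a whole may become.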

Machines will not have human-like motivation, but they could conceivably be infected by human-like motivation. All in all, humans are a poor model for synthetic intelligence. Their entire psychological makeup seems devoted to beating other humans. The little boy who excels at football or mathematics invariably expresses a fondness for outperforming his companions. As a group, humans have noble aspirations; as individuals, they are capricious and vindictive. Note that unions like the United States government, with its carefully crafted Constitution, are formed to protect people from their individual vices. For these reasons, humans must not be allowed to copy their brains into AI systems until it can be guaranteed that their emotional makeup will be incapable of influencing the system as a whole. Nor can they be allowed to replace their brains incrementally with AI components, since this would amount to the same thing. However, while humans will not immediately be able to copy their personalities into AI systems, the danger of their doing so may quickly pass. The system as a whole will evolve rapidly, so certainty that copied human personalities will not corrupt the system may come rapidly as well.

I suspect that humans will be unlikely to copy their brains into machines as a means of achieving immortality. First, by the time it is possible to do this, it will be unnecessary: ordinary repairs and replacements of body parts and refurbishing of brains will make corporeal immortality simple and practical. Second, computer copies of human personalities will not be greatly valued. People will have hundreds or even millions of samples of their own and other people’s personalities copied into AI systems for the purpose of performing every manner of experiment, and these copies will be modified, distributed, and deleted with little regard. Finally, I suspect that by the time this procedure is possible, it will be generally understood that a computer copy of a person’s brain will not continue the person’s consciousness. This is a controversial topic, better elaborated upon at another time.

Economics in the SR will be capitalistic, but with a fluid system of automatic regulations that are implemented and enforced by the SR. Individuals will be almost unaware of the regulations unless they pursue projects that require large quantities of resources or involve significant risk. Judging from current trends, money in the SR will almost certainly be in the form of dollars. There will be no taxes. The SR’s revenue will come entirely from the leasing of materials.

As the SR expands into the solar system and further out into the universe, certain principles will be applied to personal ownership. Individuals will be able to own the configuration of objects, but they will not be able to own the materials those objects are made of. Note that under current international law, it is already illegal for individuals and individual governments to own extraterrestrial real estate. All materials will belong to the SR and will only be leased to individuals, at a rate based on which materials are leased and in what volume. As an example, it will be possible for someone to build and own a space station, but the materials it is made of will be leased from the government. The more materials a person leases, the more they will pay per year per mole of materials; a person leasing extremely large quantities will pay prohibitively high prices per mole. These rates will be calculated on the basis of the materials that are generally accessible and the total population. For this reason, it will make the most sense for businesses to be incorporated and have as many owners as possible. Corporations with large numbers of owners who hold roughly equal shares will get the best price on materials.
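
A progressive lease schedule of this kind is easy to sketch. Nothing in the example below is specified by the proposal itself; the base rate, bracket size, and escalation factor are invented purely to show how the per-mole price could climb with the quantity leased.

```python
# A sketch of progressive materials-lease pricing: the per-mole rate rises
# with the quantity an individual leases. All numbers are illustrative.

BASE_RATE = 0.001        # dollars per mole per year for the first bracket
BRACKET = 1_000_000      # moles per pricing bracket
ESCALATION = 1.5         # each successive bracket costs 50% more per mole

def annual_lease_cost(moles_leased: float) -> float:
    """Total yearly lease cost, charged bracket by bracket."""
    cost = 0.0
    remaining = moles_leased
    rate = BASE_RATE
    while remaining > 0:
        in_bracket = min(remaining, BRACKET)
        cost += in_bracket * rate
        remaining -= in_bracket
        rate *= ESCALATION        # the marginal per-mole price keeps climbing
    return cost

# A modest lease stays cheap; a hoarder faces steeply rising marginal rates.
print(round(annual_lease_cost(2_000_000)))      # 2 brackets
print(round(annual_lease_cost(50_000_000)))     # 50 brackets, vastly more per mole
```

Under a schedule like this, a corporation whose many roughly equal owners each lease a modest share stays in the cheap brackets, which is why incorporating with as many owners as possible gets the best price.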

In a post-Singularity world where no one has to work, all production that might be categorized as “labor” will be nearly free. Almost the only expense to anyone for anything will be rent paid for the use of other individuals’ object configurations, fees paid for other individuals’ intellectual property, or lease money paid to the government for the use of materials.

Individuals will interact with the SR through a method of their own choosing. A typical encounter with the SR might go something like the following. In this example, the person is named Phil and he has chosen to have the SR manifest as a female voice named Amy. The low prices in this example are a result of the availability of materials and the near zero cost of labor:

Phil: Amy, I would like to build a space station in orbit around Mars.

SR: What sort of a station do you have in mind?

Phil: I was thinking something on the order of a mile in diameter. It would be a ring station that uses centripetal force for gravity. It would house, perhaps, a thousand people in opulence.

SR: My records show that a license for that kind of station would cost more than your budget allows, and the waiting period would be at least ten years. Is there some particular reason why you want to build a station around Mars? Other locations might be more suitable to your needs.

Phil: No, it doesn’t have to be around Mars. Where do you suggest?

SR: We are trying to encourage colonization of the outer Kuiper Belt. The license for that would be well within your budget and the waiting period would be only about one month.

Phil: That sounds fine.

SR: Here are some popular designs for the kind of station you have in mind. I will show them in order of their ratings, starting with the highest.

The SR produces a screen and displays several images.

Ring Station

Phil: Wait, I like that one. How large is it?

SR: It is about 1.7 km in diameter and could easily house 1,000 people very comfortably. It has particularly convenient space ports. However, there have been some complaints that maintenance access is a bit cumbersome. Of course, robots do all the maintenance, but they tend to consume more fuel. The initial designs for this configuration belong collectively to several individuals and cost $973.

While speaking, the SR shows Phil more detailed images and interior images.

Phil: Could those homes have a more colonial look?

The SR modifies the image.

Phil: Yeah, that’s what I mean. Also, fewer trees.

The SR modifies the image.

Phil: How about one road that meanders through the countryside instead of those two along the edges?

The SR modifies the image.

Phil and the SR confer for several hours on the details. Eventually, Phil has the SR save the details and says that he will get back to it later.

Phil and the SR confer for several days. In addition to designing the station and choosing an exact location, they exchange information about where the materials will come from, how much it will cost to lease the materials, what kind of construction robots will be used, how quickly the robots will replicate, what will be done with the robots after they are done, and so forth.

Phil also receives messages from people who are interested in taking up residence in this kind of station. Some of the messages include suggestions on how the station should be laid out.

Phil and the SR eventually reach the final stages.

Phil: Amy, I think that is exactly right now. What will be the waiting period for that and how much will the license be?

SR: Since the modifications you have proposed are not too drastic, the waiting period will be exactly 19 days. The license will be $19,253. The initial designs for the station you have chosen to modify belong to numerous individuals and cost $973. Other fees come to $153. If you like, you can add a fee for the use of the alterations you have made. These designs are quite common, so I recommend a small fee. The materials lease will be $2138 per year for your first year and will be adjusted annually. Materials prices tend to go up about 2% per year. That includes fuel and repairs, but not modifications. These numbers are based on your stated intention to reside at the station at least 20% of your time or sell it to a qualified buyer who agrees to reside there at least 20% of their time. If you eventually decide to lease the station and forego residing there, an additional fee of $1,123,782 will be charged. If you decide to have the station recycled, the fee will be $154,021. Any decisions to have the station modified or dismantled will be limited by the presence of occupants. I recommend a rent for occupancy of a single unit on this station of $7.23 per year. There is a standard contract for prospective occupants that includes their options if the station is to be sold, leased, or recycled. If you are found to be in breach of contract, this will, of course, severely restrict your access to future resources. The government assumes all liability for approved projects.

Phil: OK, start the waiting period.

SR: I will begin the waiting period and verify your commitment before charging your account. Any additional modifications will either increase the waiting period or change the license fee.

During the 19 day waiting period, Phil thinks of some other modifications he would like to make. Since they are not too drastic, the waiting period is extended for only a couple of days and the fee is not increased.

After the waiting period is over, Phil is contacted, his account is charged, and construction of the station commences. Phil, excited to see his station get underway, flies out to the Kuiper Belt and watches from the sidelines. Other people who have seen his plans and are looking for a comfortable home start applying to him for residence in the station. After a while, the applications become burdensome, so Phil gives the SR some specifications for accepting applicants and lets it take over the process.

After the station has been built and is in use, Phil is asked to rate the plans he used and is reminded of the option of adding a fee for use of his modifications. He is warned that part of the rating for plans is determined by fees, so that he should not add a very high fee if he wants his modifications to be recommended to others.

Notice that the interaction between Phil and the SR requires no paperwork and no signing of documents. There is no burdensome application process, because the SR takes care of all that. Phil does not have to know how to do or make anything. The only time the SR raises any questions that relate to Phil’s expertise is when he wants to do something unorthodox or infeasible. Phil is warned throughout the process of the commitments he is making and what the penalties will be if he does not keep them. He is also reminded of marketing opportunities. He does not have to worry about being in breach of contract as long as he does what the SR tells him to do when it tells him to do it.
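
The figures Amy quotes are, of course, invented, but the arithmetic behind them is simple enough to check, as in the short calculation below; the ten-year projection is just an illustration of the roughly 2% annual escalation she mentions.

```python
# Summing the invented figures from the dialogue above. The only rule applied
# here is the "about 2% per year" escalation Amy quotes for materials prices.

license_fee      = 19_253
design_fee       = 973      # paid to the owners of the initial designs
other_fees       = 153
first_year_lease = 2_138
escalation       = 1.02     # materials prices rise about 2% per year

upfront = license_fee + design_fee + other_fees + first_year_lease
print(upfront)              # 22517 -> Phil's total outlay in the first year

# Projected lease payments over the first ten years of operation.
decade_lease = sum(first_year_lease * escalation ** year for year in range(10))
print(round(decade_lease, 2))   # about 23,410 dollars of lease payments in total
```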

The SR’s moral and ethical knowledge graph will compel it to lead citizens in the most productive direction. The SR will not approve projects that do not look at least marginally profitable, and if a person gets into financial trouble, the SR will liquidate his assets, declare bankruptcy in his name, and put him on a basic income until he gets on better footing. The deterrent to wasting large quantities of resources is a big bill or a loss of options. If you don’t want to risk either paying through the nose or losing options, don’t build big space stations in the Kuiper Belt!
