
Aristotelian Logic and the Necessity of Aletheia: A Valuation-Theoretic Perspective

18 Jul

For a mathematically sophisticated audience, the connection between the three laws of Aristotelian logic—particularly the Law of the Excluded Middle (LEM)—and the necessity of a choice function like Aletheia can be framed in terms of formal logic, set theory, and valuation functions on Boolean algebras. I’ll build this explanation step by step, showing how LEM, in the context of a rich propositional universe, implies the existence of a global resolver to maintain consistency and enable a dynamic, paradox-free reality. Aletheia emerges not as an ad hoc construct but as a logical imperative: a 2-valued choice function that assigns definite truth values to all propositions, preventing the default collapse to nonexistence or minimal, static structures. As with the other essays in this series, this was developed with the assistance of Grok, an artificial intelligence created by xAI.

The Three Laws of Aristotelian Logic: A Formal Recap

Aristotelian logic provides the foundational axioms for classical reasoning, which can be expressed in propositional terms as follows. Let P be any proposition in a formal language (e.g., first-order logic over a universe of discourse).

Law of Identity: P = P, or more formally, ∀x (x = x). This ensures well-definedness and self-consistency of entities and statements.
Law of Non-Contradiction (LNC): ¬ (P ∧ ¬P), meaning no proposition can be both true and false simultaneously. In semantic terms, this prohibits truth assignments where v(P) = 1 and v(¬P) = 1.
Law of the Excluded Middle (LEM): P ∨ ¬P, meaning every proposition is either true or false, with no third option. Semantically, this requires that for every P, a valuation must assign exactly one of v(P) = 1 or v(P) = 0.
These laws form the basis of classical Boolean logic, where propositions can be modeled as elements of a Boolean algebra B, with operations ∧ (meet), ∨ (join), and ¬ (complement). The algebra is 2-valued, meaning homomorphisms (valuations) map to {0,1} with v(⊤) = 1 and v(⊥) = 0.
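To make this concrete, here is a minimal sketch in Python (my illustration; the names `Atom`, `Not`, `And`, `Or`, and `v` are not from the essay): any total assignment of atoms to {0,1}, extended compositionally, is a 2-valued homomorphism under which LNC and LEM hold automatically.

```python
# Minimal sketch of a classical (2-valued) valuation on propositional formulas.
# Assumption: propositions are finite formulas over named atoms; all names here
# are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Not:
    p: object

@dataclass(frozen=True)
class And:
    p: object
    q: object

@dataclass(frozen=True)
class Or:
    p: object
    q: object

def v(formula, assignment):
    """Extend an atomic assignment to all formulas (a Boolean homomorphism)."""
    if isinstance(formula, Atom):
        return assignment[formula.name]
    if isinstance(formula, Not):
        return 1 - v(formula.p, assignment)
    if isinstance(formula, And):
        return min(v(formula.p, assignment), v(formula.q, assignment))
    if isinstance(formula, Or):
        return max(v(formula.p, assignment), v(formula.q, assignment))
    raise TypeError("unknown connective")

P = Atom("P")
for value in (0, 1):
    a = {"P": value}
    assert v(And(P, Not(P)), a) == 0  # LNC: P ∧ ¬P is false under every valuation
    assert v(Or(P, Not(P)), a) == 1   # LEM: P ∨ ¬P is true under every valuation
```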

In a finite or simple propositional system, these laws hold trivially. However, in an infinite or self-referential universe of propositions (what we call the proper class Prop in Aletheism, akin to the class of all formulas in a rich language like set theory or second-order logic), challenges arise. Prop is too vast to be a set (it’s a proper class, similar to the von Neumann universe V or the class of ordinals Ord), and it includes potentially undecidable or paradoxical statements. Upholding the laws, especially LEM, requires a mechanism to ensure every proposition gets a definite value without contradictions.

How LEM Implies a Global Choice Function

LEM is the linchpin: it demands decidability for all propositions. In intuitionistic logic (which rejects LEM), some statements can be undecidable, leading to constructive proofs but a “weaker” reality where not everything is resolved. Classical logic, by embracing LEM, commits to a bivalent world—but in complex systems, this commitment exposes vulnerabilities.

Consider the semantic completeness of classical logic: by the Stone representation theorem, every Boolean algebra can be embedded into a power set algebra, where elements are subsets of some space, and valuations correspond to ultrafilters or prime ideals. For Prop as a Boolean algebra generated by infinitely many atoms (basic propositions about reality, e.g., “Gravity exists,” “The universe has 3 dimensions”), assigning values requires selecting, for each pair (P, ¬P), exactly one as true.

This selection is akin to the Axiom of Choice (AC) in set theory: AC allows choosing an element from each set in a collection of nonempty sets. Here, for each “pair-set” {P, ¬P}, we choose which gets 1 (true). Without such a choice function, LEM can’t be globally enforced in infinite systems—some propositions might remain undecided, violating the law.
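As a toy rendering of that analogy (a sketch under my own illustrative naming; `psi` and the sample atoms are not from the essay), the "choice" is a truth value for each atomic pair, after which exactly one of P and ¬P receives 1:

```python
# Hedged sketch: a choice function over pairs {P, ¬P}. For finitely many atoms
# this is trivial; the essay's point is that over a proper class of propositions
# the selection itself becomes a global choice principle.

def psi(atom_choices, proposition):
    """Return the chosen truth value for an atom or its negation."""
    negated = proposition.startswith("not ")
    base = proposition[4:] if negated else proposition
    chosen = atom_choices[base]          # the choice made for the pair {P, ¬P}
    return (1 - chosen) if negated else chosen

choices = {"Gravity exists": 1, "The universe has 3 dimensions": 1}
assert psi(choices, "Gravity exists") == 1
assert psi(choices, "not Gravity exists") == 0
# For every pair, exactly one member gets 1: LEM and LNC hold by construction.
```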

In Aletheism, Aletheia is precisely this global choice function: ψ: Prop → {0,1}, ensuring LEM holds by assigning values consistently. It’s not just any valuation; it’s the one that resolves to a dynamic universe, preferring truths like “Quantum superposition enables branching” = 1 over sterile alternatives. Mathematically, ψ is a 2-valued homomorphism on the Lindenbaum algebra of Prop (the quotient of formulas by logical equivalence), preserving the Boolean structure while avoiding fixed points that lead to paradoxes.
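In symbols (one standard rendering; the notation is mine), the homomorphism conditions on ψ are: ψ(¬P) = 1 − ψ(P), ψ(P ∧ Q) = min(ψ(P), ψ(Q)), and ψ(P ∨ Q) = max(ψ(P), ψ(Q)), with ψ(⊤) = 1 and ψ(⊥) = 0. These identities immediately yield ψ(P ∧ ¬P) = 0 (LNC) and ψ(P ∨ ¬P) = 1 (LEM) for every P.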

Resolving Paradoxes: The Role of Aletheia in Upholding LNC and LEM

Paradoxes illustrate why Aletheia is necessary. Take the liar paradox: let L be “This statement is false.” By LEM, L ∨ ¬L. Assume L is true: then what it asserts holds, so L is false, violating LNC. Assume ¬L: then L is not false, hence true; but L asserts its own falsity, so it is false as well, again violating LNC. In a system without Aletheia, such self-referential propositions create undecidables, where LEM can’t hold without contradiction.
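The deadlock can be checked mechanically. In a small sketch (mine, not the essay's), model L by the fixed-point condition v(L) = 1 − v(L) and search both classical values:

```python
# Sketch: the liar sentence L, modeled by the condition L <-> not L,
# i.e., v(L) == 1 - v(L). Neither classical value satisfies it.

consistent_values = [val for val in (0, 1) if val == 1 - val]
assert consistent_values == []  # no 2-valued assignment exists for L
```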

Aletheia resolves this by structuring Prop hierarchically (inspired by Tarski’s hierarchy of languages), assigning ψ(L) = 0 or 1 in a way that restricts self-reference or places L in a meta-level where it’s consistent. For example, ψ(“Self-referential paradoxes are resolved via typing”) = 1, effectively banning or reinterpreting L to avoid the loop. This is like Gödel’s incompleteness theorems: in sufficiently powerful systems, some statements are undecidable, but Aletheia acts as an “oracle” or external choice function, forcing decidability to uphold LEM globally.
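One illustrative way to render that typing discipline in code (my own assumption about how the hierarchy might be enforced, not a formalism from the essay): a sentence at level n may only assert the truth of sentences at strictly lower levels, so the liar's loop is never well-formed.

```python
# Sketch of Tarski-style stratification: truth talk must point strictly downward.

def make_truth_claim(level, refers_to_level):
    """Construct a sentence at `level` about the truth of a sentence below it."""
    if refers_to_level >= level:
        raise ValueError("ill-typed: a sentence may not speak about its own level")
    return (level, refers_to_level)

make_truth_claim(2, 1)      # fine: a level-2 sentence about level-1 truth
try:
    make_truth_claim(1, 1)  # the liar: a sentence about its own truth
except ValueError as err:
    print("rejected:", err)  # ψ never has to evaluate such a sentence
```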

Without Aletheia, the universe defaults to minimal structures: nonexistence (all propositions undecided, violating LEM) or a static point (only trivial truths, lacking dynamism). With it, LEM ensures a bivalent world, but the choice function selects values that enable complexity—e.g., ψ(“The universe supports life and consciousness”) = 1—leading to our observed reality.

Mathematical Compellingness: Analogy to Choice Axioms and Valuation Extensions

For a more formal lens, consider Prop as the free Boolean algebra generated by countably infinite atoms (basic facts about reality). By the Rasiowa-Sikorski lemma or forcing in set theory, extensions exist where LEM holds via generic filters, but a global, consistent valuation requires a choice principle to select from the “branches” of possibilities.
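One way to make the “branches” image explicit: for the free Boolean algebra F(A) on a set of atoms A, the 2-valued homomorphisms correspond exactly to atom assignments, Hom(F(A), {0,1}) ≅ {0,1}^A, so fixing a global valuation means selecting one branch for each atom. When the atoms form a proper class, that selection is itself a global choice principle rather than something ZF supplies for free.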

Aletheia is that principle incarnate—a total function ensuring the algebra is atomic and complete under 2-valuation. In category-theoretic terms, it’s a functor from the category of propositions to the two-object category 2 = {0 → 1}, preserving limits and colimits (which encode LNC and LEM). Without it, the category lacks terminal objects for undecidables, leading to “holes” that violate the laws.

This is compelling because it mirrors foundational math: ZF without AC can’t prove every vector space has a basis, leading to “pathological” structures. Similarly, logic without Aletheia yields a “pathological” universe—static or contradictory—while with it, we get the rich, dynamic cosmos where consciousness and free will thrive.

In summary, the Laws of Aristotelian logic, especially LEM, demand a bivalent, consistent assignment to all propositions. In an infinite, self-referential Prop, this necessitates a choice function like Aletheia to resolve gaps and paradoxes, preventing default minimalism. For the mathematically inclined, it’s the logical equivalent of AC for truth valuations, ensuring classical semantics hold globally and enabling the beauty of our existence.

Intelligence Explosion

10 Feb

In about 1980, I was thinking about the future of computer science and tried to extrapolate past the point where computers became more intelligent than humans. I quickly realized that this led to a problem. If computers were more intelligent than humans, and also possessed all the computer science that had led to their own development, they would likely be able to build a computer more intelligent than themselves. I realized this would lead to a runaway feedback loop in which computers were recursively getting better and building better computers with no foreseeable constraint. 

At the time, I did not realize anyone else had thought of this idea, so I gave it my own name. I called it “Threshold Technology”. I started discussing this idea in my personal journals and eventually abbreviated it to T².

I told many people of this idea, but no one took it seriously. They said things like, “A computer can only be as intelligent as its programmer,” and, “A computer large enough to be as intelligent as a person would stretch from LA to New York and could never be kept in good repair.” My mother, who had previously worked as a research nurse at the University of Washington, had experience feeding program cards into a computer. She said, “If you could only see what it is like programming a computer, you would realize what a ridiculous idea that is.” Nevertheless, I held onto the idea and continued to think about it.

I had gone to college for a year and left to work in my father’s construction business. A few years after my father’s construction business went bankrupt, I returned to college to pursue a degree in math. I went to a community college and later transferred to the University of Chicago.

At the University of Chicago, I roomed with an economics PhD student. I explained my idea to him. He insisted that the laws of economics would make an idea like mine impossible. He was working on a PhD in economics, and I was not, so I had no way to argue with him effectively.

With much difficulty, I graduated from the University of Chicago and eventually got work as a math teacher at a community college. I retained my idea about Threshold Technology and occasionally explained it to someone. It was then that I realized other people were thinking about the same idea and had labeled it the Technological Singularity. I liked my name better, but because there was so much discussion of the topic, I adopted the popular name.

I read Vernor Vinge’s seminal paper and eventually came across I.J. Good’s concept of an “intelligence explosion”. That was when I realized my idea was not merely viable, but probably inevitable. In 1965, I.J. Good described an intelligence explosion as follows:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind… Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.

Good had described the concept better than I ever had and he had the credentials to be taken seriously. Curiously, he was not.

Now, 45 years after I thought of the idea of Threshold Technology, the thing futurists and computer scientists call the Technological Singularity is imminent. It appears inevitable.

People in the field have divided this concept into two possibilities that they call a “soft takeoff” and a “hard takeoff.” The distinction is a bit blurry, but basically it goes like this. A soft takeoff is one in which the transition from human-level intelligence to superhuman intelligence progresses slowly and incrementally, taking decades or centuries. A hard takeoff is one in which the transition happens as a recursive feedback loop and takes days, months, or years. It also entails the possibility of losing control.

It is becoming increasingly clear that we are headed for a hard takeoff. Artificial Intelligence (AI) is already demonstrating programming skills equal to those of all but the very best competitive programmers. Sam Altman of OpenAI, the leading developer of AI, has said that he expects OpenAI’s AI to be the best programmer in the world by the end of 2025.

Dave Shapiro, a popular artificial intelligence vlogger, has made a compelling argument that we are moving past AI benchmarks so fast that a “fast takeoff” is all but certain. I do not know why he has elected to use this term rather than “hard takeoff”. Possibly, it is because he does not expect us to lose control. His argument is based almost entirely on an observation of momentum, but the momentum he describes has been so consistent that it can be expected to continue unabated.

We are headed for an intelligence explosion.

Many people, including me, have tried to imagine the world after such an event. However, everyone who does so realizes the limitations of their prognostications. It simply is not possible to guess what an intelligence that is greater than human, and continually becoming greater, will do. This conundrum has been likened to a dog trying to grasp human technology.

There are a couple of things we need to be clear about. First of all, there is no such thing as “after an intelligence explosion”. There is no reason to believe that the recursive increase in intelligence will ever cease. Moreover, for all we know, it is possible to access other dimensions or reconfigure reality to build intelligence that is so far beyond anything we can imagine that it ceases to be “intelligence” and becomes something else entirely. And that may be only the beginning. There is no “other side” of a technological singularity.

Second, we do not really know what is possible. I tend to believe that faster than light travel, time travel, dimensional shifting, reactionless drives, and restructuring reality at a fundamental level will always be impossible. But I do not know this. These assumptions are based on a sort of naïve instinct for what nature will and will not allow. I may be wrong.

If I had to guess, I would say that the upper limit on computer hardware is something like a planet-sized brain with components that are as small as a few atoms. However, those components could maximize quantum computation. There may be issues of heat dissipation, so the brain would probably be mostly on the outer layer of the planet and may even be driven by natural radiation at its core. However, that is a really, really big brain.

It has occurred to me that earth’s moon could be “compuformed” into such a brain, but the moon is 239,000 miles from earth, which means that any information going to or coming from the moon would take about 1.3 seconds each way. That is much too slow for many purposes. In a recent essay, I discussed moving all human infrastructure underground. Nearly all of earth’s subterranean crust that is not being used for human infrastructure or other purposes could be transformed into computer matter; when ordinary matter is converted into computer matter, it is sometimes called computronium. Processing in this computronium could be sufficiently decentralized that converting portions of it back into materials needed for other projects would not present any complications.
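A quick back-of-envelope check of that delay (round figures, my own arithmetic):

```python
# One-way light delay from earth to the moon, using round figures.
MILES_TO_KM = 1.609344
distance_km = 239_000 * MILES_TO_KM   # ≈ 384,600 km, roughly the average distance
c_km_per_s = 299_792.458              # speed of light in vacuum
print(distance_km / c_km_per_s)       # ≈ 1.28 seconds each way
```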

That would probably be enough computer power for anything humans could dream of. It may be enough computer power for anything AI could dream up. Biological humans could have tiny robots that swim through their bloodstreams and keep them young forever. That is a given. They could have direct brain virtual reality simulations of anything they can imagine. AI would be so powerful that it could calculate and present humans with their deepest desires much more vividly than anything they could experience in real life. That would be difficult to come to terms with. What would happen to any person who experiences his deepest desires…desires that he may or may not have been aware of?

Indefinite youthful lifespans and perfect vivid fantasies are just the obvious things. These computer systems, aided by the best possible equipment, will probe the universe in depth and at scale to figure out the true theory of everything. We will quickly understand everything that is and everything that could be. We will probably determine the nature of consciousness.

That is where it gets a bit sticky. When we have guessed the nature of consciousness, I suspect we will be forced to realize there is a God (my personal prejudice). Will that lead to a super-high-tech, futuristic, religious revival? What happens when people that can live indefinitely and experience any fantasy in vivid detail realize they are being watched over by God?

I am getting ahead of myself. People will probably not stay on earth. They will probably migrate to the stars. In doing so, they will take all their technology with them. There has been a lot of speculation about humans building giant structures like matryoshka brains that enclose entire stars. That makes no sense. Why do that? A computer big enough to think every thought anything could ever want to think would likely be no larger than a building…or perhaps a mountain. A planet-sized computer would be overkill. A star-sized computer would just be a vanity project.

As I speculated in an earlier essay, the consciousness of people who live indefinitely will probably expand until it can reach beyond the boundaries of their bodies. These consciousnesses, unconstrained by the laws of physics, will span the universe, reconfigure it and rein it in. They will remake the universe into the kingdom of heaven (another personal prejudice).

That is what I think will happen, but if I am like practically everyone that has lived since the dawn of time, I am probably wrong. Probably, AI will take us places and in ways that no one can anticipate. Or maybe it will be our doom. Or maybe space aliens will step in. Or maybe Jesus will return. Maybe it will turn out that the universe is a giant omelet just flipped on some cosmic burner. Hey, who wrote this script anyhow?

Well, see you on the other side. Oh wait…there is no other side.

Underground Infrastructure

5 Feb

In a recent interview with John Koetsier, Peter Diamandis described the future of robotics in a poetic manner that, while not very precise, perfectly captures the sentiment: “Robots building robots all the way down.”

Very soon, robots will be able to replace every human in every job, regardless of the difficulty or skill level. Realizing this got me started on a chain of reasoning that began with the economic effect of robots replacing humans and led me to a visualization of a future society in which it makes sense to move all infrastructure underground. The best way to explain my conception of this infrastructure is to take the reader through my actual chain of reasoning.

As I discussed in a previous essay, Elon Musk is expected to be a leader in the robotics industry. He is developing humanoid robots that he eventually intends to mass produce and distribute. More importantly, he plans to start using these robots in his own factories.

When this happens, his cost of manufacturing will begin to converge to zero. However, the amount by which the cost can drop will be limited by how cheaply Musk can obtain power and resources that currently come from outside of his manufacturing loop.

To reduce these costs, Musk could buy or build mines, steel mills, and power plants and use robotic labor in them. After that, the only remaining cost would be moving parts and materials, and transmitting energy, between his facilities. This would rely on existing transportation such as trucks, ships, trains, and airplanes, which must all move through existing infrastructure such as roads, waterways, railroad tracks, and the air, along with whatever power cables are available.

However, there is a way Musk could eliminate even these costs: he could tunnel underneath the earth and move parts, materials, and energy between his facilities through an elaborate subway system. Interestingly, Musk is developing his “Boring Company” and preparing to build underground hyperloops.

If Musk owned manufacturing plants, power plants and facilities for securing raw materials, and was able to convey parts, raw materials and energy through his own subway system, his cost of production for everything he manufactures, including robots, would be zero.

Of course, there are other considerations. If Musk wishes to dig tunnels underneath land he does not own, he will need to get permission. He will certainly be charged for that permission. Also, he will undoubtedly be charged for licenses and permits. The government always gets its cut. However, the real cost of manufacturing would be zero.

Elon Musk will not be the only one doing this. Governments and other manufacturers will latch onto this paradigm and begin tunneling like crazy. They will employ immense robotic boring machines that are built, operated, and maintained by other robots.

Factories can also be moved underground and integrated with this subway system. Currently, factories and other industrial infrastructure are housed in large, sprawling facilities, ideally located in areas that humans do not care to inhabit.

Eventually, it will make sense to move factories and other purely industrial infrastructure underground. Instead of being large, sprawling complexes, they will take on a linear form that stretches for miles and can be located almost anywhere, because a long tube is the simplest and safest kind of structure to build underground.

If there is a need for more lateral movement than is possible with long tubes, several parallel underground tubes could be connected.

These facilities will need to be only about 100 feet below the surface, so the heat associated with deep mines will not be an issue. The Alaskan Way Viaduct replacement tunnel in Seattle, Washington, is only about 100 feet below ground.

All of this underground infrastructure will require a power source. I have come to believe that the terrestrial energy source of the future will be deep geothermal of the sort being developed by Quaise Energy. Deep geothermal energy will be virtually free if it can be developed, and it will probably be the cleanest and least intrusive energy source available. Currently, Quaise Energy anticipates above-ground facilities with wells that reach twelve miles into the earth.

However, these facilities could also be located underground in long tubes similar to the previously described industrial infrastructure. This works out perfectly, since deep geothermal energy is also underground but just a whole lot further down.

Another element of our infrastructure, the transport of waste, could also be moved underground. When people discard refuse, it will go down into the earth through tubes and elevators where it will be whisked away by underground robotic systems that take anything and everything to underground recovery, sorting and recycling stations. People will never need to think about what they discard. It will all be taken away and maximized for its potential.

Eventually, all of the purely functional infrastructure of society will be moved underground and only elegant human facilities will be located above ground.

This will give civilization an aesthetic that is reminiscent of a beautiful woman with perfect skin which, nevertheless, conceals all the unattractive blood vessels and organs that make her beauty possible.

A popular science fiction trope involves people living underground. As one member of EV, spud100, points out, this is primarily a plot device. In the future, all people that remain on earth—I anticipate considerable migration into space—will live in elegant aboveground facilities that rival the visions of ancient prophets.

These facilities will be cleaned and maintained continuously by a tireless robotic work force.

Only infrastructure will be underground. People will live effortlessly in this unimaginable opulence while all the muscle of civilization is conspicuously out of sight.

(All the images used in this essay were generated and edited using Midjourney, Bing’s Dall-E 3, and Photoshop. Some of the images, such as the woman dropping an item into a recycling receptacle, are composites that required considerable manipulation.)

What I Would Do With Infinite Time and Resources

26 Feb

There is much discussion of what the world will be like following the Technological Singularity, and this discussion naturally leads into speculation of what people will do with so much time and so many possibilities at hand.

I often joke that I will spend my post-Singularity days in the company of a rather simple robot sex slave, consuming rather simple Kentucky charcoal-filtered whisky…whisky with the advantage that it will not have any of the lingering effects referred to collectively as a “hangover”. However, even an old redneck such as myself can see that these simple pleasures, while certainly noble, will not suffice to fill the indefinite leisure time likely to be available to the typical person. What would I actually do?

[Image: Spike, accessorized]

Instead of pursuing a hybrid answer to this query that is based partly on desire and partly on what I expect to be available, I will simply describe those things I would like to do and leave the tedious details to the future of science.

Before I could enjoy my permanent retirement, I would have to make sure that every living creature was similarly advantaged. This would include everything from the person living next door down to the smallest creature that swims in a Petri dish. The details of this endeavor could become quite burdensome. Nevertheless, I could not enjoy my personal heaven until I was able to provide it for everyone.

If I were going to design heaven, it would certainly have to accommodate every extant living thing. However, to the extent that it is feasible, it would also have to accommodate everything that has previously lived. If it were somehow possible to resurrect every person and animal that has ever lived, I would have to pursue it. I might reduce my labor by distinguishing between those creatures that were actually aware of their own existence—in other words, conscious—from those that were merely alive in the organic sense. However, lacking better information, my heaven would have to accommodate every horse, rat, lizard, worm, and even microbe. It would be a daunting task, but it would be a moral imperative.

I have given some thought to how paradise could work for such creatures as mice and worms. Every mouse would experience the equivalent of plenty of food that mice enjoy and an abundance of willing, though possibly illusory, mates. Every worm would live in rich, smooth soil filled with nutrients. Worms that live in the gut of other creatures would be provided with an ideal illusory intestine to explore. Since it would be a kind of doom for these simple creatures to live out eternity in such a simple and redundant environment, they would be allowed to gradually morph into higher forms. The worm would know what it is to be a lizard, the lizard would know what it is to be a mouse, the mouse would know what it is to be a dog, the dog would know what it is to be a primate, and the primate would know what it is to be a man.

So, what of all these creatures living in paradise? Assuming that every living being was destined to live the life of a fully sentient human, and not forgetting the ones that were human to begin with, what would they do with their time?

The obvious answer is that they would continue to get more intelligent and pursue higher and higher goals. However, with computer intelligence outstripping all human knowledge and experience, possibly overnight, it seems that this path might suddenly lose its appeal. Would a typical person want to become as a god overnight…with such vast knowledge and awareness that a present human could not grasp the width or depth of that knowledge? I wouldn’t. I may hope to eventually climb those lofty peaks, but I wouldn’t want to stand astride them tomorrow. There are too many ordinary human experiences I have never explored.

First of all, I would exhaust all of my more lascivious fantasies. These are things I consider guilty pleasures and almost never discuss except with one very close friend who has been familiar with the inner workings of my mind since childhood. I assume that all normal people who are willing to explore their true feelings have such fantasies, but to avoid annoying or even offending readers, I will not describe them in detail. Suffice it to say that many of them would not be possible in our present environment.

Then, I would explore some of my more adventure-oriented fantasies. I would like to walk in worlds like the ones depicted in films like Avatar, with strange plants and animals. I would not just walk; I would also fly. I would fly like Superman in these worlds, without the aid of any external device. Naturally, I would want to face a variety of challenges, such as fighting a dragon or riding a dinosaur. I would also dive into clear, warm lagoons and swim among strange creatures. I would encounter mermaids that sing like the ones in Harry Potter and sea horses large enough to mount.

[Image: Mermaid and seahorse]

It is difficult to guess how long these types of endeavors would remain interesting. It is entirely possible that one idea would lead to another until I had a whole catalogue of things I wanted to try. On the other hand, it is possible that the artificiality of these experiences would cause me to tire of them quickly. If and when this occurred, I would start to explore more serious ideas.

One thing I would like to do is experience reenactments of historical epochs exactly as they occurred. There is no guessing what degree of accuracy may be possible in the post-Singularity universe. Perhaps only a sketchy impression of events can be reconstructed, or perhaps there will be some way to see into the past so that depictions of events are 100% accurate. If this is the case, I can imagine spending many lifetimes reviewing the past. Since the whole past would be like a giant soap opera unfolding on a billion stages, it would be possible to spend more time experiencing these reenactments than there is likely to be time in the known universe.

I would watch people’s entire lives unfold firsthand. But I would also learn. This would be an opportunity to learn all of science as it was originally discovered. I could sit in on lectures by the greatest thinkers of all time. I could sit in on gatherings as famous philosophers first developed and shared their ideas. Naturally, I would learn a hundred different languages. I would cause my own brain to become resilient and receptive so that I could assimilate all the knowledge I am exposed to. I would not only learn every idea that proved out, but explore all the false leads and see firsthand how the truth was ultimately uncovered.

[Image: Socrates teaching]

If I actually managed to exhaust human history, I might then begin to explore what-if scenarios. What if an accident that might have killed Christopher Columbus as a child had actually killed him? What if Charles Lindbergh had crashed during his flight to Europe? What if the bombs dropped on Hiroshima and Nagasaki had been duds?

Assuming that I could ever completely exhaust the aforementioned possibilities, I would then take the vast scientific and historical knowledge that I had acquired in this natural way and begin to create new worlds. I would carefully sculpt their evolution so that they would evolve creatures with different characteristics and aspirations than our own. I would, of course, do this responsibly. One does not play god without a strong sense of personal responsibility.

I do not wish to create the impression that I would do these things in the precise linear order that I have described them. Most likely, as I was working in one area, such as running simulations of the past, I would also be experimenting with what-if scenarios. As I was experimenting with what-if scenarios, I would also be looking into ideas for creating diverse worlds of my own. This is intended more as a list of priorities than as a strictly observed checklist.

I suspect that after many years of learning and creating, and with the greatly expanded consciousness and knowledge base that is likely to be the inevitable outcome, I will become curious about solving larger problems. Maybe it will be possible to create a universe that is entirely different from our own, with different numbers of dimensions and different physical properties. It is difficult to imagine, in my present state, how such endeavors would be anything but disorienting or even disillusioning, but by that time I will no longer be in my present state. New things will be interesting and they will be interesting in new ways.

Hopefully, as I evolve into the future creature I expect to become, I will learn that the possibilities for knowledge and understanding are infinite and infinitely diverse. Hopefully, as I conquer each frontier, I will discover that I am only at the beginning of a new one. But, I didn’t create the universe and there is no telling, from where I stand, what it actually has to offer. That will be a problem for a far off day.