More on Race and Genetics

In the March 25, 2018 Sunday Review section of the New York Times, Dr. David Reich, professor of genetics at Harvard University, published an article entitled “‘Race’ in the Age of Modern Genetics” (also published on March 23 under the title “How Genetics Is Changing Our Understanding of ‘Race’”). Since it overlaps somewhat with my blog entry of Feb. 10, 2018, I thought it would be interesting to refer you both to the article and to my response to it below.

David Reich attempts to convince readers that there is validity in discussing genetic differences among “races” while at the same time cautioning against “misusing” such data in “pseudoscientific racist” terms. Unfortunately, Dr. Reich himself has muddied the water in this article.

We are left confused as to what he means by “race,” as he alternately puts that term in quotes and uses the phrase “racial constructs” while never defining either. His examples include phrases such as “self-identified African-Americans” and “self-identified European-Americans.” Are we to consider these as two examples of races? Later he describes a study that compares “Europeans with more years of education” with “Europeans with fewer years of education.” Are these two different races?

He concludes by using the invalid analogy of comparing sexes. We can definitively define males as having an XY chromosomal configuration and females as having an XX chromosomal configuration and therefore use sex scientifically as an independent variable. There is no such way to define races genetically or in any other definitive manner. We are, therefore, left with only the vague notion of “self-identification” as the highly variable and inaccurate independent variable. If we were to use “self-identification” in determining the variable “sex” and did a study of a transgender population, our “science” for the study of the genetic relationship to sex would be incorrect. The fact of the matter is that we are all African, and there are certainly genetic clusters of variables among different populations living in or recently emigrated from different geographic locations. Adding “race” to this discussion is misleading and not helpful.

Race and Genetics

            Dare I venture into this politically and emotionally charged issue? When researching the questions regarding our evolutionary biology as I do, ignoring the question of race as it relates to species would be negligent. On the other hand, trying to discuss it in a short blog such as this could be considered foolhardy. This sounds like a lose-lose situation. I’ll let you, the reader, be the judge.

            Humans have existed for about two million years. In that time, there have been many human species. Homo sapiens emerged about 300,000 years ago and we have been the only human species still living for about 37,000 years. There is little debate today among taxonomists, evolutionary biologists and all the other “ists” that have an opinion on this subject: today all seven billion plus of us belong to one species, regardless of racial, geographic, ethnic or any other classification.

            Why do we say that? There are many conflicting definitions of species—referred to in the literature as the “species problem.”  There might be some room to argue that the different groups of today’s Homo sapiens that we call “races” could fit one or more of the definitions of species. Even more problematic is the definition of “subspecies”.  If the different races don’t qualify as separate species, could they at least qualify as subspecies?

            The answer is NO and NO.

            A species consists of a group of organisms with a definable set of genetic characteristics, or common gene pool, that evolves independently of all other groups of organisms. A common gene pool is not a precise nucleotide-by-nucleotide definition of a set of genes. Rather, it is a set of genes that perform all the same functions. There will be great variation within these genes among the members of the same species. The “evolving separately” component of the definition implies that there is some barrier to interbreeding of a species with other species, such that when new genetic variants enter the gene pool, they are not intermingled with those of other species to any large extent. This does not mean that the barrier to interbreeding is absolute. Many species today interbreed with other species to some extent, but by and large, over time, they continue to evolve independently. For example, we now know that Homo sapiens interbred in the past with at least two other human species. With today’s human mobility and facile intermixing of genes among all ethnicities and localities, clearly there is not a separately evolving subgroup among us. That is particularly true of the large groupings that we call races.

            The notion of subspecies is even more vague and difficult to define. The subspecies level is sometimes equated with “races.” In taxonomy, subspecies are designated with three Latin terms rather than the two that designate a species. There is only one subspecies of Homo sapiens alive today, called Homo sapiens sapiens, and it includes all present-day humans. The only other subspecies of Homo sapiens, called Homo sapiens idaltu, is assigned to an extinct group of fossils thought possibly to represent the immediate precursor of today’s modern humans.

            With that admittedly superficial background, let’s consider human races. If not separate species or subspecies, is there any genetic basis for categorizing people as African, Caucasian, or any other racial designation? That is, is there any genetic basis for race? One can find virtually any opinion on this subject in the legitimate scientific literature. In a publication in the New England Journal of Medicine, Robert Schwartz states that “race is a social construct, not a scientific classification” and that race is a “pseudoscience” that is “biologically meaningless.” On the other hand, in the same journal, Neil Risch states that today’s humans cluster genetically into five continent-based groupings that are biologically and medically meaningful.

            Are these two points of view really different answers to the same question about genetics and race, or are they answers to different questions? Specifically, can one state that there is no genetic basis for race and, at the same time, state that there are some genetically measurable differences between self-identified racial categories? I think the answer is yes.

            Let’s take, for example, sickle cell trait, which is much more prevalent in people who consider themselves African compared to those who consider themselves Caucasian. Yet sickle cell trait exists in all races, and one could not use it to define African vs. non-African people. In fact, when one looks at the genetic variation within any racial category, it exceeds the variation between racial categories. There is no genetic profile that can define any race. Are there clusters of genetic traits that have higher probabilities in one race or another? Certainly. That would be true of other classifications of humans as well, such as classification by size or athleticism or musical ability. Yes, certainly those who consider themselves African have, on average, darker skin than those who consider themselves Caucasian, but the variation in skin color is great in both groups. For example, the paleogenomic profile of the earliest human fossil found in Great Britain shows that it had dark skin in a geographic area that today consists primarily of Caucasians.
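The within-versus-between claim can be made concrete with a little arithmetic. The sketch below simulates a single genetic marker whose allele frequency differs modestly between two hypothetical populations (the frequencies and sample sizes are invented for illustration, not real genetic data) and partitions the total variance, ANOVA-style, into within-group and between-group components:

```python
import random

random.seed(0)  # reproducible illustration

def simulate(freq, n):
    """Genotype counts (0, 1, or 2 variant alleles) for n individuals."""
    return [sum(random.random() < freq for _ in range(2)) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical allele frequencies and sample sizes, invented for illustration.
group_a = simulate(0.30, 5000)
group_b = simulate(0.40, 5000)

mean_a, mean_b = mean(group_a), mean(group_b)
grand = mean(group_a + group_b)

# Between-group component: spread of the two group means around the grand mean.
between = ((mean_a - grand) ** 2 + (mean_b - grand) ** 2) / 2

# Within-group component: average squared deviation from each group's own mean.
within = (sum((x - mean_a) ** 2 for x in group_a) +
          sum((x - mean_b) ** 2 for x in group_b)) / (len(group_a) + len(group_b))

print(f"within-group variance:  {within:.3f}")
print(f"between-group variance: {between:.3f}")
```

Even with a sizable frequency difference (30% vs. 40%), the within-group component comes out dozens of times larger than the between-group component, which is why no single marker of this kind can sort individuals into groups.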

            This comes back to the question of species. Aren’t there great variations within species as well? Yes, but they are far less than the variations between species. That is, today’s genomic variation between the various racial groups is less than the variation between Homo sapiens and Homo neanderthalensis. All of today’s human races, no matter how you define them, are clearly Homo sapiens and not Homo neanderthalensis.

            This brings me to one final point that can either further clarify or further muddy this entire discussion of race and genetics. Generally, when we talk about genetic comparisons, we have been talking about comparing classical “genes,” which are the DNA sequences that code for proteins (e.g., the variant hemoglobin protein produced by the sickle cell gene). It is only in the past decade or so that we have learned that much of the 98% of the human genome that does not code for proteins has a profound effect on our phenotype. That is the epigenome, which regulates the expression of the classical genes.

            One of the things we have learned about the epigenome is that it can change during the lifetime of an individual based on environmental factors such as diet, stress, toxins and other factors. These changes do not affect the DNA sequence of genes, but they do affect the expression of those genes. More significantly, some of these epigenomic changes are passed on to offspring and can affect generations into the future.

            This raises the question of environmental factors related to racial groupings and their impact on genetics. There is evidence, for example, that African-American descendants of slaves have lower-birth-weight children than African-American descendants of non-slaves, perhaps related to epigenetic effects of stress and diet during slavery. One can imagine many sociocultural factors that vary by race and could impact the epigenome. Perhaps, when we have the ability to look at the full genome variation among racial groups, our knowledge of genetics and race will change.



1.     R. Schwartz, “Racial Profiling in Medical Research,” New England Journal of Medicine 344 (2001): 1392.

2.     E. Burchard, E. Ziv, N. Coyle, et al., “The Importance of Race and Ethnic Background in Biomedical Research and Clinical Practice,” New England Journal of Medicine 348 (2003): 1170.

3.     K. Lotzof, “Cheddar Man: Mesolithic Britain’s Blue-eyed Boy,” Natural History Museum website, Feb. 7, 2018.

4.     M. Meloni, “Race in an Epigenetic Time: Thinking Biology in the Plural,” The British Journal of Sociology 68 (2017): 389.

Artificial General Intelligence

            Artificial intelligence (AI) is all the rage today. It permeates our lives in ways obvious to us and in ways not so obvious. Some obvious ways are in our search engines, game playing, Siri, Alexa, driving cars, ad selection, and speech recognition. Some not-so-obvious ways are finding new patterns in big data research, solving complex mathematical equations, creating and defeating encryption methodologies, and designing next-generation weapons.

            Yet AI remains artificial, not human. No AI computer has yet passed the Turing Test or the Blois Test. (See discussion blog of November 2, 2017.) AI far exceeds human intelligence in some cognitive tasks like calculating and game playing; it even exceeds humans in cognitive tasks requiring extensive human training, like interpreting certain x-rays and pathology slides. Generally, its achievements, while amazing, are still somewhat narrow. They are getting broader, particularly in hitherto exclusively human capabilities like face recognition. But we have not yet achieved what is called artificial general intelligence, or AGI.

            AGI is defined as the point where a computer’s intelligence is equal to and indistinguishable from human intelligence. It defines a point toward which AI is supposedly heading. There is considerable debate as to how long it will take to reach AGI, and even more debate whether that will be a good thing or an existential threat to humans. (See discussion blog of October 23, 2017).

            Here are my conclusions:

1.     AGI will never be achieved.

2.     The existential threat still exists.

            AGI will never be achieved for two reasons. First, we will never agree on a working definition of AGI that could be measured unambiguously. Second, we don’t really want to achieve it and therefore won’t really try.

            We cannot define AGI because we cannot define human intelligence—or, more precisely, our definitions will leave too much room for ambiguity in measurement. Intelligence is generally defined as the ability to reason, understand and learn. AI computers already do this, depending on how one defines these terms. As discussed in the book, more precise definitions attempt to identify the unique characteristics of human intelligence, including the ability to create and communicate memes, reflective consciousness, fictive thinking and communicating, common sense, and shared intentionality. Even if we could define all of these characteristics, it seems inconceivable that we will agree on a method of measuring their combined capabilities in any unambiguous manner. It is even more inconceivable that we will ever achieve all of those characteristics in a computer.

            More importantly, we won’t try. Human intelligence includes many functions that don’t seem necessary to achieve the future goals of AI. The human brain has evolved over millions of years and includes functions, tightly integrated into our cognitive behaviors, that seem unnecessary and even unwanted in future AI systems. Emotions, dreams, sleep, control of breathing and heart rate, monitoring and control of hormone levels, and many other physiological functions are inextricably built into all brain activities. Do we need an angry computer? Why would we waste time trying to include those functions in future AIs? Emulating human intelligence is not the correct goal. Human intelligence makes a lot of mistakes because of human biases. Our goal is to improve on human intelligence—not emulate it.

            The more likely path to future AI is NOT to fully emulate the human brain, but rather to model the brain where that is helpful—like the parallel processing of deep neural networks and self-learning—and to create non-human, computer-based approaches to problem solving, learning, pattern recognition and other useful functions that will assist humans. The end result will not be an AI that is indistinguishable from human intelligence by any test. Yet it will still be “smarter” in many obvious and measurable ways. The Turing Test and Blois Test are irrelevant.

            If that is true, why would AI still be an existential threat? The concern of people like Elon Musk, Stephen Hawking, Nick Bostrom and many other eminent scientists is that there will come a time when self-learning and self-programming AI systems reach a “cross-over” point where they rapidly exceed human intelligence and become what is called artificial superintelligence, or ASI. The fear is that we will then lose control of an ASI in unpredictable ways. One possibility is that an ASI will treat humans similarly to the way we treat other species and eliminate us, either intentionally or unintentionally, as we eliminate thousands and even millions of other species today.

            There is no reason that a future ASI must go through an AGI stage to achieve this potential threat. It could still be uncontrollable by us, unfriendly to us, and never have passed the Turing Test or any other measure of human intelligence.

Lamarckian Evolution is Making a Comeback

In the introduction I make the statement, “I also learned how correct Jean-Baptiste Lamarck, the famous French biologist, was for all the wrong reasons.”

Lamarck was an accomplished biologist living in the late 18th and early 19th centuries. He was an expert in the taxonomy of invertebrates and was also a well-regarded botanist. He wrote about physics, chemistry and meteorology as well.

He is best remembered, however, for his publication of Philosophie Zoologique in 1809, in which he lays out his theory of evolution. In the book, he outlines two laws of nature. The first is that animals develop or lose physical traits depending on usage of those traits. For example, if an animal like a mole always lives in darkness, then it would, over generations, become blind. Through usage, characteristics of an animal either become enhanced or decay during a lifetime. The second law states that these changes acquired during a lifetime are passed on to offspring, i.e. inherited. These two laws explain how species evolve by continual adaptation to their environment and eventually branch off into new species once the changes become large enough. This is often referred to as Lamarckian evolution.

There were other interesting aspects of his theories. He believed that there was some natural force that drove organisms toward increased complexity quite apart from the usage law. He attributed the wide variety of organisms found in nature to different life forms appearing spontaneously at different times. Thus they did not all evolve from a common ancestor. When gaps seemed to appear in the fossil record in certain lineages, he attributed that to a failure to find all the relevant fossils. His theory clearly assumed gradual and continual evolution, but that evolution was always driven toward greater complexity.

Lamarckian evolution was largely debunked when the works of Gregor Mendel and others later demonstrated that inheritance occurred according to discrete rules of dominant and recessive inheritance rather than through acquired characteristics. Further discoveries in genetics during the 20th century put the notion of inheritance through acquired characteristics to rest.

BUT, Lamarck has gotten a bit of a reprieve in the 21st century. By 2003, we had completed the Human Genome Project, which told us a lot about our genome and genes, but little about the epigenome. Since then, we’ve learned a lot. The epigenome refers to the 98% of our genome that does not code for proteins (protein-coding sequences being what we traditionally call genes). Instead, much of that huge portion of our genome has to do with the regulation of genes, largely through the coding of various types of RNA and the subsequent methylation and acetylation of DNA and histones. We have between 20,000 and 25,000 protein-coding genes. That’s about the same number as a mouse and even a worm. And many if not most of these genes do about the same thing across a wide spectrum of animals. What makes us different from a mouse or a worm is largely controlled by the epigenome.

It turns out that the epigenome is responsive to various factors in our environment like diet and chemicals. These factors do cause changes in the epigenome, which, in turn, cause changes in the expression of various genes during a lifetime. The epigenome does not ever change the DNA sequence of a gene. The remarkable fact is that some of the epigenomic changes acquired during a lifetime are passed on to progeny through the sperm and egg! Although it is not through usage of parts of the body as Lamarck proposed, there is evidence of inheritance of traits acquired during a lifetime. One could call that Lamarckian.

Another way that acquired traits will increasingly be passed on to progeny will occur once germline genetic engineering becomes more prevalent. So perhaps Lamarck was more prescient than we give him credit for.

Lamarck was extremely accomplished and well ahead of his time. He lived long before we understood genetics, and his evolutionary theories preceded those of Darwin. To some extent, he has been given a bit of a bum rap. He got some things right and some things wrong. You can say that about a lot of our great scientists. He did recognize that something in individuals changes through generations and that those changes result from interaction with the environment. Darwin also theorized that individuals change from generation to generation. Neither understood that these changes first require random genetic changes. Both knew that the environment played a large role in evolution, although Darwin’s natural selection, rather than usage of body components, is what is generally accepted today as the driving environmental force. Lamarck was wrong about the multiple spontaneous emergences of different life forms at different times, but he was correct that apparent gaps in evolutionary lines reflect incompleteness in the fossil record.

Let’s give Jean-Baptiste Lamarck his due.

What will Homo Nouveau look like?

When I am interviewed about my book, often the first question asked is “So what does come after Homo sapiens?” I never answer “Homo nouveau,” the name I have given to the next human species, since it really doesn’t tell anyone anything. What they really want to know is: what does this creature look like?

The simple answer is that Homo nouveau will look just like us. Well, what does that mean? Homo sapiens have a lot of different looks. Yes, and so will Homo nouveau. My hypothesis is that the next human species will be the result of an off-target epigenetic mutation secondary to a popular genetic engineering procedure leading to a post-zygotic reproductive barrier. If you haven’t read the book, this sentence may not have much meaning for you. The basic premise is that many Homo nouveau will come into being over a period of decades or longer because they or their parents all underwent some popular germline genetic engineering procedure. That procedure could be anything and I speculated it would be related to attempts to alter a complex genetic characteristic such as aging.

Assuming that procedure is available to anyone in the world, then the collective appearance of all Homo nouveau will have the same variation as Homo sapiens. One will not be able to tell one from the other by appearance, language, culture or any other characteristic…except one. They will only be able to have viable offspring by breeding with another Homo nouveau. Interbreeding with Homo sapiens will not result in viable pregnancies.

Over time, since Homo nouveau will be a metapopulation with a separately evolving gene pool, they will evolve physical and other characteristics that will diverge from Homo sapiens. That will happen both by classical Darwinian natural selection as well as likely further genetic engineering.  There is no way to anticipate exactly what those differences will be.  Much of that will be determined randomly. At some unpredictable time in the future, they will look and act differently.

How does this scenario regarding the speciation of Homo nouveau from Homo sapiens compare to the speciation of Homo sapiens from our predecessor? There are similarities and differences.

The similarities reflect the fact that in most speciation events the new species looks the same as or similar to the previous species, at least in outward appearance. With regard to Homo sapiens, we don’t know for sure exactly which human species was our immediate predecessor. It may have been Homo heidelbergensis or some other closely related human species, and it almost surely happened in Africa. It was not some sudden event, however. As Sally McBrearty and Alison Brooks point out in their marvelous article, “The Revolution that Wasn’t: A New Interpretation of the Origin of Modern Human Behavior” (Journal of Human Evolution 39 (2000): 453), the transition to modern humans from our predecessor happened over a period of more than 200,000 years. In fact, it probably happened in multiple regions of Africa simultaneously during that period.

Similarly, as Robert Foley said in his book Humans Before Humanity: “As we have seen here, human evolution is no blinding flash and no special creation. Man did not make himself, nor woman herself. Both are the product of countless events in the daily lives of the hominids. There is no magic ingredient in human evolution, and no substitute for knowing the details of what happened – where and when and why. Small, insignificant earthquakes in Africa, or particular demographic trends in Europe, are responsible for what happened in evolution. We should not let the uniqueness of our species dupe us into believing that we are the product of special forces. Cosmologists studying the origins of the universe need to think in terms of a big bang. Evolutionary biologists are better off with a bout of hiccups. If we had been privileged enough to observe the origins of our species and our lineage, we would have been struck by one thing – nothing very much happened.”

Let me emphasize: Had we been observers during the period when archaic humans transitioned to modern humans, it is unlikely we would have noticed it. In that regard, that transition is similar to my projected speciation of Homo nouveau.

The differences, however, are dramatic. First, we are going to directly cause the speciation through genetic engineering. Second, it is going to happen much more quickly than over a period of 200,000 years. Finally, we, the predecessor species, will be cognizant of what is happening.

Was Lucy better adapted to bipedalism than we are?

In Chapter 7, I review the controversies regarding the reasons we evolved into a species that walks on two legs. The theories include the need to free up hands for carrying infants, to be able to carry food long distances, to appear larger as a defense mechanism, to be more efficient in locomotion, and several others. In my view, none of the explanatory theories were very convincing and the bipedalism mystery persists. But there is a related question that also interests me. Was Lucy better adapted to walking upright than we are?

Lucy is the most famous Australopithecus fossil—maybe the most famous fossil of any kind. She lived about 3.3 million years ago in sub-Saharan Africa at a time when climate change was thinning out the trees. This required her predecessors, the Ardipithecines (whose most famous fossil is nicknamed Ardi), to occasionally leave their natural habitat in the trees and walk to a more distant tree. They were probably the first pre-human bipedal species in our lineage. By the time of Lucy, those distances had become greater, and there were numerous speculated reasons why the Australopithecines continued to advance bipedalism. Lucy’s foot was more “human-like” than Ardi’s in that it did not have an opposable big toe. Nonetheless, Lucy was still an ape, about 4 feet tall with a small brain, and she didn’t use tools. When asked about Lucy, her discoverer, Donald Johanson, said, “Oh yes, she walked erect. She walked as well as you do.”

Although we don’t know for sure the exact line of evolution from these pre-human species to humans and finally to Homo sapiens, we do know that they were all upright bipedal walkers. We also know that upright posture doesn’t work all that well for Homo sapiens, given the widespread back problems we have: degenerative arthritis; narrowed, bulging and ruptured intervertebral discs; sciatica; spondylolisthesis; spinal stenosis; and others. In short: a lot of back pain. Did Lucy and her relatives suffer from the same back afflictions?

Probably not. There is one big difference between Lucy and Homo sapiens: our big brain. Lucy’s pelvis was rotated compared to an ape in just the right way to allow the appropriate muscle attachments needed for upright walking. But it was still a small pelvis that easily accommodated a small newborn head at childbirth. That pelvis had to get a lot bigger to accommodate a human newborn head. That is what caused the problems we have today.

C. Owen Lovejoy is the guru regarding the anatomy of locomotion in both living and extinct species. He states the following: “In one respect Lucy seems to have been even better designed for bipedality than we are.” That respect relates to a pelvic design that became compromised as the human brain enlarged. This required changes in the size and shape of the birth canal, which had a negative impact on the mechanics of upright locomotion. Although these changes in the male are less pronounced than in the female, even our male anatomy is less adapted to upright posture than Lucy’s—particularly in view of the larger bulk of a male. Lovejoy goes on to say, “The difficulty of accommodating in the same pelvis an effective bipedal hip joint and an adequate passage for a large infant brain remains acute, however, and the human birth process is one of the most difficult in the animal kingdom.”

Since upright posture preceded the enlargement of the brain, perhaps had the sequence been reversed, humans would be quadrupedal today.

The Blois Test

In 1950, the brilliant Alan Turing proposed what is commonly referred to today as the Turing Test. Common descriptions of the Turing Test describe it as a person, acting as an interrogator, submitting questions to both a computer and a person, each of which is secluded in a room away from the interrogator. Communication of the answers is done only by typed text back to the interrogator. If, after a period of time, the interrogator is unable to correctly identify which answers come from the computer vs. the person, the computer has then passed the test and is considered as intelligent as a human.

The actual Turing Test as described in Turing’s original article was a bit different from that. His original description consisted of what he called The Imitation Game, in which a man and a woman were secluded in two separate rooms and the goal of the interrogator was to determine which was which, judging from the typed text answers to queries. The man and woman were instructed to intentionally disguise their sex identity in their answers. The Turing Test would then consist of substituting a computer to take on the role of one of the sexes and determining whether it was equally capable of fooling the interrogator. Turing predicted that by the turn of the century (fifty years later, in 2000), “an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.”

Whatever version of the Turing Test one uses—and there have been many over the years—it has been criticized from many points of view as an inadequate method of determining when computers reach the level of human intelligence. Who would be the “average” interrogator? Who would be the person being interrogated? Too variable and too imprecise. Which of the many definitions of human intelligence is really being tested? What questions would be asked? If you limited the questions to chess moves, IBM’s Deep Blue would have passed the test long ago, yet no one considers it to have human-level intelligence. An excellent review article, written 50 years after Turing’s, described the many flaws of the many possible versions of the test. It concluded by saying, “We believe that in about fifty years’ time, someone will be writing a paper titled ‘Turing Test: 100 Years Later.’”

Marsden S. Blois was a physician on the faculty of the University of California, San Francisco School of Medicine until his death in 1988. He is not as well known as Alan Turing except in narrow professional circles. He pioneered the field of medical informatics and was a founding member of the American College of Medical Informatics. In Chapter 11, on electronic evolution, I discuss his 1980 article in the New England Journal of Medicine entitled “Clinical Judgment and Computers.” It is considered a classic in his field. Below is a figure from this article.

The funnel is meant to show the decreasing cognitive span of a physician during the process of making a diagnosis. At first, represented by point A in the figure, any diagnosis is possible. The physician—using his or her broad knowledge of medicine and of the world, and the ability to interact with other humans by talking with them (taking a history), examining them and observing their behavior—must narrow down the huge number of possibilities to a reasonably small number called the differential diagnosis. Selecting from the smaller number of choices in the differential diagnosis is represented by point B in the figure. The process at point B of getting to the correct diagnosis often requires the use of laboratory tests and other diagnostic procedures and very specific, detailed knowledge of the narrow disease spectrum. Blois’ contention was that physicians are far superior to computers at point A, whereas computers, if programmed properly with the right rules, are superior to physicians at point B.
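Point-B reasoning—choosing among a short differential using specific test results—is exactly the kind of narrow, rule-based task computers handle well. Here is a minimal sketch of that idea; the diseases, tests, and rules are simplified stand-ins chosen for illustration, not actual clinical logic:

```python
# Hypothetical point-B narrowing: given a short differential diagnosis,
# apply simple test-result rules to rank the candidates.
# The differential, tests, and rule table are illustrative only.

differential = ["iron-deficiency anemia", "B12 deficiency", "thalassemia trait"]

# Rule table: (test, finding) -> diagnoses that finding supports.
rules = {
    ("MCV", "low"): ["iron-deficiency anemia", "thalassemia trait"],
    ("MCV", "high"): ["B12 deficiency"],
    ("ferritin", "low"): ["iron-deficiency anemia"],
    ("ferritin", "normal"): ["thalassemia trait", "B12 deficiency"],
}

def narrow(differential, findings):
    """Score each candidate by how many findings support it, best first."""
    scores = {dx: 0 for dx in differential}
    for finding in findings:
        for dx in rules.get(finding, []):
            if dx in scores:
                scores[dx] += 1
    return sorted(scores, key=scores.get, reverse=True)

ranked = narrow(differential, [("MCV", "low"), ("ferritin", "low")])
print(ranked[0])  # iron-deficiency anemia
```

The mechanical part—matching findings against rules—is trivial for a computer. What the sketch cannot do is produce the differential itself from an open-ended conversation with a patient: that is the point-A work Blois argued remains human.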

The funnel could represent any domain of knowledge, not just medicine. Somehow, the human brain acquires the capability to get to point A just by living in the world; it is common sense. I propose a new test to replace the Turing Test—let’s call it the Blois Test. As artificial intelligence (AI) comes closer to human intelligence, point B in the figure will move to the left, closer to point A. For example, Google DeepMind’s AlphaGo software is at point B with regard to playing the game of Go, but it totally lacks common sense; it is far from point A. I would postulate that IBM’s Watson computer, as demonstrated by its victory in the game of Jeopardy!, is closer to point A, but still far from it. When AI finally reaches what is called artificial general intelligence, i.e., intelligence equal to human intelligence, point B will be superimposed on point A. It will then pass the Blois Test.

OK, dear readers, fire away with your critique of this suggestion. 


What's in a name?

Charles Galton Darwin, the grandson of Charles Darwin, suggested the name Homo sapientior (wiser man) when speculating about a future human species. Since we don’t really know whether that future species will be wiser, I elected to use a more neutral name: Homo nouveau (new man). At the time of my research I had no preconceived notion regarding the answer to What Comes After Homo Sapiens?

Many others have coined new terms for a possible future human species and, in the process, have created some confusion regarding their intent in doing so. The confusion is whether they intended an alternative to Homo nouveau as I have defined it or were simply renaming Homo sapiens.

This distinction is subtle but important. It is best illustrated by referring to Figure 9 in the book on page 42 (of the print version). This figure illustrates the difference between anagenesis and cladogenesis. Cladogenesis is the evolutionary branching of a new species from an ancestral species. It is the emergence of a distinct and new species separate from its predecessor. In the case of Homo nouveau, it is a new human species different from Homo sapiens and, as envisioned, coexisting with Homo sapiens. In fact, there is a biological barrier to interbreeding between the two human species.

Anagenesis does not involve the creation of a new species distinct from its predecessor; it is simply the evolutionary change of a single species over time. Normally, we do not assign a new name to a species simply because it has evolved. This is true even if the species at one time differs greatly from that same species at an earlier time. That difference could even be greater than the difference between an ancestral species and a new species created through cladogenesis. It is quite possible, in fact even likely, that the difference between the Homo sapiens of today and the early Homo sapiens of 250,000 years ago is far greater than the difference between today’s Homo sapiens and my speculated Homo nouveau.

In looking at the popular nonfiction science literature, this distinction is not always clear. Take, for example, Yuval Noah Harari’s book Homo Deus. Did Harari intend to suggest that the future Homo deus is a new human species distinct from Homo sapiens (cladogenesis), or is he simply re-characterizing a future Homo sapiens (anagenesis)? Harari’s intent in writing his book was quite different from mine. He was not focused on the issue of speciation so much as the cultural and technological changes that he anticipates Homo sapiens will encounter. My conclusion from reading his book is that Homo deus is not a new species distinct from Homo sapiens, but rather our evolutionary future, i.e., anagenesis. Harari views Homo deus as a kind of “upgrade” to Homo sapiens in which our technologies confer god-like powers on us, improving longevity, happiness, intellect and other capabilities. The issue of speciation is not discussed.

Similarly, Paul Knoepfler’s book GMO Sapiens uses this term to apply to a subset of future Homo sapiens who have undergone genetic engineering for the purposes of human enhancement. These people are still Homo sapiens.

Max Tegmark suggests that we change the name of our species from Homo sapiens to Homo sentiens in his book Life 3.0: Being Human in the Age of Artificial Intelligence. This is in anticipation of the time when artificial intelligence has advanced to the point where it dwarfs human intelligence. Tegmark’s premise is that humans will still be distinguished from these superintelligent machines by our consciousness or sentience, which the machines will lack.

The book Homo Prospectus brings this notion to the present. The authors are simply suggesting an alternative name for the current Homo sapiens species, suggesting that our unique distinguishing feature is not how “wise” we are but rather how capable we are in imagining the future—prospection.

I would characterize John Hands’ fabulous book Cosmosapiens: Human Evolution from the Origin of the Universe as the history of everything. Although he never uses the term Cosmosapiens in the book except in the title, it is obviously meant to imply a broad connection between human evolution and the evolution of the cosmos in general. In any case, it is not the name of some future species—or more precisely, it seems to be the name of all human species past, present and future.

There is one book that appears to be predicting and naming a new future human species: Homo Evolutis by Juan Enriquez and Steve Gullans. They define Homo evolutis as a species “that directly and deliberately controls the evolution of its own and of other species.” This will be done through germline genetic engineering. But even they muddy the concept. Since it will be Homo sapiens that will do the initial genetic engineering, it is not clear if they are renaming Homo sapiens (anagenesis) or describing the new species that is created by Homo sapiens, which, in turn, will continue to use genetic engineering to create yet more species.

After all is said and done, there is no question that Homo nouveau as I define it would be a distinctly new human species, fitting the definition of a cladogenetic speciation event. It is the only clear-cut, fully described such case I have found in the nonfiction literature. If you know of others, please respond here.