The Case For Synthetic Sentience
Steve Jones, May 2022 (pre-LLM/ChatGPT)
Abstract
Although progress has been made in Artificial Intelligence (AI) over the last 70 years, the big catalyst that will transform the world, as the 20th century invention of the transistor did, has yet to be built. While the goal of AI has been to embody intelligence in machines, it is unlikely that possessing built-in intelligence, such as a hard-coded understanding of a natural language or road system, will effectively scale to power high-functioning machines that engage the world in a general way. AI's toolbox of paradigms, including rule-based Expert Systems (ES), Blackboard Systems (BS), and Machine Learning (ML), has been successfully applied in narrowly-scoped applications such as robotic vacuum cleaners and self-driving cars, because these paradigms are useful programming tools for solving problems too complex for an algorithm implementing a fixed control policy. So far, these applications require hard-coded, integrated domain knowledge and application-specific training for their particular function and situational framework.
Sentience, or the perception of and response to sensation, is a lower-level, more general operational paradigm-- a foundation upon which skills such as walking, driving, or communicating in a natural language are learned through direct experience while operating in the natural world. Understanding how sentience arises in natural brains makes it possible to replicate the same effect in synthetic brains, for deployment in open-ended applications. Whereas robots built using present-day AI paradigms are proficient in fixed-function applications, robots integrating Synthetic Sentience (SS) will gain proficiency in problem domains as needed and become more successful, overtaking the performance of fixed-function devices.
It is difficult to overstate how much of an impact SS will likely have on the world. SS is a new fundamental building block that will enable an enormous wave of future innovation, just as the invention of the transistor at Bell Labs in 1947 by Bardeen, Brattain, and Shockley gave rise to the replacement of vacuum tubes and relays in digital computers, leading to the proliferation of digitized information, computers on every desktop and in every home, integration of embedded computers in virtually all electric and mechanical equipment, global connectivity through the internet, and a fast, global, digital economy.
In this paper, we start with what it means to be sentient and how the effect arises in natural brains, then explore how it can arise in synthetic brains and how synthetic sentience may be deployed in applications. Finally, we make some predictions about how the world will change as a result.
Introduction
Sentience is exhibited today in a huge range of natural organisms spanning a wide range of complexity, from the small brain of Drosophila melanogaster (a common fruit fly, with a brain containing about 220,000 neurons) to that of Homo sapiens (roughly 86 billion neurons) and the even larger brains of dolphins, sperm whales, and elephants. These animals share a common behavioral characteristic-- by perceiving and responding to sensation, they are able to navigate a complex world, lead productive lives, and interact with other animals exhibiting their own dynamic behavior. They also share a common feature-- a Central Nervous System (CNS) built from a species-specific brain plan that does not specify the exact set of neurons and their connectivity. Surprisingly, the wiring of an individual's CNS is unique within most species.
A few simpler animals have fixed nervous system architectures. At the lower end, Caenorhabditis elegans (or more commonly, C. elegans), a well-studied small nematode that comes in two forms (male and hermaphrodite), has a fixed, precisely-wired neural network of exactly 302 neurons in the hermaphrodite and about 385 in the male, replicated in all individuals. This worm's nervous system identifies chemical gradients in the environment and seeks out nutrition through muscle movements, also facilitating reproduction. Other animals, such as sea squirts, begin an ambulatory life using a brain, though once they attach to a substrate on the sea floor, their brain atrophies, being no longer needed.
Outside the animal kingdom, plants and fungi communicate through the exchange of chemical signals, on a much slower timescale than is found in animal communication. We have only begun to explore how plants become aware of their environment and each other; however, it is clear that plants do not have brains because they do not navigate their environment with muscle movements-- that is a distinct property of animals (and of sea squirts, until they attach to the sea bed and become plant-like).
Brains exist in order to support interaction with the world, through navigation made possible by muscle movements, and through communication made using sounds produced by muscles. Sentience starts, then, with a brain that typically controls muscles that enable the organism to navigate its environment and interact with it.
Natural brains also receive inputs. Humans have sight, hearing, smell, taste, skin sensations (pressure, pain, and temperature), and internal sensations such as proprioception (flexion and extension of limbs, and sensing of the contraction of muscle groups) and the vestibular sense (detection of up/down, left/right, and forward/backward motion). Fish have a lateral line that senses pressure disturbances in the surrounding water, and some species detect the electric fields generated by the nervous system activity of other nearby animals.
Fundamentally, brains enable the organism to move and affect the environment, which in turn changes the set of sensory signals the same brain receives. This closed loop allows the brain to respond to the environment, and to model it and the changes it makes to it, by changing itself through synaptic plasticity. We hypothesize that this feedback loop is the mechanism that enables sentience and the corresponding visible behavior to arise. Give a baby a floor and enough time, and the baby will learn to roll over, sit up, crawl, and eventually walk.
This means there can be no sentience in a brain that has never received input but randomly actuates the organism's muscles, because it will not be able to associate moving a muscle with motion or with making a sound. It is only when feedback occurs that the two can be associated. Similarly, there can be no sentience in a brain that receives input but has never moved or otherwise affected its environment, because it will not be able to tell that it is independent of the sensory reference frame and all of its information-- it will be a mere observer.
Sentience, then, needs a world to explore, and a brain that both senses the world through sensors and directs actuators to navigate it, in order to develop. Interestingly, an SS need not actually explore a physical world. If a brain could sense a virtual world, and navigate that world with virtual muscles or more direct control over it, such as the buying and selling of securities, then we can imagine that it could be sentient and lead a productive life in a virtual environment not associated with the physical world.
How Do Natural Brains Implement Sentience?
It is commonly assumed that sentience is produced by a precisely-wired magic circuit in the brain or, at the other end of the spectrum, that it is abstractly based on philosophical or psychological constructs. Both ideas are misguided.
Low Level Brain Wiring is Not Fixed
Natural brains have evolved over a vast timescale, across a wide range of animal species and physical environments. The number of different brain wirings that nature has successfully built is staggering. A conservatively low estimate of 10^9 (one billion) animal species have been explored by nature, each with its own brain plan whose architecture is expressed by its genome. Even if we do not count the highly-replicated simpler animals, such as the over 110 trillion mosquitoes living at any one time on Earth, and instead give more weight to the 108 billion humans who have ever lived, a conservative estimate of 10^11 (one hundred billion) individuals per species (a gross average across insects, birds, fishes, reptiles and amphibians, and small and large mammals) have instantiated their species' brain plan during neurogenesis, each uniquely wiring up a compatible instance of the plan, even though the circuit is, at a low level, completely different from individual to individual. Very conservatively, then, 10^(9+11) = 10^20 (one hundred quintillion) individual brains with unique wiring have successfully navigated the world and led productive lives. There is clearly a huge set of implementations exhibited by nature that are actually sentient. The lowest-level details of brain wiring do not seem to matter, nor does there seem to be a universal brain wiring architecture. Instead, there must be a set of organizational principles that can be distilled to salient attributes that, when expressed by any of these implementations, give rise to the sentience effect.
This conclusion makes sense from another perspective: the human genome contains approximately three billion base pairs that encode a human's entire body plan and operational parameters. With some 86 billion neurons making up the human CNS, there is simply no room to encode the brain's detailed wiring. During neurogenesis, more general wire-up principles take over, allowing for huge variability in the actual implementation of the architectural brain plan.
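A back-of-envelope sketch makes the capacity argument concrete. This is only an illustration in Python; the synapses-per-neuron figure is an assumed round number, not a measurement:

    # Rough capacity check: can ~3e9 base pairs specify detailed wiring?
    base_pairs = 3e9              # human genome size (from the text)
    genome_bits = base_pairs * 2  # 4 nucleotides -> 2 bits per base pair

    neurons = 86e9                # human CNS neuron count (from the text)
    synapses_per_neuron = 7_000   # assumed average; actual values vary widely
    synapses = neurons * synapses_per_neuron

    # Even a single bit per synapse exceeds the genome by orders of magnitude.
    print(f"genome capacity : {genome_bits:.1e} bits")          # ~6.0e9
    print(f"synapses to wire: {synapses:.1e}")                  # ~6.0e14
    print(f"shortfall       : {synapses / genome_bits:,.0f}x")  # ~100,000x

Under any plausible choice of these assumed figures, the genome falls short of a full wiring diagram by several orders of magnitude, which is why generative wire-up principles must do the rest.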
In addition to introducing new brain plans, nature has had plenty of opportunities to refine them all through sheer brute force exerted by evolutionary pressure on a global scale. As behaviors become more successful in a population, they are eventually rewarded with optimized wiring. Because nature has been working on brains for a very long time, the ones we study today are likely to contain significant optimizations, or special-purpose hard-wired functions, that may obfuscate the underlying architecture. To borrow terminology from computer software: if a brain plan were written as a structured program with functions and statements within those functions, there would be GOTO statements everywhere, optimizing the behavior and making it nearly impossible for a human to understand or maintain. Further, if we believe that evolutionary forces created brains to begin with, then the earliest brain plans, written as programs, should consist only of GOTO statements with no structured code at all, with more generalized structures arising only later. This is actually the case: the neocortex was a late evolutionary addition to the mammalian brain plan, a more uniform computing fabric composed of repeating cortical columns.
Most early brain research, in animals small and large, focused on the older, fixed-function areas of the brain-- the brain stem, the midbrain, the limbic system, and the allocortex. More recently, work on understanding the neocortex's columnar architecture has gained momentum.
This brief tour suggests that there are many ways to wire up a working brain, and that many brain architectures work. Looking at what these brains share will likely reveal the much larger-scale architectural elements they have in common-- the salient attributes of brains that give rise to sentience itself.
Philosophy and Psychology Are Not Fundamental
It is easy for humans to use their direct experience of consciousness to explain how sentience arises. As we attempt to do so, we find explanations in terms of concepts we have already come to know through natural language, such as thought, emotion, reasoning, feeling, reflection, truth, ethics, morality, and faith. However, with the exception of hard-wired primitive emotions arising from our limbic system, such as fear, and cravings, such as hunger, these constructs are built on language and its vocabulary. The salient parts of the mechanism that allow us to consider these concepts are language elements, not brain mechanics. Although most (but not all) of us experience "thoughts" as a voice in our heads, these are secondary, not the substrate from which sentience emerges.
This situation is similar to the two-level architecture of today's microprocessors. We colloquially say that the microprocessor runs machine code, but in fact the microprocessor's microcode interpreter consumes the processor's microcode as data, and the resulting interpreted engine in turn consumes machine code as data; the lower-level hardware never sees machine code directly. Similarly, the higher-level thoughts we seem to experience as fundamental are in fact implemented with language. They do not form the basis for sentience-- the perception of and response to sensation-- a much lower-level function that can eventually give rise to natural language, supporting the "higher" cognitive experiences we sometimes refer to as consciousness. Although interesting, that level of processing is not our focus here; we are interested in the fundamentals of sentience.
Synthetic Brains
Can sentience arise only in natural brains, or can artificial ones work too, as artificial hearts do? While natural brains have evolved to use the materials available to nature, it is reasonable to ask whether the particular materials nature uses are actually necessary for sentience to arise. We can answer this question from a few different vantage points using thought experiments.
Let's start with a functioning human brain, and imagine replacing one of its living neurons with an artificial one, keeping the same connections to the other neurons that the original held. This artificial neuron would have the same electrical and chemical properties as a living neuron, but it would be made of synthetic materials instead of cytoplasm, and it would fire Action Potentials (APs) exactly as the living neuron would. We would expect the whole brain to continue working. In fact, we could replace the entire brain with artificial components, and at no point would we expect to see a difference in function, because the artificial neurons are constructed with sufficient operational fidelity to yield exactly the same behavior at the neuron level. It would appear that the materials used by nature are one way to achieve sentience, but not the only way. It is the behavior of the neurons that matters, not the materials from which they are constructed.
Now imagine that each artificial neuron uses a wireless radio signal to communicate with an external computer, signaling the computer when inputs change, and receiving signals from the computer when the computer calculates that the neuron should fire its AP. We have moved the calculations for behavior of each neuron to the computer system, and only the shells of neurons and their connections to other neuronal shells remain. Again, with suitable simulation fidelity of each neuron's behavior, the system would continue to behave as the natural brain did. Artificial neurons need not be entirely physical for sentience to be exhibited.
Let's go one step further. Let's move the entire connectome of neurons and their connections into the computer's simulation engine, and simply leave the edge neurons in the body-- the neurons that interface with incoming sensory signals and the output nerves that drive the muscles. We would expect that with sufficient simulation fidelity, the same behavior would occur, and the brain's behavior would continue unchanged.
What level of fidelity would be necessary and sufficient for a simulation of a natural neuron to succeed? We have good data about neuronal behavior, both at the very low level (sub-1ms timescale), where chemistry translates into the electrical pulses of AP activity, and at the timescale at which AP pulses are observed (1ms and beyond). If our simulator can model all of the important neuronal parameters-- including post-synaptic potential (PSP), receptors and the actions they have on the host neuron, PSP decay, and others-- we may replicate the spiking characteristics of the natural brain, in simulo or in silico.
While high-fidelity (<1ms) simulation might seem desirable, it may not be strictly necessary. Nature implements neurons with materials whose chemistry and physics are modeled by continuous differential equations-- a costly complexity for simulation. Perhaps the continuously differentiable physics that eventually gives rise to AP pulses is one way to achieve these results, but perhaps it is not the only way. In fact, discrete digital simulation might effectively reproduce the spiking behavior of simulated neurons at any desired timescale. We expect to learn through experiments whether sentience will arise using discrete simulations with an adjustable timestep, from sub-millisecond to hundreds of milliseconds, showing the extent to which simulation timescale affects the overall efficacy of a simulated model with respect to the timescale and physics of an external environment. Converting the continuous physical model to a discrete one could significantly reduce computational work, allowing larger, more sophisticated models to be simulated.
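To make the idea of discrete simulation at an adjustable timestep concrete, here is a deliberately minimal Python sketch of a leaky integrate-and-fire neuron stepped with Euler integration. The parameter values are illustrative assumptions, not measured biology:

    import numpy as np

    def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-70.0,
                     v_thresh=-55.0, v_reset=-75.0):
        """Discrete-time leaky integrate-and-fire neuron.

        dt is the adjustable timestep in milliseconds; tau is the membrane
        time constant. Returns the spike times in ms."""
        v = v_rest
        spike_times = []
        for step, i_in in enumerate(input_current):
            # Euler step of the membrane equation dv/dt = ((v_rest - v) + I) / tau
            v += dt * ((v_rest - v) + i_in) / tau
            if v >= v_thresh:              # threshold crossed: fire an AP
                spike_times.append(step * dt)
                v = v_reset                # post-spike reset
        return spike_times

    # Constant drive; rerun with different dt values to probe how the
    # chosen timestep shifts spike timing.
    drive = np.full(1000, 20.0)
    print(simulate_lif(drive, dt=1.0))

Running the same drive at dt values from 0.1 to 100 ms is exactly the kind of timescale experiment described above: the spike train's shape persists, but its timing precision degrades as the timestep grows.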
A simulation engine would require significant computation and storage capacity to model a human brain in real time, performing the computation associated with every neuron for each millisecond of operation. However, it need not evaluate every neuron at every step. Most neurons in any brain are actually quiet most of the time-- only a tiny fraction of the brain's neurons fire at once. If we cleverly build our simulator to focus computation only on the neurons that might become active, the inactive ones may be assumed quiescent and ignored computationally. We can also distribute the simulation workload across several CPUs and host computers, to a cluster of machines, or across the internet, further increasing simulation capacity for larger simulated brains.
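A minimal sketch of that event-driven idea follows; the names and parameters are illustrative, and a real engine would also model PSP decay between events rather than ignoring it:

    import heapq

    def run_event_driven(connections, weights, seed_spikes, horizon,
                         threshold=1.0, delay=1.0):
        """Event-driven spiking simulation: only neurons that receive a
        spike are touched; quiescent neurons cost no computation.

        connections: dict mapping neuron -> list of downstream neurons
        weights:     dict mapping (pre, post) -> synaptic weight
        seed_spikes: list of (time_ms, neuron) events that start activity
        """
        potential = {}                        # sparse membrane potentials
        events = list(seed_spikes)
        heapq.heapify(events)                 # min-heap ordered by time
        while events:
            t, n = heapq.heappop(events)
            if t > horizon:
                break
            for m in connections.get(n, ()):
                # accumulate PSP only on neurons actually receiving input
                potential[m] = potential.get(m, 0.0) + weights[(n, m)]
                if potential[m] >= threshold:
                    potential[m] = 0.0        # reset and schedule the AP
                    heapq.heappush(events, (t + delay, m))

    # Toy chain 0 -> 1 -> 2 with weights strong enough to propagate a spike.
    run_event_driven({0: [1], 1: [2]}, {(0, 1): 1.0, (1, 2): 1.0},
                     [(0.0, 0)], horizon=10.0)

Because the event queue and the potential table are both sparse, the cost scales with the number of spikes rather than the number of neurons, which is what makes large quiescent populations nearly free.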
Another look at the simulation capacity problem gives us confidence that we already have sufficient capacity to do sentience research using commercial off-the-shelf (COTS) hardware-- we need not attempt human-sized brains immediately. We can start with smaller ones, at the scale of the aforementioned Drosophila melanogaster, as a way of studying how sentience arises in brains; once we observe sentience arising in any form, we can scale up the model and watch for more sophisticated (and perhaps more recognizable) sentient behavior to emerge.
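A rough feasibility calculation supports the claim; the fan-out, per-synapse state size, and activity fraction below are assumptions chosen only to show the order of magnitude:

    # Does a fly-scale brain fit on commodity hardware? (illustrative figures)
    neurons       = 220_000        # fly-scale CNS, from the text
    fanout        = 1_000          # assumed mean synapses per neuron
    bytes_per_syn = 8              # assumed packed weight + target index

    storage_gb = neurons * fanout * bytes_per_syn / 1e9
    print(f"connectome storage: {storage_gb:.1f} GB")          # ~1.8 GB

    active_fraction = 0.01         # only ~1% of neurons firing at once
    ops_per_ms = neurons * active_fraction * fanout
    print(f"synaptic ops per simulated ms: {ops_per_ms:.0f}")  # ~2.2e6
    # ~2.2e9 ops/s for real-time operation: within reach of a desktop CPU or GPU.

Under these assumptions the whole connectome fits in the RAM of an ordinary desktop, and real-time operation needs on the order of a few billion simple operations per second.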
The Rise of Synthetic Sentience
By now we have argued that sentience emerges from the collective behavior of connected neurons, and that while nature used living cells, a simulation of neurons with the right connectivity (and by extension, a simulation in pure hardware) can also serve as a suitable substrate for sentience to arise.
We also know that sentience is part of a system involving an external (perhaps physical) world and a brain (natural or synthetic) with a body that has environmental sensors to sense the world or states of the body, and actuators that enable the body to move about and change the world.
Three things are needed to proceed with an experiment: a robotic body with sensors and actuators, an environment for the body to navigate, and a brain simulation that can receive inputs from the robotic sensors and deliver commands to the robotic actuators.
If we were to choose a virtual world for a virtual robot to navigate, the resulting sentience might look very different to us. The physical world is full of complexity, and perhaps it is that complexity, compared to the mathematical purity of a virtual world, that catalyzes the kind of sentience with which we are familiar. The physical world is therefore a good starting point, which encourages us to create a physical robot to navigate it, borrowing ideas from nature.
While natural animals crawl, burrow, swim, and fly, not all of these paradigms are equally easy to implement in robotics. Burrowing, flight, and swimming are mechanically difficult and hard on mobile electronics. It makes sense to start with robots that have basic articulated limbs which, though uncoordinated in the beginning, might become coordinated through learning to roll over, crawl, or walk.
Adding vision makes it possible for the robot to see itself, from either a third-person or a first-person point of view, allowing it to observe its own movements. The robot might easily house a small Raspberry Pi computer running Linux and a program that moves servos or stepper motors and captures video camera data and other sensory streams, such as proprioceptors in the body or on the articulated legs. The sensory data could be streamed from the robot computer over Wi-Fi or Bluetooth to a computer running the brain simulation, and the brain simulation computer could use the same network to send pulses back to the robot computer to move the servos.
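A minimal sketch of the robot-side loop might look like the following; the host name, port, and message fields are hypothetical, and a real system would also need framing, timeouts, and reconnection logic:

    import json
    import socket

    BRAIN_HOST, BRAIN_PORT = "brain-sim.local", 9000   # hypothetical endpoint

    def read_sensors():
        # Placeholder: would sample the camera and proprioceptors here.
        return {"camera": [], "joint_angles": [0.0, 0.0, 0.0, 0.0]}

    def drive_servos(commands):
        # Placeholder: would write pulse widths to the servo controller.
        pass

    with socket.create_connection((BRAIN_HOST, BRAIN_PORT)) as conn:
        stream = conn.makefile("rw")
        while True:
            stream.write(json.dumps(read_sensors()) + "\n")   # sensors out
            stream.flush()
            drive_servos(json.loads(stream.readline()))       # motor pulses in

Note that nothing on the robot side interprets the data; it merely shuttles raw sensation out and raw actuation in, leaving all behavior to the simulated brain.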
Note the lack of built-in behavioral functionality in such a system-- there is no built-in knowledge about how to move the legs together in a connected way, nor is there any sensory processing that performs scene understanding in the robot computer. It is the rise of purposeful coordinated behavior in the system that we are looking for; perhaps coordinated leg movements that give rise to locomotion toward a stimulus.
Housed separately from the robot's body, the brain simulation computer is free to consume more power and provide adequate computational capacity. This system simulates a model crafted by a model designer using brain layout tools. During normal operation, some neurons are stochastically stimulated by the simulator, mimicking the constant sprinkling of neurons in a natural brain that are always firing. This leads to servos being activated at random, making the robot randomly kick its legs.
As the simulator implements synaptic plasticity, neurons that fire together tend to coordinate more and more over time. Eventually, these plastic circuits tend to generate activity that causes the sensory data to change-- behavior begins to emerge and evolve. Further, behavior that does not yield results diminishes over time. With enough sophistication, we can view this as very simple but purposeful behavior-- and see it as evidence of the beginnings of SS.
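A toy sketch of this "fire together, wire together" dynamic is shown below; the learning rate, decay, and firing probability are arbitrary illustrative values, not a claim about biological constants:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100                                   # toy neuron count
    w = rng.uniform(0.0, 0.1, size=(n, n))    # initial synaptic weights
    rate, decay = 0.01, 0.001                 # reinforcement vs. forgetting

    def plasticity_step(fired):
        """Hebbian update: strengthen synapses between co-active neurons;
        let synapses that did not participate slowly decay."""
        global w
        co_active = np.outer(fired, fired).astype(float)
        w += rate * co_active                 # fire together, wire together
        w -= decay * w * (1.0 - co_active)    # unused behavior diminishes
        np.clip(w, 0.0, 1.0, out=w)           # keep weights bounded

    # Stochastic background firing, as described above:
    for _ in range(1000):
        plasticity_step(rng.random(n) < 0.05)

Even in this crude form, neurons that happen to fire together repeatedly accumulate strong mutual weights while the rest fade, which is the seed of the coordination described above.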
With enough sensory input channels and motor control channels, it will become difficult for us to tell the difference between coordinated movements and well-thought-out planning; that is because with sufficient neural fabric complexity, enough temporal pattern history can be encoded through synaptic plasticity to form longer real-time chains of behavior.
Of course, locomotion is only one behavioral indicator that sentience has started to emerge. The response to spoken language, and the production of language, would be another important one that we as human researchers would be able to recognize. These two functions seem like quite separate behaviors in humans, but they are actually very similar processes. Just as limbs begin with stochastic movements that become coordinated over time, so too do we observe babies babbling, listening to themselves, and listening to others, their vocalizations gradually morphing into recognizable language with experience.
Layered on top of locomotion and natural language, other more powerful behaviors can emerge. With sufficient real-world experience, the memories of a voice replay spontaneously in the robot, giving rise to a "voice in the head". The robot now experiences "thoughts" of its own, and can "think" about things it has encountered, situations in which it finds itself, and the future. It can coordinate with others. It is this level of sophistication that will be the catalyst for changing the world with SS.
The Value of Synthetic Sentience
To predict the future for a new technology like SS, it can be useful to look back on how other technologies have changed the world. From the introduction of the transistor in 1947 to today's internet of the 2020s, we can see big steps along the way. The transistor was an electronic switch that required far less power to operate than vacuum tubes, and consumed far less space. It could be densely packed onto circuit boards, resulting in higher densities and enabling mobility, as with hand-held transistor radios, consumer electronics such as alarm clock radios and audio/video equipment, and communications devices such as car phones and mobile radios.
Higher densities also gave rise to ideas for miniaturizing equipment, and soon integrated circuits and microprocessors were born. This was a key development, because it meant that higher capacity, lower cost, and lower power consumption all became a function of materials science. With steady developments, first in multi-layer printed circuit boards and later in lithographic manufacturing, increasing densities improved these three properties exponentially, following Moore's Law.
Soon, computers were "on every desktop and in every home". Connectivity was essential, and the internet became a publicly-usable platform for communication and services. Eventually, reduction in power consumption and footprint led to hand-held smart phones, and the market efficiently filled in all possible devices between desktop PCs and hand-held smart phones.
A key part of the growth of the electronics industry is that the transistor became a component in the very tools that in turn accelerated the transistor's impact on the world. It was as if gasoline were poured on a fire-- it burned as hot as possible and reached as far as possible, into every corner of the market.
Now let's think about sentience as though it were at the transistor stage. We can imagine the first four-legged robot demonstrating learned locomotion and exploratory attention to its environment. It navigates around fixed and moving items in the physical world, more capably than the Roombas of today.
Eventually it learns to understand natural language and can follow commands. It learns to speak with humans, and can alert a human when someone is at the front door or a pet needs to go outside.
More capable robotic bodies with more sensory and motor pathways are developed; many more, in fact. Some prove to be more useful than others, and eventually seem well-fitted to specific tasks that people don't want to do, like cleaning, inspecting dangerous situations, or providing security.
Synthetic brain models become more sophisticated as the market demands more capabilities from the physical robots they power. Eventually, leading models excel at the tasks people most want to offload. Third parties train synthetic brain models in environments and sell them, and the good ones enjoy market success. Consumers of robots employing those synthetic brains use them in other ways, and the collective experiences of those synthetic brains can be merged back into a combined model that is more powerful than the original-- making acquired experience the valuable commodity. Educators and trainers whose robots have accumulated the most sought-after experience will be rewarded with value they can sell. Ultimately, the value will not be in the robots themselves, but in the recorded experiences, the learned skills, and the brain models that support them, all of which can be replicated profitably.
Just as Intel and Microsoft routinely innovated new building blocks for PCs, a thriving industry of synthetic brain modelers and robotics manufacturers will create better and better SS substrates, deployed into the world via purchased robots that enrich them with experiences and skills. In the early market, money will be made by licensing the models and selling the robots themselves. Eventually, robots will replace the humans creating the basic building blocks; these may not even be physical articulated robots, but virtual ones that understand a virtual world of brain architecture and virtual models of robots and the physical world.
Synthetic sentience will become ubiquitous and more powerful. We won't just have a robot in every business and in every home. Because any SS may be cloned with a file copy operation, it will cost next to nothing to replicate a sentience for every living person who wants one, just as smartphone software is often free, or costs a thousandth of what desktop software does, and is actually monetized through other means, such as advertisements. Personal SSes are likely to stay with people as companions, seeing and hearing through their smartphones or whatever such sensors eventually evolve into. A companion to converse with, to turn to for advice and comfort, and to serve as an extension of our body operating in the virtual world-- all will be functions offered by a personal sentience.
There is no reason a person should own only one SS; a person might replicate thousands, or even millions, of SSes to serve different functions, tackle hard problems, or act as force multipliers to make things happen in the world that could not be accomplished by one person alone. Ambitious individuals like Henry Ford, Bill Gates, and Elon Musk have created companies to amplify their ambitions and make them real-- pioneering mass production, eradicating diseases, advancing renewable energy, and commercializing space exploration. In the 2080s, it should be possible to harness large groups of SSes, lowering the cost of coordinated positive impact on the world, and reducing the negative impact that a company's footprint has on the globe's limited resources.
The ubiquity of low-cost SSes might alarm those who worry that they will take jobs away from people who want to work. Even a shallow investigation reveals this notion to be short-sighted. There will always be a shortage of labor, because there will always be more work that needs to be done. Replicated SSes will allow anyone, at very low cost (lower than a monthly cell phone bill), to command the leverage they need to get more done without having to shoulder all the work themselves. Synthetic sentiences will create more possibilities, and more job opportunities for people.
Synthetic Sentience as a Business
The infrastructure for SS will expand to the size of the internet, with endpoints in the form of physical or virtual robots, and Sentience-as-a-Service (SaaS) providers in the cloud offering tiers of synthetic sentience capacity, capability, and experience. Still other third parties will educate and train robots and sell the stored experience back into the market as add-on capabilities, much as applications are sold in today's smartphone market.
The market for SS will be much larger than the car, PC, or smartphone markets-- billions of SSes may even outnumber the natural human sentiences in the world, though a large portion of them are likely to be virtual; and that is only the consumer market, not the commercial or governmental ones.
Small, medium, and large companies will deploy SS as force multipliers to be more productive, and the gap between the capabilities of small and large companies will narrow, enabling smaller companies with great ideas to realize the scaled results traditionally found only in larger companies with more employees.
Governments will deploy virtual SS to handle administrative workloads that are currently backlogged-- background investigations, tax return processing, general accounting, and many other tasks that do not scale well with humans alone doing the work. For example, the 73,000 human IRS agents facing a backlog of work in 2020 might have been reorganized into 10,000 humans, each augmented by 10 SS tax agents, achieving the work of a 110,000-agent workforce, the majority of which would work tirelessly, 24/7/365, and require no facility other than a SaaS instance and an email account, since they would be virtual agents. The resulting redistribution of labor could reduce the workload of employees, increase their pay, and improve overall throughput and accuracy. Internally, the IRS's IT department might deploy its own Agent-as-a-Service (AaaS) cloud infrastructure, spinning up agents on demand at tax time.
Synthetic sentience costs will fall exponentially with the declining costs of computer technology, SaaS hosting, and a multitude of competing synthetic brain models. Whole-SS replication will have near-zero cost, making it feasible for every human who wants one to have at least one companion/agent SS that lives with them for their natural life, and perhaps eventually continues on with their children or heirs.
The market will value and create demand for robots, virtual or physical, which are competent by virtue of their experiences. The value chain will consist of (a) brain models and simulation engines, (b) robotics, and (c) experience.
It has been said that 150% of the profit on the sale of a typical PC or laptop went to Intel and Microsoft; that's because the OEM bought a circuit board from an ODM and wrapped a case around it at a loss. The OEM made money by making replacement batteries for its laptop products and by selling space on the hard drive to software companies so that their demos could be pre-loaded during manufacturing. The subsequent attach rate-- the rate at which consumers buy the real software from the software vendors after trying the demos-- fueled the software companies and made the whole process work. Similarly, we should expect simulation engines and robotics to be pushed down in value as they become mass-produced with better and better technology, and brain models to become ever more inexpensive as a general understanding of them becomes part of a high-school education. Eventually, the only true value in this value chain will be experience.
Initially, experience will be packaged as a digital archive of a synthetic brain model that has been deployed for some time in a robot and has built working knowledge through experience in its environment. Whole copies will be replicated at first, but later, innovations surrounding incremental archives of experience will follow, and individual experiences will likely be captured as digital add-ons for redistribution to robots that have not gone through the same real-world experiences. Natural language, job skills, and knowledge will be packaged as downloadable data, available for purchase, time-limited lease, or monthly subscription. Perhaps some add-on packages will be available on demand, downloaded as needs arise in a robot's experience.
Conclusion
Synthetic sentience is the invention that finally delivers Artificial General Intelligence (AGI) and with it, enormous value that scales to a huge range of applications. Its capacity for exponential growth makes it a catalyst for positive change in the same way the transistor was for modernizing the 20th century.
Steve Jones
Founder & CTO
Explorations into synthetic sentience and building the robotics used to demonstrate it.