Frequently Asked Questions

Steve Jones, NeuroSynthetica's founder and CTO, offers answers to questions about NeuroSynthetica and what's ahead.

Q: How long have you been doing this?

Steve: I've been thinking about simulating sentience for a long time. Back in college as a computer science undergrad in the 1970s, I had the idea that perhaps brains didn't use static structures to represent information, but instead used dynamic, oscillating patterns, something like those found in Conway's Game of Life. I realized there were far more dynamic states than static ones in a given set of bits, and that nature had probably found a way to exploit that fact.

It wasn't until 2013 that I started thinking about an at-scale implementation, fueled by the resurgence of brain research, especially circa 2008. I spent months figuring out how to scale a simulation on commercially available hardware, and implemented my first server. Years later, after selling my last company and retiring, I turned to the project full-time, and designed and implemented the Sentience Engine™, along with the Workbench, which is used to interact with the server. The development has been a full-time job, and going forward, building the company will be a full-time job for me as well.

After years of stealth-mode development, especially concentrated in 2020, the company made its announcement in Q1 of 2021.

Q: Where are you right now in terms of results?

Steve: We've only just started. Synthetic Sentience is, right now, just a goal. The company's mission is to solve sentience and enable its safe and effective deployment. We have the tools and a plan. We are at the same stage Unix was at when K&R had an operating system and a C compiler-- ready for the next great application.

I don't expect NeuroSynthetica to solve sentience on its own, but I'm thrilled to be involved in the effort, and it will be exciting to see others make significant achievements. At this point, we have demos showing that the tools work, and they await use by the developers who will make synthetic sentience happen.

Q: How do we get from tools demos to real sentience?

Steve: The real key will be to get those tools into the hands of as many developers as possible, while still enabling NeuroSynthetica to hire the people needed to support them. That's why we have a community portal and a subscription model with a nominal annual fee that provides access to the tools and information needed to get things going.

At the same time, there is much work to be done to realize the vision of achieving synthetic sentience. The tool chain includes our SOMA™ modeling language and basic constructs, but it lacks the functional equivalent of a set of class libraries that can be leveraged. In the future, for example, sentience models won't have to define objects from scratch; they will likely include sentience fabric classes that embody proven research in concrete libraries, where the microcircuitry level is already established and simply leveraged by the application.
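To make the idea concrete, here is a purely hypothetical sketch, in Python rather than SOMA™: none of these class or method names come from NeuroSynthetica's actual tool chain; they are invented for illustration. The point is only to show the shape of the vision, where an application composes prebuilt "fabric" modules whose internal wiring the library has already established, rather than defining microcircuitry from scratch:

```python
# Hypothetical illustration only -- not the SOMA(TM) API.
# Sketches how prebuilt "fabric classes" could package proven
# microcircuitry so an application merely composes and sizes them.

class FabricModule:
    """Base class for a reusable block of microcircuitry."""
    def __init__(self, name, neurons):
        self.name = name
        self.neurons = neurons          # node (simulated neuron) count

class CorticalColumn(FabricModule):
    """A prebuilt column: internal wiring is fixed by the library."""
    def __init__(self, name, neurons=10000, layers=6):
        super().__init__(name, neurons)
        self.layers = layers            # microcircuit detail, preestablished

class BrainModel:
    """An application-level model composed from library modules."""
    def __init__(self):
        self.modules = []
    def add(self, module):
        self.modules.append(module)
        return self                     # allow chained composition
    def total_neurons(self):
        return sum(m.neurons for m in self.modules)

# The application composes, it does not redefine microcircuitry:
model = BrainModel()
model.add(CorticalColumn("V1")).add(CorticalColumn("A1", neurons=8000))
print(model.total_neurons())   # 18000
```

The design choice being illustrated is the same one class libraries made for conventional software: the hard, validated detail lives in the library, and the application only selects and arranges it.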

Another area of work is a library of macro-level organization for synthetic brains, such as the establishment of functional units (e.g., cortex functional areas, cerebellum, and brain stem equivalents). In the future, it will be straightforward to simply include a library of high-level brain organization and adapt it with tuning parameters to deliver immediate results for a given application.
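As a rough sketch of that "include and tune" workflow, the fragment below is again entirely hypothetical: the template keys and values are invented here, not part of any NeuroSynthetica library. It shows only the mechanism, where a stock high-level organization is included and then adapted through tuning parameters:

```python
# Hypothetical illustration only -- template keys and values invented.
# A stock brain organization is included and adapted via tuning
# parameters rather than being designed from scratch per application.

DEFAULT_ORGANIZATION = {
    "cortex.visual.columns": 1000,
    "cortex.motor.columns": 400,
    "cerebellum.scale": 1.0,
    "brainstem.reflex_gain": 0.5,
}

def organize(template, overrides):
    """Return a copy of the template with tuning overrides applied."""
    config = dict(template)             # never mutate the library default
    for key, value in overrides.items():
        if key not in config:
            raise KeyError(f"unknown tuning parameter: {key}")
        config[key] = value
    return config

# A vision-heavy application tunes the stock organization for its needs:
config = organize(DEFAULT_ORGANIZATION, {"cortex.visual.columns": 2500})
print(config["cortex.visual.columns"])   # 2500
```

Rejecting unknown keys is the small but important part of the sketch: a tuning surface only works if the library, not the application, defines which parameters exist.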

These ideas about evolving sentience technology are very similar to the way integrated circuits evolved. Initially, there were the early 7400-series DIP packages with four gates per package, and a lot of products were produced with that level of technology. Then companies like Intel and Zilog packaged whole CPUs, which were used in another wave of products, like calculators and PCs. Then companies like ARM got into the business of creating designs and licensing them to customers, who then went to foundries to get them implemented. The designs became more and more componentized, so that even peripheral designs were licensed. This resulted in the design flexibility we have today in modern equipment like mobile phones-- custom designs can draw on low-level building blocks, including CPUs and peripherals, and implement them all in one chip. Today, I believe we are at the 7400 level of synthetic sentience, where it can already be used for basic machine learning. I think we will be surprised by the corresponding next steps ahead.

Q: Are you pursuing the minimal brain architecture itself as well?

Steve: Yes. So far, the company's efforts have been focused on developing the capability to describe models for synthetic brains and simulate them in real time. As we've talked about, there is much to be done in the tools area.

But we will always keep our eye on the ball: solving sentience by finding the minimal brain architecture necessary for it to arise. This activity will likely push the expansion of the tools organically to accommodate new findings, which will be great for the community and our customers.

Q: Aren't you concerned about the ethical ramifications of this?

Steve: Absolutely, and it's important to get the issues on the table and start that discussion. I started down this path to understand the origin of sentience, and that's still our scientific endeavor.

Should synthetic sentience become highly capable, we would not want it placed in control of mechanisms that could create safety hazards, any more than we would want any automation placed in that position today.

Safety aside, there are other ethical issues. Consider that when we achieve measurable sentience in a model, the essence of that model will be stored on a hard drive as a set of large files that describe all of the nodes (simulated neurons) and their interconnections, fine-tuned by the simulation's experience of the real world. It's easy to replicate the sentience by copying the set of files to another server and using them with a different simulation ID, so the two don't interfere with one another. How would we, as natural sentient beings, feel about being cloned, and if "feelings" were ever established, how would our clones feel about it? Would it be ethical to stop, or even pause, a simulation? Is it ethical to delete those files?
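The unsettling part of this point is how mundane the mechanics are. A minimal sketch, with an invented file layout and naming scheme (nothing here reflects NeuroSynthetica's actual storage format), shows that cloning a trained model amounts to a directory copy under a new simulation ID:

```python
# Hypothetical sketch -- the store layout and file names are invented,
# not NeuroSynthetica's actual format. Illustrates why replication is
# cheap: cloning a trained model is just copying its files to a new ID.

import shutil
import tempfile
from pathlib import Path

def clone_simulation(store, src_id, dst_id):
    """Copy every model file for src_id under a new simulation ID."""
    src_dir = Path(store) / src_id
    dst_dir = Path(store) / dst_id
    if dst_dir.exists():
        raise FileExistsError(f"simulation {dst_id} already exists")
    shutil.copytree(src_dir, dst_dir)   # nodes + interconnection files
    return dst_dir

# Demo with a throwaway directory standing in for the server's store:
store = Path(tempfile.mkdtemp())
(store / "sim-001").mkdir()
(store / "sim-001" / "nodes.bin").write_bytes(b"model state")
clone_simulation(store, "sim-001", "sim-002")
print((store / "sim-002" / "nodes.bin").read_bytes())  # b'model state'
```

The distinct destination ID is what keeps the two simulations from interfering with one another; nothing in the copy itself distinguishes "original" from "clone", which is exactly what gives the ethical questions their force.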

There are many other issues, but those mostly surround the application of AI in general. Synthetic sentience is an evolution of AI's toolbox; it simply uses a different development paradigm to achieve an autonomous system that can perform functions that would be very difficult to codify in discrete lines of application-specific code.

Q: Is NeuroSynthetica profitable?

Steve: Well, we're only just getting started. The development to date has been self-funded, and community engagement is underway. The small community subscriptions will fund general engagement with the community, but not technical support-- that's why we've separated that part out.

If you look at Geoffrey Moore's book, Crossing the Chasm, it describes how technology moves through several stages during its lifecycle. It starts with the Visionary stage, where visionary users pick up the technology because they can envision how it can be used, and then they take off and run with it. A good example here is the world wide web, which seemed like a curiosity in its early days, not a serious commercial concern. The second stage is the Early Adopter stage, where the technology is packaged in a way that, with technical support, users can get working; an SDK is a typical example of this, and so is Linux. The third stage is Mainstream, where the goal is simply to ship the product. Examples are iPhones and pens-- we just purchase and consume them today. The fourth stage is Laggard, where only the most technology-averse users still rely on it. An example here is the typewriter today. On this timeline, AI is in the Early Adopter stage, while Synthetic Sentience is in the Visionary stage. We hope to move Synthetic Sentience to the Early Adopter stage, and that will mean plenty of support for customers. This means we need to be diligent about charging for support, so that we can build the support infrastructure for the Early Adopter market.

Licensing by commercial and governmental Early Adopters will help offset our expenses. It may well turn out that commercial entities license our technologies and do not share how they achieved emergent sentience, or the fidelity of that sentience. That's okay, although it would be nice to find ways to collaborate for everyone's benefit.