The NeuroSynthetica® Workbench is an interactive development, command, control, and monitoring (C2M) environment used to engineer synthetic brain models, deploy them to simulation servers, and monitor and debug them in real time.
Workbench projects select a Model and a Server Set on which to deploy it. Projects also select the simulation paradigms to be employed at runtime.
Running simulations may be monitored with Workbench visualization tools, as well as with configurable dashboards that expose the performance of the simulation at every scale, from the entire neural network or an individual server down to the time-series behavior of individual neurons.
The NeuroSynthetica SOMA™ Modeling Language is similar to a hardware description language such as VHDL or Verilog. It is used to describe models containing Nodes with their receptors and signals, arrays of nodes, objects (similar to structures in a programming language), arrays of objects (fabrics), and I/O channels.
Using the Workbench, models can be quickly described in the NeuroSynthetica SOMA™ modeling language and compiled, making them ready to be built on a target server set. The resulting compilation can be visualized both as a graphical schematic and as a graphical 3D rendering.
Nodes, receptors, signals, and objects are instantiated from classes defined in the language; a class specifies the operational parameters of the node or receptor, or the constituent elements of the object. Once a class is defined, single nodes and objects, as well as 1-, 2-, and 3-dimensional arrays of nodes and objects, may be instantiated in the model from it.
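SOMA's actual syntax is not reproduced in this overview; the following sketch uses invented, SOMA-flavored keywords (the class name, parameter names, comment style, and array notation are all hypothetical) purely to illustrate the define-a-class, then-instantiate pattern described above:

    // Hypothetical, SOMA-flavored sketch; invented syntax, for illustration only.
    // A node class specifies the operational parameters of its instances.
    NODE CLASS Pyramidal
        THRESHOLD 0.65     // activation threshold (assumed parameter name)
        DECAY     5ms      // decay time constant (assumed parameter name)
    END

    NODE Gate : Pyramidal              // a single node instance
    NODE Retina[64][64] : Pyramidal    // a 2-dimensional array of nodes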
Object classes are similar to function definitions in a programming language; in addition to their defined elements, they support executable statements used to wire up their constituent elements. The language supports modern programming constructs such as FOR, WHILE, IF, variables and expressions, and assignment statements. The ROUTE statement connects the output signal of an afferent node to the input of an efferent node. During neurogenesis (after compilation), these statements execute with high performance, creating and wiring up nodes on the server set over a gigabit network at a rate of over 20,000 links per second.
As in traditional application programming in a high-level language, NeuroSynthetica's SOMA™ modeling language allows nodes and other objects to be defined without explicitly declaring their coordinates in the 3D model space. Optional statements give the designer the flexibility to place objects at coordinates relative to an object class's origin, defining a precise 3D structure that may be replicated in object arrays to construct neurocomputational fabrics, as sketched below.
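Again in invented, SOMA-flavored syntax (the AT placement clause, the arrow notation in ROUTE, and the loop form are assumptions, not documented SOMA), an object class might place and wire up its elements like this:

    // Hypothetical sketch: an object class whose executable statements place
    // elements relative to the class origin and wire them with ROUTE.
    OBJECT CLASS Column
        NODE In[100]  : Pyramidal AT (0, 0, 0)     // relative placement (assumed syntax)
        NODE Out[100] : Pyramidal AT (0, 0, 10)

        // Connect each afferent node's output signal to the corresponding
        // efferent node's input; these statements execute during neurogenesis.
        FOR i = 0 TO 99
            ROUTE In[i] -> Out[i]
        END
    END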
The synthetic brain development cycle is interactive and rapid. First, a project is declared, defining the server set, simulation type, model name, and initial dashboard. Next, a model definition source file is created, which may include predefined class definitions for nodes, receptors, and signals. Then objects (similar to structures, but also containing the active code described above) are defined, including the top-level object, analogous to the main() function in C++. One-click compilation nominally takes under one second, and the resulting schematic and 3D layout views are readily available from the compiled model.
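Continuing the same hypothetical syntax, the top-level object (the model's entry point, in the sense that main() is a C++ program's) might do no more than instantiate a fabric of previously defined objects:

    // Hypothetical sketch: a top-level object instantiating a 2-dimensional
    // fabric (array of objects) from the Column class sketched earlier.
    OBJECT CLASS Top
        OBJECT Cortex[8][8] : Column
    END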
If syntax errors arise, the developer can correct them in the interactive environment and recompile, rapidly converging on a successful model build. Once the model compiles successfully, it may be deployed to the server set with one click.
The user can use the built-in text editor to enter and edit SOMA language statements in the model source files, or the Workbench can be configured to launch the user's preferred text editor instead.
The execution container for a running model on a server set is a Simulation, which is defined and edited in the Workbench through a dialog-based system that supports a library of user-defined simulation types.
Simulation parameters are installed on the server set when a simulation is deployed, but parameters such as the epoch duration, plasticity, and homeostatic regulation may be adjusted while the simulation is live, allowing the designer to immediately see the run-time effects of different simulation settings.
Simulation parameters include the epoch timebase (1 ms to 1000 ms), the server's epoch utilization and activation limits, and server plasticity and regulation parameters.
While the model's source code describes the functional aspects of the model, the layout is handled by the compiler (unless overridden by the designer). The Model Layout View displays the model in 3D Model Space, and allows the user to click on objects to zoom into and out of their internal structure.
Stimulated and activated nodes in a running system are highlighted in real time in the layout view according to the visualization properties of the node classes used. For example, in this display, a camera's input stream is shown activating nodes in the left-most array on Server0. Those nodes stimulate nodes in the middle array on Server1, which in turn stimulate nodes in a third (right-most) array located on Server2.
The Model Schematic View displays the model in schematic form, allowing the user to see the consumer-producer relationships of nodes, compound objects, and even arrays of nodes and objects. The user can click on high-level objects to drill down and explore the schematic detail within their defined class.
When a model has been compiled successfully, it may be deployed to all the servers in a server set with a single click (see the Generate button in the pictures). Generation can take seconds, minutes, or hours, depending on the size and complexity of the model.
Simulations are controlled from the Workbench Projects tab. When a project is opened, its simulation may be assigned to a server set, loaded and unloaded on the server set, and started, paused, or stopped.
Multiple users may connect to the same server set with Workbench and collaboratively manage and monitor the running simulation, each potentially with their own customized dashboard views to meet their role's needs.
Dashboards have a grid-style layout that can be populated with widgets displaying bar graphs, strip charts, metric values, messages, text logs, and a 3D activated-node display showing a heat map of activated nodes in the running model. Widgets may be configured to source their data from any of many sources (see figure).
Other widgets, including an audio synthesizer and an audio spectral input widget, can interact with the running model's I/O channels without requiring a robot to be connected to the model. The user may expand on this idea by implementing custom widgets that use the plugin interface to perform I/O programmatically with the running simulation on the server set.
Widgets may draw on any of several real-time data sources, enumerated in the accompanying figure.
NeuroSynthetica Workbench is extensible with plugins: simple Linux ELF executables built using standard Linux tools. Each plugin may be loaded by Workbench at startup or when a specific project is opened. Plugins may be graphical or headless, and they call the NeuroSynthetica Workbench API, documented on the Customer Portal. Source code for sample plugins is provided as a starting point for producing new widgets that either serve as data sources or consume them.
Workbench enables the user to define a set of 1 to 255 servers that will take part in a simulation, called the server set. Any number of server sets may be defined, and projects can be rehosted on different server sets to compare their operational performance characteristics. Servers may be cloud-hosted, VM-hosted, or hosted on bare metal.
NeuroSynthetica Workbench supports user login, so that all of the user's Sentience Engine™ credentials may be associated with the user. Server set credentials are stored as SHA3-256 hashes and are transmitted over the network with a server-supplied salt for each new session.
When simulations are hosted on a shared server, this gives operators confidence that only authorized persons will interact with their deployed simulations.