What sounds like an easy and straightforward question actually merits (in our humble opinion) a few additional considerations and thoughts. Please bear with us:
The SEAforth chip presents a unique architecture, very much unlike anything available today. While it is possible to run conventional, high-level languages on the SEAforth platform, such an approach forces a return to sequential threaded thinking, slow interpreter-style software, and the attachment of large external program memory arrays. This discounts the unique advantages of this parallel computing platform.
By applying common, sequential threaded thinking, the performance of the SEAforth chip might actually look quite unfavorable compared to other platforms, simply because most current hardware architectures offer advanced, complicated memory-access features to compensate for the shortcomings of inherently sequential tasking.
By treating the SEAforth architecture as familiar and common, albeit more powerful, people are doing themselves a disservice. It is important to stop thinking of the SEAforth platform as a 'nail' simply because hammers have been such familiar, beloved and even fancy tools for so long.
True leaps forward in processor performance require fresh thinking as well as the willingness to let go of familiar approaches and views. Only then will the SEAforth architecture unleash its full power and become the 'rocket in a world of tractors'. Even the architects who built the SEAforth platform have only scratched the surface of what designs are possible.
Having said all this, we are actively working on techniques to marry sequential language elements with advanced parallel computing. The simple reason is that a sequential, external program can hold the deep state information and complex algorithms needed to implement applications such as TCP/IP stacks and other complex protocols, language compilers, event-rich human interfaces, and other demanding application areas.
This method of execution has been dubbed an SPS, or Sequential Processing System. Multitasking can be carried out in this model, but it is the classic, time-multiplexed form we are all familiar with from single-core processors. IntellaSys is much more keenly interested in developing highly leveraged methods and languages for rapidly and accurately defining, managing and deploying large cooperative multi-processor application solutions.
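To make the distinction concrete, here is a small Python sketch (ours, purely illustrative, not SEAforth or SPS code) of the classic time-multiplexed model: a single scheduler rotates through tasks round-robin, giving each one a time slice before moving to the next. Everything still happens one step at a time on one "core".

```python
from collections import deque

# Each "task" is a generator; yielding hands control back to the
# scheduler, which rotates through the ready queue round-robin.
# This is the classic single-core, time-multiplexed multitasking model.
def scheduler(tasks):
    ready = deque(tasks)
    trace = []                         # which task ran in each time slice
    while ready:
        task = ready.popleft()
        try:
            trace.append(next(task))   # run one time slice
            ready.append(task)         # requeue behind the others
        except StopIteration:
            pass                       # task finished; drop it
    return trace

def worker(name, slices):
    for i in range(slices):
        yield f"{name}:{i}"

print(scheduler([worker("A", 2), worker("B", 3)]))
# → ['A:0', 'B:0', 'A:1', 'B:1', 'B:2']
```

Note that no matter how many tasks are queued, only one ever makes progress at a time; that is precisely the limitation the cooperative multi-processor approach avoids.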
Any high-level, sequentially executed language is potentially appropriate for adaptation as a SEAforth SPS. Some will fare better than others, depending on how well they map onto the features and limitations of the SEAforth architecture. For this reason, the high-level SPSs we are exploring at present are all some form of Forth, or Forth-like interpreters. Sequential function should not be viewed as a justification for the SEAforth chip; rather, it is a potential requirement in order to facilitate parallel function.
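For readers unfamiliar with how compact a Forth-like interpreter can be, the following Python sketch shows the general shape: a data stack plus a dictionary of named words. The word set and structure here are our own minimal illustration, not any IntellaSys interpreter.

```python
# A minimal Forth-like interpreter: a data stack and a dictionary
# mapping word names to actions. Unknown tokens are pushed as numbers.
def make_interpreter():
    stack = []
    words = {
        "dup":  lambda: stack.append(stack[-1]),
        "drop": lambda: stack.pop(),
        "+":    lambda: stack.append(stack.pop() + stack.pop()),
        "*":    lambda: stack.append(stack.pop() * stack.pop()),
    }

    def interpret(source):
        """Execute a whitespace-separated stream of words and literals."""
        for token in source.split():
            if token in words:
                words[token]()
            else:
                stack.append(int(token))  # anything unknown is a literal
        return stack

    return interpret

interpret = make_interpreter()
interpret("2 3 + dup *")   # (2+3) squared = 25 left on the stack
```

The attraction for an SPS is exactly this simplicity: the interpreter itself is tiny, so it maps well onto the small local memories of the SEAforth nodes while the program text can live in external memory.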
In addition to exploring multiple SPS technologies, we are exploring many promising architectures for defining, managing and deploying multi-processor application solutions. Nearly every new application area is an opportunity to do groundbreaking work in this new field if you keep your mind open and work to bring the strengths of the SEAforth architecture to bear upon the problem.
Among the designs we are currently working on is one that allows RAM data required by several parallel execution engines to be fetched only once and then distributed to all the nodes that need them, just before they need them, using a cooperative architecture organized around a simple grammar or algebra. The benefits of such a system are twofold: memory bandwidth is optimized, and each node's execution time can be balanced so that all are busy most of the time. An important added benefit is the ability to validate the functional cooperation of all nodes as a consequence of the structure of the algebra.
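The fetch-once idea can be sketched in a few lines of Python. This is a conceptual model of the data flow only (the names, structures, and the idea of per-node "subscriptions" are our illustration, not the actual design or its algebra): each external RAM word is read a single time and then handed to every node that needs it.

```python
from collections import defaultdict

def distribute(ram, subscriptions):
    """Fetch each subscribed RAM word once, then copy it to every
    node that needs it. subscriptions maps address -> list of node ids."""
    fetch_count = defaultdict(int)       # how often each address was read
    inbox = defaultdict(list)            # per-node input queues
    for addr, nodes in subscriptions.items():
        word = ram[addr]                 # single fetch from external memory
        fetch_count[addr] += 1
        for node in nodes:
            inbox[node].append(word)     # distributed, not re-fetched
    return inbox, fetch_count

ram = {0: 10, 1: 20, 2: 30}
inbox, fetches = distribute(ram, {0: ["n1", "n2"], 2: ["n2", "n3"]})
# Each address is fetched exactly once, even when two nodes need it.
```

In the real design the interesting work is in the scheduling, deciding *when* each word reaches each node so that execution stays balanced, which is where the grammar or algebra mentioned above comes in.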
We utilize port execution streams to boot the parts, which has given us the ability to fill all of local RAM, as well as to initialize most node registers, without any help from, or prejudice to, the contents of a node's ROM. The edge nodes in charge of controlling a boot protocol are the only nodes whose ROM is affected. All other nodes are free to devote ROM to augmenting application code.
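Conceptually, a port execution stream works like this Python sketch: a neighbor streams instructions into a node's port, and the node executes them as they arrive, writing the payload into its local RAM and registers. The opcodes and encoding below are invented for illustration and bear no relation to the actual SEAforth instruction set; the point is only that the node's ROM is never consulted.

```python
def boot_node(stream):
    """Execute a boot stream as if it arrived through a comm port.
    The node has no program of its own; it simply runs what it receives."""
    ram, registers = {}, {}
    it = iter(stream)
    for op in it:
        if op == "store":            # next two items: RAM address, value
            addr = next(it)
            ram[addr] = next(it)
        elif op == "setreg":         # next two items: register name, value
            reg = next(it)
            registers[reg] = next(it)
        elif op == "done":           # end of boot stream
            break
    return ram, registers

stream = ["store", 0, 42, "store", 1, 43, "setreg", "B", 0x100, "done"]
ram, regs = boot_node(stream)
```

Because the entire boot image arrives through the port, the same mechanism can load any code into any interior node, which is what leaves the non-edge ROMs free for application support.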
Other architectures on the table can multiplex the utilization of memory among several demands that are independent and asynchronous, but which may have specific priority or bandwidth requirements. People are continually coming up with new ideas and strategies for utilizing port execution.
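The arbitration half of that idea can be sketched simply: pending requests from independent requesters go into a priority queue, and the arbiter grants one memory cycle at a time to the highest-priority requester. This is an entirely generic illustration of priority arbitration (the requester names are made up), not any specific IntellaSys design.

```python
import heapq

def arbitrate(requests):
    """Grant memory cycles one at a time by priority.
    requests: list of (priority, requester) — lower number wins."""
    heap = list(requests)
    heapq.heapify(heap)
    grant_order = []
    while heap:
        _, requester = heapq.heappop(heap)
        grant_order.append(requester)    # one memory cycle granted
    return grant_order

order = arbitrate([(2, "audio"), (0, "video"), (1, "network")])
# → ['video', 'network', 'audio']
```

A real arbiter would also have to honor bandwidth guarantees, for example by boosting the priority of a requester that has been starved, but the core idea is the same serialization of asynchronous demands.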
No previous hardware architecture has made it so easy, or so necessary, to write multiprocessor solutions with such minuscule overhead and latency. Focusing on only one processor at a time is like a queen ant sending each worker out in a different direction to work alone. If I could give only one piece of advice, it would be to always think in terms of how to organize the problem so that it can be shared among cooperating peers.
Don't expect any 'quick-and-dirty' ports of common languages. They are not going to get you anywhere fast.
DO EXPECT, however, some new and different approaches in due time that will leverage parallel computing AND high level style coding.
Thanks for your continued support!