Throughout the last ten years there has been a considerable evolution in the area of parallel systems and applications. With the end of the Cold War, the funds available for the construction of specialized supercomputers were substantially reduced. These systems were able to deliver superior floating-point performance, but their cost was several orders of magnitude higher than that of existing mainframes.
Users, however, continued to show interest in this kind of system. In fact, the number of applications that require significant computational capacity has grown. In many cases parallel processing is the only viable solution: otherwise, results would take too long to compute and would lose their usefulness. These applications come from the most diverse areas of knowledge, such as the medical sciences (including genetic engineering), financial modeling, and robotics.
Today there is a more pragmatic attitude toward the development of support systems for parallel applications. The most common hardware architecture consists of a group of workstations or PCs interconnected by a high-performance network. Application programming can be done with a message-passing platform that offers a standard interface, such as the Message Passing Interface (MPI). The existing platforms, however, still have a number of restrictions. For instance, they impose an interaction model in which processes may exchange messages only with processes of the same application, which prevents cooperation among parallel applications.