Scopira
20080306
The Scopira Agents API provides an all-inclusive API for parallel and distributed computing. The API is object-oriented and scalable, allowing for ease of development. The implementation is built into the Scopira library and is always available (even if a network of machines is not).
The first step in parallelizing your algorithm with Agents is converting your algorithm into task objects. These are simply classes you write that descend from scopira::agent::agent_task_i and implement the run() method. The run() method has the following interface:
class scopira::agent::agent_task_i {
  ...
  virtual int run(scopira::agent::task_context &ctx) = 0;
  ...
};
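For example, a minimal concrete task might look like the sketch below. Only the base class and the run() signature shown above come from the Scopira API; the class name, the omitted headers, and the meaning of the return value are assumptions, so check the Scopira documentation for the specifics.

// A minimal sketch of a task type (illustrative only; the appropriate
// Scopira agent headers must be included, and the run() return value
// convention should be checked against the Scopira documentation).
class my_compute_task : public scopira::agent::agent_task_i
{
  public:
    virtual int run(scopira::agent::task_context &ctx)
    {
      // ... perform this task's portion of the work here,
      // using ctx to communicate with the other tasks ...
      return 0;   // assumed: 0 signals normal completion
    }
};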
Through the scopira::agent::task_context interface, your tasks can communicate with other task instances, effectively cooperating to solve a larger problem in parallel.
Note that you must register all your task class types. Place something like the following in your .cpp file for every distinct task type that you have:
#include <scopira/core/register.h>

static scopira::core::register_flow<slave_conway_task> r1("slave_conway_task");
The question of how to decompose and map a problem/algorithm must be addressed in any parallel implementation.
In this case, there is a distinct master-slave relationship. The master process (which always has a groupid of 1) spawns one or more worker tasks to perform the work. The master task only does management and administrative work - all algorithmic work is done in the slaves. Therefore, there will always be at least one slave.
Each slave asks the master for a unit of work. When it completes the work, it submits the results back to the master and asks for more. The master, in turn, simply doles out work units until there are no more, at which point it tells all the slaves to terminate.
You typically have one task type for the master, and another for the slaves (they can be merged, but it's cleaner to separate the tasks), as sketched below.
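As a rough illustration, the two run() methods under this scheme could be structured as follows. The helper functions (have_more_work(), request_work(), and so on) are hypothetical placeholders for whatever messaging calls scopira::agent::task_context actually provides; they are not Scopira API names, and the class names are likewise illustrative.

// Hypothetical helpers standing in for whatever messaging calls
// scopira::agent::task_context actually provides (illustrative only).
bool have_more_work();
int  wait_for_work_request(scopira::agent::task_context &ctx);
void send_work_unit(scopira::agent::task_context &ctx, int slave_id);
void broadcast_terminate(scopira::agent::task_context &ctx);
void request_work(scopira::agent::task_context &ctx);
bool receive_work_unit(scopira::agent::task_context &ctx);  // false when told to terminate
void compute_unit();                                        // the actual algorithmic work
void send_result(scopira::agent::task_context &ctx);

class master_task : public scopira::agent::agent_task_i
{
  public:
    virtual int run(scopira::agent::task_context &ctx)
    {
      // purely administrative: dole out work units until none remain
      while (have_more_work()) {
        int slave = wait_for_work_request(ctx);  // a slave asks for work
        send_work_unit(ctx, slave);              // reply with one unit
      }
      broadcast_terminate(ctx);                  // tell all slaves to quit
      return 0;
    }
};

class slave_task : public scopira::agent::agent_task_i
{
  public:
    virtual int run(scopira::agent::task_context &ctx)
    {
      // ask for work, do it, submit the result, repeat until terminated
      while (true) {
        request_work(ctx);              // ask the master for a unit
        if (!receive_work_unit(ctx))    // the master says there is no more work
          break;
        compute_unit();                 // do the algorithmic work
        send_result(ctx);               // submit the result back to the master
      }
      return 0;
    }
};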
The agent task will typically: