The product's infrastructure provides a mechanism for adding parallelism and concurrency to applications. The architecture follows the task-based parallelism paradigm: applications interact with the client API by creating jobs composed of independent tasks that run concurrently. Each task is a unit of work executed in a host, which is a separate hosting Windows process. The isolation provided by the hosting process makes this approach attractive for efficiently parallelizing non-thread-safe code and libraries.
The infrastructure also manages how and where the host processes are created and executed. Two modes are supported. In the first mode, only local machine resources are used, and the hosts are child processes of the application. In the second mode, the hosts are spread across separate machines: a coordinator manages a queue of tasks and dispatches them to agents, which start and control the host processes on each machine participating in the cluster.
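The coordinator/agent pattern can be sketched in miniature with a shared task queue. This is a simplified, single-machine analogue, not the product's implementation: threads stand in for agents, and running the task inline stands in for launching a host process on the agent's machine.

```python
import queue
import threading

def agent(task_queue, results):
    # Each agent pulls tasks from the coordinator's queue.
    # A real agent would start a host process per task; here
    # the task callable is simply run inline.
    while True:
        task = task_queue.get()
        if task is None:  # sentinel: no more work
            break
        results.append(task())

# The "coordinator": a FIFO queue of pending tasks.
task_queue = queue.Queue()
results = []

agents = [threading.Thread(target=agent, args=(task_queue, results))
          for _ in range(3)]
for t in agents:
    t.start()

# Enqueue five independent tasks, then one sentinel per agent.
for n in range(5):
    task_queue.put(lambda n=n: n + 1)
for _ in agents:
    task_queue.put(None)
for t in agents:
    t.join()
```

Completion order is nondeterministic because any free agent may claim the next task, which mirrors how a cluster balances work across machines.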
When specific requirements demand it, the API also provides fine-grained control over how the work is distributed and processed. Events and monitoring capabilities are exposed to the application as well, allowing it to react to job progress and to key events in the system.
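An event-driven monitoring surface typically takes the form of callbacks registered on a job object. The `Job` class and `on_task_completed` hook below are hypothetical stand-ins for the client API, shown only to illustrate how an application might subscribe to progress events.

```python
class Job:
    """Hypothetical stand-in for a client-API job object."""

    def __init__(self, tasks):
        self.tasks = tasks
        self.callbacks = []

    def on_task_completed(self, callback):
        # Register a callback fired after each task finishes.
        self.callbacks.append(callback)

    def run(self):
        # Run tasks and notify subscribers of each completion.
        for index, task in enumerate(self.tasks):
            result = task()
            for callback in self.callbacks:
                callback(index, result)

progress = []
job = Job([lambda: "a", lambda: "b"])
job.on_task_completed(lambda idx, res: progress.append((idx, res)))
job.run()
```

With this shape, the application can update a progress bar, log results, or cancel remaining work as completions arrive, instead of blocking until the whole job finishes.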