.. _keyconcepts:

************
Key Concepts
************

*Job*, *Task*, and *Task Environment* are the three key components exposed for extension.

Overview
========

* :ref:`Job`
* :ref:`Task`
* :ref:`Task Environment`

Job
===

A *Job* allows groups of tasks to be managed as a unit. A job consists of one or more
*tasks* and, optionally, a *task environment*. Operations performed on a job, such as
setting the priority or requiring certain conditions to be met before a task can be
executed, affect all the tasks associated with the job.

A job must be submitted before its work can run on a job scheduler. Once a job is
submitted, wait until all of its tasks are completed. None of the job options are
mandatory.

Job Options
"""""""""""

A job exposes many options, all of them optional. The table below lists the options
and their default values.

.. list-table::
   :header-rows: 1
   :widths: 30 50 20

   * - Property
     - Description
     - Default Value
   * - :py:attr:`name`
     - Name of the job displayed in the monitoring applications.
     - Job #
   * - :py:attr:`description`
     - Description of the job displayed in the monitoring applications.
     - :py:obj:`None`
   * - :py:attr:`priority`
     - Priority of the job's tasks. Higher priority tasks are selected for execution
       before lower priority tasks.
     - :py:attr:`Normal`
   * - :py:attr:`task_preconditions`
     - List of conditions that must be met by an agent machine before a task is
       assigned to it.
     - :py:obj:`None`
   * - :py:attr:`agent_selection_preference`
     - The scheduling algorithm to use when selecting which agent should execute a
       task.
     - :py:attr:`Default`
   * - :py:attr:`cancel_on_client_disconnection`
     - Whether to cancel all tasks if the client disconnects. Appropriate for
       interactive jobs.
     - :py:obj:`False`
   * - :py:attr:`cancel_on_task_failure`
     - Whether to cancel all tasks if any task in the job fails. Appropriate if the
       result of the job depends on **all** the tasks completing successfully.
     - :py:obj:`False`
   * - :py:attr:`task_execution_timeout`
     - The number of milliseconds the job's tasks are allowed to run before a task is
       considered to have timed out.
     - -1
   * - :py:attr:`max_interrupted_retry_attempts`
     - Number of times the job's tasks are retried if a task gets *interrupted*.
     - 5

Tips and Best Practices
"""""""""""""""""""""""

* There is a performance overhead when submitting jobs. If possible, packing more
  tasks per job decreases this overhead.

Example
"""""""

Here is an annotated example:

.. literalinclude:: .\..\..\code\KeyConcepts.py
   :language: python
   :linenos:
   :dedent: 4
   :lines: 24-45

Task
====

A *task* is a basic building block. The application executes programs contained
within tasks. The most important components of a task are the :py:meth:`execute`
method and the :py:attr:`result` property.

Every task has a :py:attr:`unique_id` and a task number assigned to it. The unique id
is created when the task is instantiated, and the coordinator assigns the task
number. Tasks can also be given names for easier identification.

Serialization
"""""""""""""

Tasks are serialized by the pickle module when submitted to a job scheduler. When a
task is executed, a copy of the pickled object is deserialized and its
:py:meth:`execute` method is called. The developer must ensure the object can be
serialized by pickle. For more information on Python serialization, see the
`pickle documentation <https://docs.python.org/3/library/pickle.html>`_.

.. note:: When the task is deserialized by pickle, the task's constructor will not be
   called.
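
For illustration, here is a minimal, standalone sketch that uses only the standard
pickle module. ``ExampleTask`` is a hypothetical class, not the library's Task base
class; the sketch only demonstrates that the constructor runs once, at instantiation,
and is skipped when the pickled copy is deserialized.

.. code-block:: python

    import pickle

    class ExampleTask:
        """Hypothetical stand-in for a user-defined task."""

        def __init__(self, data):
            print("constructor called")   # runs only when the task is created
            self.data = data

        def execute(self):
            return self.data * 2

    task = ExampleTask(21)            # prints "constructor called"
    payload = pickle.dumps(task)      # roughly what submission does to the task

    clone = pickle.loads(payload)     # __init__ is NOT called again here
    print(clone.execute())            # 42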

Task Properties
"""""""""""""""

Properties that can be set either before a task is submitted or while it is running:

.. list-table::
   :header-rows: 1
   :widths: 10 70 30

   * - Property
     - Description
     - Default Value
   * - :py:attr:`name`
     - Name of the task displayed in the monitoring applications.
     - Task's type name
   * - :py:attr:`result`
     - Result of the task after the task executes. Result objects must be
       serializable. If the task encounters an error, the task's result is the error
       message.
     - :py:obj:`None`
   * - :py:attr:`properties`
     - Dictionary of properties returned after the task executes. The items in this
       collection must be serializable types.
     - { }

Read-only properties set by the infrastructure:

.. list-table::
   :header-rows: 1
   :widths: 25 75

   * - Property
     - Description
   * - :py:attr:`standard_output`
     - The output of the process while it was executing the task.
   * - :py:attr:`standard_error`
     - The error stream of the process while it was executing the task.
   * - :py:attr:`unique_id`
     - Random GUID used to identify the task.
   * - :py:attr:`task_status`
     - The current status of the task. This is updated when the task's state changes
       in the job scheduler.
   * - :py:attr:`task_cancellation_message`
     - If the task is canceled, this property gives a human-readable reason why.
   * - :py:attr:`task_cancellation_reason`
     - If the task is canceled, a :py:class:`TaskCancellationReason` that explains
       why the task was canceled.

Task States
"""""""""""

A task can transition through several different states during its lifetime. Task
states are represented by the :py:class:`TaskStatus` enum.

All tasks start in the :py:attr:`NOT_SUBMITTED` state. After the task is submitted,
the state changes to :py:attr:`SUBMITTED` while the task sits in the coordinator
queue. When a task is assigned to an agent for execution, its status changes to
:py:attr:`ASSIGNED` and then to :py:attr:`RUNNING` once the agent starts running the
task. If no errors occur during execution, the task status changes to
:py:attr:`COMPLETED`. If an uncaught exception is encountered, the task goes into the
:py:attr:`FAILED` state.

Below is a state transition diagram and a full table of task statuses.

.. image:: ..\\..\\..\\..\\..\\Documentation\\Media\\TaskStatus.png
   :align: center
   :alt: TaskStatus

Possible task statuses:

.. list-table::
   :header-rows: 1
   :widths: 18 54 28

   * - Status
     - Description
     - Transition
   * - :py:attr:`NOT_SUBMITTED`
     - Task is not submitted yet.
     - Always the initial state
   * - :py:attr:`SUBMITTED`
     - Task is submitted but not assigned yet.
     - Always the state after the task is submitted
   * - :py:attr:`ASSIGNED`
     - Task is assigned but not running yet.
     - Transition state. Next state is RUNNING
   * - :py:attr:`RUNNING`
     - Task is currently running.
     - Expected
   * - :py:attr:`CANCELING`
     - Task is in the process of being canceled.
     - Transition state
   * - :py:attr:`INTERRUPTED`
     - Task encountered a system exception, for example the agent disconnected or the
       host process exited unexpectedly.
     - End state
   * - :py:attr:`CANCELED`
     - Task is canceled.
     - End state
   * - :py:attr:`ENVIRONMENT_ERROR`
     - Task failed to run because of an uncaught exception in the task environment
       setup.
     - End state
   * - :py:attr:`COMPLETED`
     - Task completed successfully.
     - End state
   * - :py:attr:`TIMED_OUT`
     - Task timed out because it ran longer than the specified
       :py:attr:`task_execution_timeout` value.
     - End state
   * - :py:attr:`FAILED`
     - Task failed because of an uncaught exception while running the task.
     - End state
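
The snippet below is a small, self-contained model of the status table above, written
with Python's standard :py:mod:`enum` module. The member names mirror the documented
:py:class:`TaskStatus` values, but the ``END_STATES`` set and the ``is_finished``
helper are illustrative assumptions, not part of the library.

.. code-block:: python

    from enum import Enum, auto

    class TaskStatus(Enum):
        """Illustrative model of the documented task states."""
        NOT_SUBMITTED = auto()      # initial state
        SUBMITTED = auto()          # waiting in the coordinator queue
        ASSIGNED = auto()           # handed to an agent, not running yet
        RUNNING = auto()
        CANCELING = auto()          # transition state while canceling
        INTERRUPTED = auto()        # system exception (agent disconnect, host exit)
        CANCELED = auto()
        ENVIRONMENT_ERROR = auto()  # uncaught exception in task environment setup
        COMPLETED = auto()
        TIMED_OUT = auto()          # exceeded task_execution_timeout
        FAILED = auto()             # uncaught exception while running the task

    # End states listed in the table above.
    END_STATES = {
        TaskStatus.INTERRUPTED,
        TaskStatus.CANCELED,
        TaskStatus.ENVIRONMENT_ERROR,
        TaskStatus.COMPLETED,
        TaskStatus.TIMED_OUT,
        TaskStatus.FAILED,
    }

    def is_finished(status: TaskStatus) -> bool:
        """Return True once the task has reached a terminal state."""
        return status in END_STATES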

Tips and Best Practices
"""""""""""""""""""""""

* How should tasks be split? Splitting the workload into more units often yields
  better performance. A common problem in parallel applications is that the workload
  is split more coarsely than is optimal; in such cases, the overall runtime becomes
  the time it takes for the slowest task to complete. As with any performance tip:
  measure, measure, measure.
* Design tasks so they are decoupled from the application code as much as possible.
  Try not to store application logic in the Task class. This helps when refactoring
  and testing code.
* If certain fields on the task instance are not used, ignore or remove them to
  reduce the size of the object being sent.

Example
"""""""

Here is an example of a simple task:

.. literalinclude:: .\..\..\code\KeyConcepts.py
   :language: python
   :linenos:
   :lines: 6-12

Task Environment
================

A *task environment* provides a single set of functions performed once per host:
once before any of the tasks associated with the task environment are executed, and
once before the host is recycled. Although not strictly required, a task environment
is crucial for optimizing cases where a common operation takes significant time. A
very common scenario is performing an expensive operation before a task executes,
such as loading required data, setting up and running an application, or simply
starting the host environment.

.. note:: When a task environment is not specified, a default task environment,
   common to all tasks in the job, is used.

Task Environment Lifetime
"""""""""""""""""""""""""

A task environment is set up only once per host process. After a process sets up a
task environment, the process remains idle until a task specifying that task
environment is assigned to it for execution. In other words, a host runs one task
environment and multiple tasks.

After each task is executed, the host checks whether any of the task environment's
:py:attr:`recycle_settings` conditions have been satisfied. If any one of the
conditions has been met, the process is *recycled*. The Agent Tray Application
exposes a user interface for configuring an agent's host recycle settings. The
values set in the UI are overridden if a submitted job's :py:class:`TaskEnvironment`
has its :py:attr:`recycle_settings` set programmatically. A task environment can
also be recycled if the agent needs to free up a process to run an incoming task
that has a different task environment.

Identification
""""""""""""""

To determine whether two task environment references are equal, a few properties of
the task environment are checked. A task environment is uniquely identified by the
combination of its :py:attr:`unique_id` and :py:attr:`additional_id` properties. Two
task environment identifications are equal *if and only if both* the id and
additional_id properties are equal.

Here are a few examples to demonstrate. Below is code for a number of task
environments:

.. literalinclude:: .\..\..\code\ProgrammersGuide\TaskEnvironmentID.py
   :language: python
   :linenos:
   :lines: 9-43

Imagine there is a hypothetical method called task_environment_is_equal, which
compares whether two task environments have the same identification reference. The
results are below, with explanations in the comments.

.. literalinclude:: .\..\..\code\ProgrammersGuide\TaskEnvironmentID.py
   :language: python
   :linenos:
   :dedent: 4
   :lines: 50-64
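
As a rough, standalone illustration of the identification rule (not the library's
implementation), the sketch below uses a hypothetical ``EnvId`` dataclass and a
``same_environment`` helper to show that both identifiers must match:

.. code-block:: python

    import uuid
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class EnvId:
        """Hypothetical stand-in for a task environment's identity fields."""
        unique_id: uuid.UUID = field(default_factory=uuid.uuid4)
        additional_id: Optional[str] = None

    def same_environment(a: EnvId, b: EnvId) -> bool:
        # Equal if and only if BOTH unique_id and additional_id match.
        return a.unique_id == b.unique_id and a.additional_id == b.additional_id

    shared = uuid.uuid4()
    print(same_environment(EnvId(shared, "render"), EnvId(shared, "render")))  # True
    print(same_environment(EnvId(shared, "render"), EnvId(shared, None)))      # False: additional_id differs
    print(same_environment(EnvId(), EnvId()))                                  # False: unique_id differs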

Serialization
"""""""""""""

As with tasks, the pickle serializer must be able to serialize user-defined
TaskEnvironments.

Properties
""""""""""

.. list-table::
   :header-rows: 1
   :widths: 20 45 35

   * - Property
     - Description
     - Default value
   * - :py:attr:`unique_id`
     - One of the properties used to determine whether two task environment
       references are equal. See the Identification section.
     - Unique GUID generated from :py:meth:`uuid.uuid4()`
   * - :py:attr:`additional_id`
     - One of the properties used to determine whether two task environment
       references are equal. See the Identification section.
     - :py:obj:`None`
   * - :py:attr:`host_architecture`
     - Whether to host the task in a 32-bit or 64-bit process. :py:attr:`Any`
       indicates that the task can execute in either a 32-bit or a 64-bit process.
     - :py:attr:`x64` if the client application is 64-bit; otherwise :py:attr:`x32`
       if the client application is 32-bit.
   * - :py:attr:`host_recycle_settings`
     - The strategies used to determine when host processes are recycled (shut down
       before another is started).
     - If not specified, the default :py:class:`HostRecycleSettings` constructor sets
       an infinite idle timeout and an unspecified fixed number of tasks.

Example
"""""""

Here is a simple example:

.. literalinclude:: .\..\..\code\KeyConcepts.py
   :language: python
   :linenos:
   :lines: 15-20

See Also
========

Reference
"""""""""

* :py:class:`Job`
* :py:class:`TaskEnvironment`

Other Resources
"""""""""""""""

* `Pickle documentation <https://docs.python.org/3/library/pickle.html>`_