D.2.1 The Task Dispatching Model
{AI95-00321-01}
[The task dispatching model specifies task scheduling, based on conceptual
priority-ordered ready queues.]
Static Semantics
{AI95-00355-01}
The following language-defined library package exists:
package Ada.Dispatching is
   pragma Pure (Dispatching);
   Dispatching_Policy_Error : exception;
end Ada.Dispatching;
Dispatching serves as the parent of other language-defined
library units concerned with task dispatching.
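As an illustration (a minimal sketch, not normative text), a program can name one of those children, such as Ada.Dispatching.Round_Robin (see D.2.5), and handle Dispatching_Policy_Error; the procedure name Quantum_Demo and the chosen quantum are invented for this example:

with System;
with Ada.Real_Time;
with Ada.Text_IO;
with Ada.Dispatching.Round_Robin;
procedure Quantum_Demo is
begin
   --  Only meaningful where Round_Robin_Within_Priorities governs this
   --  priority level; Is_Round_Robin reports whether it does.
   if Ada.Dispatching.Round_Robin.Is_Round_Robin (System.Default_Priority) then
      Ada.Dispatching.Round_Robin.Set_Quantum
        (Pri     => System.Default_Priority,
         Quantum => Ada.Real_Time.Milliseconds (10));
   end if;
exception
   when Ada.Dispatching.Dispatching_Policy_Error =>
      Ada.Text_IO.Put_Line ("operation not permitted by the policy");
end Quantum_Demo;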
Dynamic Semantics
{AI95-00321-01}
A task can become a running task only if it is ready (see 9) and the
execution resources required by that task are available. Processors
are allocated to tasks based on each task's active priority.
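As a hedged illustration (the task names are invented), priorities can be given with a pragma; under the priority-based model, whenever both tasks below are ready, the processor goes to the one with the higher active priority:

with System;
procedure Allocation_Demo is
   task High_Worker is
      pragma Priority (System.Default_Priority + 1);
   end High_Worker;
   task Low_Worker is
      pragma Priority (System.Default_Priority - 1);
   end Low_Worker;
   task body High_Worker is
   begin
      null;  --  whenever both tasks are ready, High_Worker is selected first
   end High_Worker;
   task body Low_Worker is
   begin
      null;
   end Low_Worker;
begin
   null;
end Allocation_Demo;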
It is implementation defined whether, on a multiprocessor,
a task that is waiting for access to a protected object keeps its processor
busy.
Implementation defined: Whether, on a
multiprocessor, a task that is waiting for access to a protected object
keeps its processor busy.
{AI95-00321-01}
Task dispatching
is the process by which one ready task is selected for execution on a
processor. This selection is done at certain points during the execution
of a task called
task dispatching points. A task reaches a task
dispatching point whenever it becomes blocked, and when it terminates.
[Other task dispatching points are defined throughout this Annex for
specific policies.]
Ramification: On multiprocessor systems,
more than one task can be chosen, at the same time, for execution on
more than one processor, as explained below.
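A small sketch (invented names) showing two of these dispatching points: the entry call may block the caller, and both tasks eventually reach the dispatching point at termination:

procedure Dispatch_Points is
   task Server is
      entry Request;
   end Server;
   task body Server is
   begin
      accept Request;   --  Server blocks here until a caller arrives
   end Server;
begin
   Server.Request;      --  dispatching point: the caller may block here
end Dispatch_Points;    --  dispatching point: the tasks terminate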
{AI95-00321-01}
Task dispatching
policies are specified in terms of conceptual
ready queues
and task states. A ready queue is an ordered list of ready tasks. The
first position in a queue is called the
head of the queue, and
the last position is called the
tail of the queue. A task is
ready
if it is in a ready queue, or if it is running. Each processor has one
ready queue for each priority value. At any instant, each ready queue
of a processor contains exactly the set of tasks of that priority that
are ready for execution on that processor, but are not running on any
processor; that is, those tasks that are ready, are not running on any
processor, and can be executed using that processor and other available
resources. A task can be on the ready queues of more than one processor.
Discussion: The core language defines
a ready task as one that is not blocked. Here we refine this definition
and talk about ready queues.
{AI95-00321-01}
Each processor also has one running task, which is the task currently
being executed by that processor. Whenever a task running on a processor
reaches a task dispatching point it goes back to one or more ready queues;
a task (possibly the same task) is then selected to run on that processor.
The task selected is the one at the head of the highest priority nonempty
ready queue; this task is then removed from all ready queues to which
it belongs.
Discussion: There is always at least
one task to run, if we count the idle task.
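The following sketch models the conceptual queues for a single processor. Nothing here is required of an implementation, and all names (Ready_Queue_Model, Task_Ref, Make_Ready, Select_Next) are invented; it exists only to make the selection rule concrete: the task chosen is the head of the highest priority nonempty queue, and it is removed from the queues when selected.

with Ada.Containers.Doubly_Linked_Lists;
with System;
package Ready_Queue_Model is
   type Task_Ref is access all Integer;  --  stand-in for a task descriptor
   package Task_Lists is new Ada.Containers.Doubly_Linked_Lists (Task_Ref);
   --  One conceptual ready queue per priority value.
   Queues : array (System.Any_Priority) of Task_Lists.List;
   --  A task that becomes ready joins at the tail of the queue for its
   --  active priority.
   procedure Make_Ready (T : Task_Ref; Pri : System.Any_Priority);
   --  Remove and return the task at the head of the highest priority
   --  nonempty queue; null models running the idle task.
   function Select_Next return Task_Ref;
end Ready_Queue_Model;

package body Ready_Queue_Model is
   procedure Make_Ready (T : Task_Ref; Pri : System.Any_Priority) is
   begin
      Queues (Pri).Append (T);            --  tasks join at the tail
   end Make_Ready;

   function Select_Next return Task_Ref is
      T : Task_Ref;
   begin
      for P in reverse System.Any_Priority loop
         if not Queues (P).Is_Empty then
            T := Queues (P).First_Element;   --  head of the queue
            Queues (P).Delete_First;         --  removed once selected
            return T;
         end if;
      end loop;
      return null;                           --  only the idle task remains
   end Select_Next;
end Ready_Queue_Model;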
Implementation Permissions
{AI95-00321-01}
An implementation is allowed to define additional resources as execution
resources, and to define the corresponding allocation policies for them.
Such resources may have an implementation-defined effect on task dispatching.
Implementation defined: The effect of
implementation-defined execution resources on task dispatching.
An implementation may place implementation-defined
restrictions on tasks whose active priority is in the Interrupt_Priority
range.
Ramification: For example, on some operating
systems, it might be necessary to disallow them altogether. This permission
applies to tasks whose priority is set to interrupt level for any reason:
via a pragma, via a call to Dynamic_Priorities.Set_Priority, or via priority
inheritance.
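For instance (a sketch with invented names), the pragma route to an interrupt-level priority looks like this; under this permission an implementation may restrict or reject such a task:

with System;
procedure Interrupt_Level_Demo is
   task Handler_Task is
      --  Active priority in the Interrupt_Priority range.
      pragma Interrupt_Priority (System.Interrupt_Priority'First);
   end Handler_Task;
   task body Handler_Task is
   begin
      null;
   end Handler_Task;
begin
   null;
end Interrupt_Level_Demo;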
{AI95-00321-01}
[For optimization purposes,] an implementation may alter the points at
which task dispatching occurs, in an implementation-defined manner.
However, a delay_statement always corresponds to at least one task
dispatching point.
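For example (a sketch; the task and its workload are invented), a compute-bound task can reach a dispatching point voluntarily with a zero delay; whether this also moves the task to the tail of its ready queue depends on the dispatching policy in effect:

with Ada.Text_IO;
procedure Yield_Demo is
   task Busy_Worker;
   task body Busy_Worker is
   begin
      for I in 1 .. 3 loop
         Ada.Text_IO.Put_Line ("working");
         delay 0.0;   --  a delay_statement: always a task dispatching point
      end loop;
   end Busy_Worker;
begin
   null;
end Yield_Demo;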
NOTES
7 Section 9 specifies under which circumstances
a task becomes ready. The ready state is affected by the rules for task
activation and termination, delay statements, and entry calls.
When a task is not ready, it is said to be blocked.
8 An example of a possible implementation-defined
execution resource is a page of physical memory, which needs to be loaded
with a particular page of virtual memory before a task can continue execution.
9 The ready queues are purely conceptual;
there is no requirement that such lists physically exist in an implementation.
10 While a task is running, it is not on
any ready queue. Any time the task that is running on a processor is
added to a ready queue, a new running task is selected for that processor.
11 In a multiprocessor system, a task can
be on the ready queues of more than one processor. At the extreme, if
several processors share the same set of ready tasks, the contents of
their ready queues are identical, and so they can be viewed as sharing
one ready queue, and can be implemented that way. [Thus, the dispatching
model covers multiprocessors where dispatching is implemented using a
single ready queue, as well as those with separate dispatching domains.]
13 {AI95-00321-01}
The setting of a task's base priority as a result of a call to Set_Priority
does not always take effect immediately when Set_Priority is called.
The effect of setting the task's base priority is deferred while the
affected task performs a protected action.
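A sketch of this rule (all names invented): if Target is executing Guard.Long_Action when the main program calls Set_Priority, the change to Target's base priority takes effect only once that protected action completes.

with System;
with Ada.Dynamic_Priorities;
procedure Deferral_Demo is
   protected Guard is
      procedure Long_Action;
   end Guard;
   protected body Guard is
      procedure Long_Action is
      begin
         null;  --  while here, base priority changes to the caller are deferred
      end Long_Action;
   end Guard;

   task Target;
   task body Target is
   begin
      Guard.Long_Action;
   end Target;
begin
   --  May take effect only after Target leaves the protected action.
   Ada.Dynamic_Priorities.Set_Priority
     (System.Default_Priority + 1, Target'Identity);
end Deferral_Demo;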
Wording Changes from Ada 95
{AI95-00321-01}
This description is simplified to describe only the parts of the dispatching
model common to all policies. In particular, rules about preemption are
moved elsewhere. This makes it easier to add other policies (which may
not involve preemption).