To add a new scheduling policy to the Linux kernel, it is necessary to create a new scheduler module. In the SCLS implementation, the CASIO module was added at the top of the module hierarchy, making it the highest priority module. The scheduler modules therefore become hierarchically organized, as shown in Figure 2 below.
Figure 2. Priority hierarchy of scheduler modules
Note that each scheduler module is coded in its own file. Currently, the Linux kernel has three native scheduler modules: RT (Real-Time), CFS (Completely Fair Scheduling), and Idle.
The RT, CFS, and Idle modules are coded in the /kernel_source_code/kernel/sched_rt.c, /kernel_source_code/kernel/sched_fair.c, and /kernel_source_code/kernel/sched_idletask.c files, respectively. To implement the CASIO module, we created the /kernel_source_code/kernel/sched_casio.c file.
According to the rules of the modular scheduling framework, each module must implement a set of functions specified in the sched_class structure. Listing 7 below shows the definition of casio_sched_class, which implements the CASIO module.
The first field of this structure (next) is a pointer to a sched_class; it is used to organize the scheduler modules by priority in a linked list. The scheduler core, starting with the highest priority scheduler module, looks for a runnable task in each module in decreasing priority order.
Since casio_sched_class is the highest priority scheduler module, its next field points to the next lower priority scheduler module, struct rt_sched_class, which implements the RT module. (Later we will explain how to declare the highest priority scheduler module.)
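This priority-ordered traversal can be illustrated with a minimal userspace sketch. The struct below is a simplified stand-in for the kernel's sched_class (the real structure has many more callbacks), and the picker functions are hypothetical:

```c
#include <stddef.h>

/* Simplified stand-in for the kernel's struct sched_class. */
struct sched_class {
    const struct sched_class *next;   /* next lower priority module */
    int (*pick_next_task)(void);      /* returns a task id, or -1 if none */
};

/* Hypothetical pickers: the CASIO module has nothing runnable here,
 * but the RT module does. */
static int pick_casio(void) { return -1; }
static int pick_rt(void)    { return 42; }

static const struct sched_class fair_class  = { NULL, NULL };
static const struct sched_class rt_class    = { &fair_class, pick_rt };
static const struct sched_class casio_class = { &rt_class, pick_casio };

/* Walk the class list from the highest priority module down until a
 * runnable task is found, as the scheduler core does. */
int schedule_pick(const struct sched_class *highest)
{
    for (const struct sched_class *c = highest; c != NULL; c = c->next) {
        if (c->pick_next_task) {
            int task = c->pick_next_task();
            if (task >= 0)
                return task;
        }
    }
    return -1;  /* nothing runnable in any class */
}
```

Here the CASIO class yields nothing, so the walk falls through to the RT class, mirroring how the core consults each module in turn.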
Listing 7. casio_sched_class scheduling class
The other fields are function pointers that act as callbacks for specific events; they are described in the following sections. In these descriptions it is assumed that the CASIO tasks are already stored in the linked list whose head is the casio_list field of struct casio_rq.
Later, we will explain how and when tasks are added to that list. For now, note that some auxiliary functions are used to explain the behavior of these callbacks, but their source code is not shown in this document.
The enqueue_task_casio function (Listing 8 below) is called whenever a CASIO task enters a runnable state. It receives two pointers: one to the run-queue of the processor that is running this code (rq) and another to the task that is entering the runnable state (p).
By invoking the find_casio_task_list function, enqueue_task_casio gets the pointer to the struct casio_task stored in the linked list of struct casio_rq that refers to task p. It then updates the absolute deadline and inserts the casio_task into the red-black tree (insert_casio_task_rb_tree).
The kernel native sched_clock function returns the current time in nanoseconds (it is defined in the /kernel_source_code/kernel/sched.c file). Additionally, enqueue_task_casio registers this event with the CASIO logging system (described later in this series).
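The deadline stamping and ordered insertion can be sketched in userspace as follows. This is a simplified model, not the kernel code: a sorted singly linked list stands in for the red-black tree, and fake_clock() stands in for sched_clock():

```c
#include <stddef.h>

/* Userspace sketch of the enqueue logic; field and function names
 * loosely follow the article but are simplified stand-ins. */
struct casio_task {
    unsigned long long absolute_deadline;  /* ns, computed on enqueue */
    unsigned long long relative_deadline;  /* ns, fixed per task */
    struct casio_task *rb_next;            /* sorted-"tree" link */
};

static struct casio_task *rb_head;         /* earliest deadline first */

/* Stand-in for sched_clock(): a fixed "now" in nanoseconds. */
static unsigned long long fake_clock(void) { return 1000; }

/* Ordered insert, mimicking insert_casio_task_rb_tree(). */
static void insert_casio_task_rb_tree(struct casio_task *t)
{
    struct casio_task **p = &rb_head;
    while (*p && (*p)->absolute_deadline <= t->absolute_deadline)
        p = &(*p)->rb_next;
    t->rb_next = *p;
    *p = t;
}

/* Core of the enqueue path: stamp the absolute deadline from the
 * current time plus the relative deadline, then insert in order. */
void enqueue_casio(struct casio_task *t)
{
    t->absolute_deadline = fake_clock() + t->relative_deadline;
    insert_casio_task_rb_tree(t);
}
```

The real module performs the same two steps against an rb-tree, which keeps the earliest-deadline task at the leftmost node in logarithmic time.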
Listing 8. enqueue_task_casio function
When a CASIO task is no longer runnable, the dequeue_task_casio function is called, which undoes the work of the enqueue_task_casio function (Listing 9 below). Note that if the task is not being executed, it is removed from the linked list (rem_casio_task_list).
As its name suggests, the check_preempt_curr_casio function checks whether the currently running task must be preempted. It is invoked after a task is enqueued or dequeued, and it only sets a flag that indicates to the scheduler core that the currently running task must be preempted (through the kernel native resched_task function).
Note that the struct task_struct pointer rq->curr points to the task that is currently running on the processor.
The currently running task must be preempted (Listing 10 below) in two cases: when there is at least one CASIO task on the run-queue and the task currently assigned to the processor (rq->curr) is not a CASIO task, or when the currently running task is a CASIO task and there is at least one CASIO task on the red-black tree with an earlier deadline.
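The two preemption cases reduce to a short predicate. The sketch below uses simplified stand-in fields (the real check reads rq->curr and the rb-tree's leftmost node):

```c
#include <stdbool.h>

/* Simplified stand-in for the run-queue state the check inspects. */
struct rq_sketch {
    int nr_casio_running;                 /* CASIO tasks on this run-queue */
    bool curr_is_casio;                   /* is rq->curr a CASIO task? */
    unsigned long long curr_deadline;     /* rq->curr's absolute deadline */
    unsigned long long earliest_deadline; /* leftmost rb-tree node's deadline */
};

/* Returns true when the scheduler core should be told, via
 * resched_task(), to preempt the currently running task. */
bool casio_should_preempt(const struct rq_sketch *rq)
{
    if (rq->nr_casio_running == 0)
        return false;                     /* no CASIO work at all */
    if (!rq->curr_is_casio)
        return true;                      /* a CASIO task beats any non-CASIO task */
    /* EDF among CASIO tasks: preempt only for a strictly earlier deadline. */
    return rq->earliest_deadline < rq->curr_deadline;
}
```

Note the strict comparison in the last case: if the running task itself holds the earliest deadline, no preemption is requested.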
Listing 10. check_preempt_curr_casio function
The pick_next_task_casio function (Listing 11 below) selects the task to be executed by the current processor. It is invoked by the scheduler core whenever the currently running task is marked to be preempted.
The CASIO task with the earliest absolute deadline (returned by the earliest_deadline_casio_task_rb_tree function) is chosen for execution. Note that if there is no CASIO task to execute on the processor, this function returns NULL, and the scheduler core then tries to find a task in the next lower priority scheduling class.
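The selection itself is plain earliest-deadline-first. As a self-contained sketch (an array scan here; the kernel gets the same answer in O(1) from the rb-tree's cached leftmost node), returning -1 where the kernel returns NULL:

```c
/* Sketch of EDF selection: pick the index of the CASIO task with the
 * earliest absolute deadline, or -1 when there are none, so the caller
 * can fall through to the next lower priority class. */
int pick_next_casio(const unsigned long long *deadlines, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (best < 0 || deadlines[i] < deadlines[best])
            best = i;
    return best;
}
```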
Listing 11. pick_next_task_casio function
Declare the Highest Priority Scheduler Module
After coding the new scheduler module, it must be included in the scheduler core file (/kernel_source_code/kernel/sched.c), and the scheduler core must be told which module is the highest priority one.
Listing 12 below shows the changes to the /kernel_source_code/kernel/sched.c file that include the file implementing the CASIO module (sched_casio.c) and inform the scheduler core that casio_sched_class is now the highest priority scheduler module.
Listing 12. Inclusion and definition of the highest scheduling class
Defining a CASIO task
In the previous sections we described the implementation of the CASIO scheduler module. However, before a task can be scheduled, it first has to exist in the system.
Initially, a CASIO task is created like any other task in the system, using the fork or clone system calls. To become a CASIO task, its scheduling policy must then be changed. This section describes how to do that.
To set the scheduling parameters required by a CASIO task, the struct sched_param data structure must be changed. This has to be done in two files: /usr/include/bits/sched.h and /kernel_source_code/include/linux/sched.h.
Listing 13 below shows the fields added to the struct sched_param data structure. Once again, the /usr/include/bits/sched.h file is outside the kernel code, so the CONFIG_SCHED_CASIOPOLICY configuration option must be commented out or removed.
Two fields were added to the struct sched_param data structure: one is an identifier (casio_id), and the other sets the relative deadline (deadline) of a CASIO task.
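Assuming the two new fields are simply appended after the existing priority field, a userspace mirror of the modified structure would look roughly like this (field names follow the article; the exact types and layout in the real headers may differ):

```c
/* Userspace mirror of the modified struct sched_param; renamed here to
 * avoid clashing with the system header's own definition. */
struct sched_param_casio {
    int sched_priority;              /* existing field */
    unsigned int casio_id;           /* added: CASIO task identifier */
    unsigned long long deadline;     /* added: relative deadline, in ns */
};

/* A task would then request the CASIO policy roughly like:
 *
 *   struct sched_param_casio p = { 0, 1, 100000000ULL };
 *   sched_setscheduler(0, SCHED_CASIO, (struct sched_param *)&p);
 *
 * where SCHED_CASIO is the policy constant introduced by the patch
 * (its numeric value depends on the kernel changes, so the call is
 * shown only as a comment here). */
```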
Listing 13. Changes on the struct sched_param data structure
The kernel native rt_policy function (defined in the /kernel_source_code/kernel/sched.c file) decides whether a given scheduling policy belongs to the real-time class (SCHED_RR and SCHED_FIFO) or not.
We changed this function to include SCHED_CASIO in the real-time class, since SCHED_CASIO has higher priority than SCHED_RR and SCHED_FIFO. Listing 14 below shows the changes made to the rt_policy function.
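The modified check amounts to adding one more policy to the real-time set. A sketch (SCHED_FIFO and SCHED_RR carry their usual kernel values; the value chosen for SCHED_CASIO here is a hypothetical placeholder, since the actual value depends on the patch):

```c
/* SCHED_FIFO and SCHED_RR have these values in the Linux headers;
 * the SCHED_CASIO value below is a hypothetical placeholder. */
#define SCHED_FIFO  1
#define SCHED_RR    2
#define SCHED_CASIO 6   /* hypothetical */

/* Sketch of the modified rt_policy(): SCHED_CASIO now counts as
 * belonging to the real-time class alongside FIFO and RR. */
int rt_policy(int policy)
{
    return policy == SCHED_FIFO ||
           policy == SCHED_RR   ||
           policy == SCHED_CASIO;
}
```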
Listing 14. Changes on the rt_policy function
The static priority is the priority assigned to the process when it is started. It can be modified with the nice and sched_setscheduler system calls. This is done by setting the sched_class field of the struct task_struct variable that represents the task in the system to the address of the new scheduling class variable.
Listing 15. Changes on the setscheduler function
Listing 15 above shows how this is done in the __setscheduler function, which is called by the sched_setscheduler function, as shown in Listing 16 below.
The sched_setscheduler function sets the scheduling policy and scheduling parameters of the process specified by the pointer p, using the parameters specified by the pointer param.
Listing 16. Changes on the sched_setscheduler function
Besides these actions, this function invokes the add_casio_task_2_list function to add the task pointed to by p to the linked list of the struct casio_rq data structure.
Next in Part 3: The SCHED_CASIO Logging System
To read Part 1, go to: A new Linux Scheduling Policy
1. Source code for the SCHED_CASIO Linux Scheduler
1. A. Kumar. Multiprocessing with the Completely Fair Scheduler. Technical report, IBM, 2008.
2. D. Bovet and M. Cesati. Understanding the Linux Kernel. O'Reilly & Associates, Inc., 2005.
3. Wolfgang Mauerer. Professional Linux Kernel Architecture. Wiley Publishing, Inc., 2008.
4. C. L. Liu and J. W. Layland. Scheduling algorithms for multiprogramming in a hard-real-time environment. J. ACM, 20(1):46-61, 1973.
5. John A. Stankovic. Misconceptions about real-time computing: A serious problem for next-generation systems. Computer, 21(10):10-19, 1988.
6. Arezou Mohammadi and Selim G. Akl. Scheduling algorithms for real-time systems. Technical Report No. 2005-499, 2005.
7. C. L. Liu. Scheduling algorithms for hard-real-time multiprogramming of a single processor. JPL Space Programs Summary, II(1):37-60, 1969.
Luis Lino Ferreira is a researcher on QoS architectures and algorithms for mobile distributed systems at the CISTER Research Group and a Professor at the Polytechnic Institute of Porto.