Virtual Laxity Driven Task Scheduling for Multiprocessor Real-Time Systems
Date
2012-08-07
Authors
Seçinti, Gökhan
Publisher
Institute of Science and Technology
Abstract
This thesis presents a real-time task scheduling algorithm for multiprocessor systems. The system is assumed to consist of identical processors, and the task set is assumed to contain only periodic tasks with implicit deadlines. Although recent studies in this area concentrate on the principle of fairness, the proposed algorithm aims to produce higher-performance schedules without being bound by these constraints. The proposed Virtual Laxity Driven Scheduling (VLDS) algorithm ignores the fairness bounds, thereby reducing the number of context switches and improving the average response times of the jobs. At the same time, VLDS can also schedule task sets that can only be solved by fair scheduling. The VLDS algorithm consists of two main parts: the first contains the algorithm that partitions the current task set into subsets for particular time intervals during scheduling, and the second performs the scheduling within these subsets. In VLDS, the task subsets are formed according to the nearest deadline $D_{n}$. The aim is to ensure that the jobs to be scheduled share the same deadline; once this is achieved, the problem can be solved with algorithms such as LLF (Least Laxity First). In this work, a preemption constraint has been added to LLF in order to prevent the high number of context switches that LLF may produce. With the Least Laxity First with Preemption Constraints (LLFPC) algorithm, the number of context switches in the generated schedules is reduced, and the need to run the scheduler at every quantum is removed. Before LLFPC runs, a subset must first be formed from the jobs waiting in the system. First, the nearest deadline $D_{n}$ is determined, and a work subset is built for the interval defined by this deadline. During subset construction, all jobs are divided into two sets according to their deadlines: the High Priority (HP) set, containing the jobs whose deadline equals $D_{n}$, and the Low Priority (LP) set, containing the jobs whose deadline is later than $D_{n}$. To avoid deadline misses, all jobs in HP must be included in the subset. Some jobs in the LP set, although their deadlines are later, may still risk missing their deadlines within this interval. Therefore, LP tasks whose laxity is smaller than the width of the interval for which the subset is built ($ \left(D_{n} - t \right) > l $) must receive at least $(D_{n} - t - l)$ units of processor time so that they do not miss their deadlines. After this assignment, the remaining idle processor time is distributed among the LP jobs according to their laxity values, which completes the construction of the work subset. When the nearest deadline ($D_{n}$) is reached, the procedure is repeated to build a new work subset.
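To make the subset construction above concrete, the following is a minimal sketch of the HP/LP split, the mandatory $(D_{n} - t - l)$ allocation, and the laxity-ordered distribution of the leftover idle time. The `Task` fields, the function name `build_subworkset`, and the greedy fill of the leftover time are illustrative assumptions, not the thesis code.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    remaining: int   # remaining execution time, in time slots
    deadline: int    # absolute deadline

def build_subworkset(tasks, t, m):
    """Sketch: slots assigned to each task for the interval [t, D_n)."""
    d_n = min(task.deadline for task in tasks)        # nearest deadline
    interval = d_n - t
    idle = m * interval                               # slots offered by m processors

    hp = [task for task in tasks if task.deadline == d_n]
    lp = [task for task in tasks if task.deadline > d_n]

    def laxity(task):
        return task.deadline - t - task.remaining

    assigned = {}
    for task in hp:                                   # HP jobs must finish in this interval
        assigned[task.name] = task.remaining
        idle -= task.remaining

    for task in lp:                                   # mandatory share for at-risk LP jobs
        if laxity(task) < interval:                   # i.e. (D_n - t) > l
            assigned[task.name] = interval - laxity(task)   # at least (D_n - t - l)
            idle -= assigned[task.name]

    if idle < 0:
        raise ValueError("infeasible in this interval")

    for task in sorted(lp, key=laxity):               # leftover idle time, lowest laxity first
        already = assigned.get(task.name, 0)
        extra = min(idle, task.remaining - already, interval - already)
        if extra > 0:
            assigned[task.name] = already + extra
            idle -= extra
    return assigned
```

Note that a task never receives more than `interval` slots in this sketch, since it can occupy at most one processor at a time within the interval.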
Thanks to the LLFPC algorithm used within the constructed work subsets, VLDS reduces the number of preemptions that may occur while jobs are executing, and since it can distribute time among the LP jobs that are not at risk of missing their deadlines without observing any fairness constraint, it lowers the number of context switches in the schedules it produces and shortens the average response times of the jobs.
An optimal real-time scheduling algorithm for multiprocessor systems is presented in this thesis. The system is assumed to consist of m identical processors and to contain only periodic tasks with implicit deadlines. Although recent studies focus on the notion of fairness, the proposed algorithm improves the average response time of the tasks and decreases the number of context switches by reducing the fragmented execution of the tasks, without any fairness constraints. Traditional scheduling algorithms such as Earliest Deadline First (EDF) and Least Laxity First (LLF) are widely used in single-processor systems, but they have been shown to be unsuitable for multiprocessor scheduling problems. In this thesis, a new algorithm, Virtual Laxity Driven Scheduling (VLDS), is proposed. VLDS ignores fairness for the sake of both fewer context switches and shorter response times; nevertheless, it can also schedule task sets that could otherwise only be scheduled by a fair scheduler. VLDS is composed of two algorithms that work in sequence: one creates sub-work sets from the global task set (GTS), and the other schedules these sub-work sets. The basic principle of VLDS is to create sub-work sets according to the nearest deadline, so that all tasks in a sub-work set share the same deadline; the scheduling problem over the sub-work set can then be solved with Least Laxity First (LLF). However, the number of context switches increases dramatically, even for small-scale systems, when there are tasks with equal laxity values. A simple derivative of LLF that constrains preemption operations, Least Laxity First with Preemption Constraints (LLFPC), has therefore been proposed to reduce the number of context switches. Introducing preemption constraints into LLF also reduces the scheduling overhead, since laxity calculations are no longer needed at every quantum. LLFPC assigns the least-laxity tasks first and defines the next preemption point according to the laxity value of the unassigned task with the least laxity, so no context switch occurs until an unassigned task reaches zero laxity. When a preemption occurs, the algorithm reassigns the tasks according to their new laxity values and computes the next expected preemption time.
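As a rough illustration of the LLFPC dispatching rule just described, the sketch below (reusing the illustrative `Task` fields from the earlier sketch) selects the m least-laxity ready tasks and postpones the next scheduling decision until the most urgent waiting task would reach zero laxity. The function name and structure are assumptions for illustration only.

```python
def llfpc_dispatch(ready_tasks, t, m):
    """Sketch: choose the tasks to run at time t and the next preemption time."""
    def laxity(task):
        return task.deadline - t - task.remaining

    by_laxity = sorted(ready_tasks, key=laxity)
    running = by_laxity[:m]                 # least-laxity tasks get the processors
    waiting = by_laxity[m:]

    if not waiting:
        next_preemption = None              # nothing is waiting, so no laxity-driven preemption
    else:
        # the most urgent waiting task must be started before its laxity runs out
        next_preemption = t + min(laxity(task) for task in waiting)

    return running, next_preemption
```

Between two such preemption points no laxity recomputation is needed, which is where the quantum-by-quantum overhead of plain LLF is saved.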
The first step of VLDS when creating a sub-work set is to determine the nearest deadline ($D_{n}$). The duration between the current time and $D_{n}$ is called the interval, and all calculations are bounded by this interval. Once $D_{n}$ is determined, the total number of time slots provided by the processors within the interval is calculated. Tasks with deadline $D_{n}$ are scheduled within the interval, but additional time slots may remain available for other tasks. To distribute these time slots, the tasks are grouped into High Priority (HP) and Low Priority (LP) sets. The deadlines of the HP tasks are equal to $D_{n}$ and their executions have to be completed in the current interval; therefore, the remaining execution time of each HP task must be included in the created work set in order to meet its deadline. After the HP tasks are included, the remaining idle time is distributed among the LP tasks. LP tasks whose laxity is smaller than the interval could miss their deadlines if they are not scheduled in the current interval. In order to prevent negative laxity (a deadline miss), at least $( D_{n} - l_{T_{i}} )$ time slots must be assigned to each LP task whose laxity is lower than the nearest deadline. After these mandatory assignments, there must still be idle time left to assign to LP tasks; the proof of optimality shows that the given task set is impossible to schedule if no idle time remains once the distribution needed to prevent deadline misses for the HP and LP tasks in the current interval has been made. The remaining idle time is then distributed among the LP tasks according to their laxity values: starting with the LP task with the lowest laxity, each task is assigned the maximum idle time it can use, until either no idle time is left in the current interval or no waiting task is left in the GTS. This last distribution provides the improvements the algorithm offers: the idle time slots left after the necessary distribution are assigned to LP tasks without any fairness constraints, which reduces the fragmented execution of the tasks, i.e., decreases the number of context switches.

Once all possible time slots in the interval have been distributed, an LP task assigned to the current work set can be considered as virtually divided into two parts. The first part contains the information used by the LLFPC algorithm while scheduling the current sub-work set; the second part holds the information that belongs to the remaining part of the task. Assigning time slots to a task can thus be seen as dividing it into two parts, with the assigned values defining the virtual values of that task. Within the current work set, tasks are distinguished under two categories. In the first category, tasks carry the states of the LLFPC algorithm: a task may be in the Ready, Virtual Zero Laxity, or Running state at any time during the interval for which the current work set is created. The second category depends on the real deadline of the task: a task belongs either to the High Priority state or to the Low Priority state, and no transition occurs between these states until the end of the interval; this category has no effect on the decisions of the LLFPC algorithm. If the total number of time slots required by the Low Priority tasks is equal to or greater than the idle time left in the interval after the High Priority tasks have been assigned (totalTimeSlotsRequiredforLPTask $\geq$ idleTime), then the scheduler cannot assign the required time to the LP tasks and deadline misses become unavoidable. It is shown that whenever this condition holds, the total utilization of the GTS is greater than the number of processors.
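The utilization argument above can be illustrated with the standard necessary condition for implicit-deadline periodic tasks on m identical processors. The function below is a worked example of that bound only, not part of the VLDS algorithm; its name and the task representation are assumptions.

```python
def utilization_feasible(tasks, m):
    """tasks: iterable of (execution_time, period) pairs with deadline == period."""
    total_utilization = sum(c / p for c, p in tasks)
    return total_utilization <= m           # necessary condition on m identical processors

# Example: utilizations 0.8 + 0.7 + 0.6 = 2.1 fit on 3 processors but not on 2.
assert utilization_feasible([(8, 10), (7, 10), (6, 10)], 3)
assert not utilization_feasible([(8, 10), (7, 10), (6, 10)], 2)
```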
STORM (Simulation TOol for Real-time Multiprocessor scheduling) has been used to develop and test the algorithms. STORM provides a discrete-time simulation environment for multiprocessor systems that may contain periodic and aperiodic tasks. A work-conserving and optimal scheduling algorithm, VLDS, has been presented, and it has been shown that, compared with fair scheduling algorithms, VLDS decreases the number of context switches and improves the average response times of the tasks. This thesis shows that fairness can be relaxed, or even ignored in some cases, to obtain better schedules.

As future work, cache awareness could be implemented to reduce context switches and migrations at the points where one interval ends and the next begins. Arbitrary assignments of tasks to processors could then be replaced with assignment decisions that take the cache contents of the processors into account.
Description
Thesis (M.Sc.) -- İstanbul Technical University, Institute of Science and Technology, 2012
Keywords
Real-Time Systems,
Scheduling,
Task Scheduling,
Multiprocessor Systems