In packet networks, resources are statistically shared among different traffic sources that try to send their data to the desired destinations. Overload causes congestion, which is resolved either by delaying or by dropping excess packets. Many of the problems that we face in networks are related to the allocation of a limited amount of shared resources (buffers, memory, bandwidth, etc.) to competing data flows. There are different solutions that try to meet the challenge of assuring high resource utilization and high application performance at the same time. Essentially, they can be grouped into two categories: end-system based solutions and network (device) based solutions. As latency∗ is a network device property, we are here interested only in the network based solutions.
End-to-end delay is a very important QoS parameter. A number of factors contribute to the delay experienced by a packet as it traverses a network: forwarding delay, queueing delay, propagation delay, and serialization delay. When scheduling algorithms are discussed, only the queueing delay is of interest. It denotes the amount of time that a packet has to wait in a queue, while the system performs statistical multiplexing and other packets are serviced, before it can be transmitted on the output port. Jitter is the variation in delay over time experienced by consecutive packets that belong to the same flow.
Given multiple packets awaiting transmission through an output link of a network device, it is the function of a scheduler and its algorithm to determine the exact order in which these packets are transmitted. Congestion occurs when packets arrive at the input ports faster than they can be transmitted on the output ports. The delay introduced by congestion can range anywhere from the time it takes to transmit the last bit of the previously arrived packet to infinity, when the packet is dropped due to buffer exhaustion. A good scheduler should have a bounded maximum delay (latency), low computational cost (complexity), and an easy implementation; it also has to be efficient and fair.
Since delay is a very important transmission parameter, a scheduling algorithm should provide end-to-end delay guarantees for individual flows† without severely under-utilizing network resources. One component of the end-to-end delay is the latency of the observed scheduler, and we study it closely in this paper. The notion of latency used here is based on the length of time it takes a new flow to begin receiving service at its reserved rate. Latency is therefore directly relevant to the size of the playback buffers required by real-time applications.
Sorted-priority schedulers differ in the manner in which they calculate the global virtual time function. Generally, they offer good fairness and a low latency bound, but their computational complexity is high, so they are not efficient to implement. In frame-based schedulers, on the other hand, time is split into frames of fixed or variable length. Reservations are made in terms of the maximum amount of traffic a flow is allowed to transmit during a frame period. For some of these schedulers the frame size can vary, so that the server does not stay idle if flows transmit less traffic than their reservations over the duration of the frame. In frame-based schedulers the scheduler simply visits all non-empty queues in round-robin order. The service received by a flow in one round-robin opportunity is proportional to its fair share of the bandwidth. These schedulers do not have to sort packets and do not have to maintain a global virtual time function, so they have lower computational complexity than the sorted-priority schedulers. Deficit Round Robin (DRR), Surplus Deficit Round Robin, Elastic Round Robin, and Nested Round Robin are some of the frame-based schedulers with complexity O(1), but they have worse fairness and latency properties than the sorted-priority schedulers.
In 1996, Shreedhar and Varghese proposed one of the most popular frame-based scheduling algorithms, Deficit Round Robin (DRR). The main characteristic of all DRR-like scheduling algorithms is their ability to provide a guaranteed service rate for each flow (queue). DRR services flows in strict round-robin order. It has complexity O(1) and is easy to implement. Its latency is comparable to that of other frame-based schedulers. A detailed description of the operation of the DRR algorithm can be found in .
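The core mechanism of DRR can be sketched in a few lines of code. This is an illustrative reimplementation, not code from the paper: the function name `drr_schedule`, the flow identifiers, and the packet sizes below are hypothetical. Each flow is assigned a quantum proportional to its reserved rate; a deficit counter accumulates unused credit across rounds, and the head packet is transmitted only when the accumulated deficit covers its size.

```python
from collections import deque

def drr_schedule(queues, quanta, rounds):
    """Sketch of Deficit Round Robin.

    queues: dict mapping flow id -> deque of packet sizes (bytes)
    quanta: dict mapping flow id -> quantum per round (bytes)
    rounds: number of round-robin rounds to simulate
    Returns the transmission order as a list of (flow, packet_size).
    """
    deficit = {f: 0 for f in queues}
    order = []
    for _ in range(rounds):
        for f in queues:                      # strict round-robin visit
            if not queues[f]:
                deficit[f] = 0                # an idle flow keeps no credit
                continue
            deficit[f] += quanta[f]           # grant this round's quantum
            # send head packets while they fit in the accumulated deficit
            while queues[f] and queues[f][0] <= deficit[f]:
                pkt = queues[f].popleft()
                deficit[f] -= pkt
                order.append((f, pkt))
    return order
```

For example, with flow A (quantum 500, packets of 300 bytes) and flow B (quantum 1000, packets of 800 bytes), each round lets A send one 300-byte packet and B one 800-byte packet, so the service each flow receives per round stays proportional to its quantum.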
Stiliadis and Varma in  defined a general class of schedulers, called Latency-Rate (LR) servers. The behavior of an LR server is determined by two parameters: the latency and the allocated rate. Intuitively, the latency of an LR server is the worst-case delay seen by the first packet of a busy flow, that is, the packet arriving when the flow's queue is empty.
The latency of a particular scheduling policy may depend on its internal parameters: the transmission rate of the outgoing link, the number of flows sharing the link, and their allocated rates.
In this definition of LR servers, the authors made no assumption on whether the server is based on a fluid model or a packet-by-packet model. The only requirement is that a packet is not considered as having departed the server until its last bit has departed. Therefore, packet departures must be treated as impulses. The DRR algorithm satisfies all of these assumptions.
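For reference, the service guarantee that defines an LR server in Stiliadis and Varma's model can be stated as follows, using their usual notation: if $W_i(t_0, t)$ denotes the service received by flow $i$ during an interval $(t_0, t]$ of one of its busy periods, $\rho_i$ its allocated rate, and $\Theta_i$ the latency of the server, then

```latex
W_i(t_0, t) \;\ge\; \max\bigl(0,\; \rho_i\,(t - t_0 - \Theta_i)\bigr).
```

In words: after an initial interval of length at most $\Theta_i$, a busy flow is guaranteed service at no less than its allocated rate.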
The authors also developed the notion of the latency of a scheduling algorithm and determined an upper bound on the latency for a number of schedulers that belong to the class of LR servers. This notion of latency is based on the length of time it takes a new flow to begin receiving service at its guaranteed rate.
Using the general idea of Stiliadis and Varma in , we derive an upper latency bound for the DRR algorithm that is different from theirs. It is also different from the bounds derived in . We show that our upper bound is mathematically correct, in contrast to the ones derived in  and . A more detailed analysis is given in  and .
Let us first define the active and busy periods of a flow.
Definition 1 An active period of a flow is the maximal interval of time during which the flow has at least one packet awaiting service or in service.
Definition 2 A busy period of a flow is the maximal time interval during which the flow would be active if served at exactly its reserved rate.
The active period reflects the actual behavior of the scheduler, since the service offered to a flow varies with the number of active flows. The busy period is a mathematical construction that tells us how long a flow would remain active if served at exactly its reserved rate.
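A small numerical example (with hypothetical numbers) illustrates the distinction. Suppose a flow with reserved rate $\rho = 1\,\mathrm{Mb/s}$ receives a 4 Mb burst at $t = 0$ and nothing afterwards. Served at exactly its reserved rate, the flow would need

```latex
\frac{4\,\mathrm{Mb}}{1\,\mathrm{Mb/s}} = 4\,\mathrm{s}
```

to drain its queue, so its busy period is $[0, 4\,\mathrm{s}]$ regardless of the service it actually receives. If the scheduler happens to serve the flow at 2 Mb/s, its active period is only $[0, 2\,\mathrm{s}]$; if it serves the flow more slowly than 1 Mb/s, the active period outlasts the busy period.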
Stiliadis and Varma in  defined the general class of Latency-Rate (LR) servers. Using the general ideas of Stiliadis and Varma in  and of Shreedhar and Varghese in , we have derived an upper latency bound for the DRR scheduling algorithm. We have shown that our upper bound is unique regardless of the approach used: we have derived the same result using two different approaches (see  for details). It should be mentioned that even though we have used the same ideas as Stiliadis and Varma in  and Shreedhar and Varghese in , we have made some changes in the derivation, leading to a new and mathematically correct latency bound.
References
Anton Kos: Zagotavljanje različnih stopenj kakovosti storitve v omrežjih s paketnim prenosom podatkov, Doctoral Thesis, Faculty of Electrical Engineering, University of Ljubljana, Slovenia, February 2006
Anton Kos, Jelena Miletić: Sub Critical Deficit Round Robin, Technical Report, Faculty of Electrical Engineering, University of Ljubljana, Slovenia, July 2005
Salil S. Kanhere, Harish Sethu: On the Latency Bound of Deficit Round Robin, Proceedings of the International Conference on Computer Communications and Networks, Miami, Florida, USA, October 14-16, 2002
M. Shreedhar, George Varghese: Efficient Fair Queuing Using Deficit Round Robin, IEEE/ACM Transactions on Networking, Volume 4, Issue 3, June 1996
D. Stiliadis, A. Varma: Latency-Rate Servers – A General Model for Analysis of Traffic Scheduling Algorithms, IEEE/ACM Transactions on Networking, Volume 6, Issue 5, October 1998