<?xml version="1.0" encoding="US-ASCII"?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd" [
<!ENTITY RFC2119 SYSTEM 
  "http://xml.resource.org/public/rfc/bibxml/reference.RFC.2119.xml">
<!ENTITY RFC2212 SYSTEM 
  "http://xml.resource.org/public/rfc/bibxml/reference.RFC.2212.xml">
<!ENTITY RFC3393 SYSTEM 
  "http://xml.resource.org/public/rfc/bibxml/reference.RFC.3393.xml">
<!ENTITY RFC8174 SYSTEM 
  "http://xml.resource.org/public/rfc/bibxml/reference.RFC.8174.xml">
<!ENTITY RFC8655 SYSTEM 
  "http://xml.resource.org/public/rfc/bibxml/reference.RFC.8655.xml">
<!ENTITY RFC8938 SYSTEM 
  "http://xml.resource.org/public/rfc/bibxml/reference.RFC.8938.xml">
<!ENTITY I-D.liu-detnet-large-scale-requirements SYSTEM 
  "http://xml.resource.org/public/rfc/bibxml-ids/reference.I-D.liu-detnet-large-scale-requirements.xml">
<!ENTITY RFC9320 SYSTEM 
  "http://xml.resource.org/public/rfc/bibxml/reference.RFC.9320.xml">
<!ENTITY I-D.yizhou-detnet-ipv6-options-for-cqf-variant SYSTEM 
  "http://xml.resource.org/public/rfc/bibxml-ids/reference.I-D.yizhou-detnet-ipv6-options-for-cqf-variant.xml">
]>

<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?> <!-- used by XSLT processors -->

<!-- OPTIONS, known as processing instructions (PIs) go here. -->
<!-- For a complete list and description of PIs,
     please see http://xml.resource.org/authoring/README.html. -->
<!-- Below are generally applicable PIs that most I-Ds might want to use. -->
<?rfc strict="yes" ?> <!-- give errors regarding ID-nits and DTD validation -->
<!-- control the table of contents (ToC): -->
<?rfc toc="yes"?> <!-- generate a ToC -->
<?rfc tocdepth="3"?> <!-- the number of levels of subsections in ToC. default: 3 -->
<!-- control references: -->
<?rfc symrefs="yes"?> <!-- use symbolic references tags, i.e, [RFC2119] instead of [1] -->
<?rfc sortrefs="yes" ?> <!-- sort the reference entries alphabetically -->
<!-- control vertical white space: 
     (using these PIs as follows is recommended by the RFC Editor) -->
<?rfc compact="yes" ?> <!-- do not start each main section on a new page -->
<?rfc subcompact="no" ?> <!-- keep one blank line between list items -->
<!-- end of popular PIs -->

<rfc category="info" 
     docName="draft-joung-detnet-asynch-detnet-framework-04" 
     ipr="trust200902">

<front>
 <title abbrev="Asynchronous DetNet Framework">
       Asynchronous Deterministic Networking Framework for Large-Scale Networks 
 </title>

 <author fullname="Jinoo Joung" initials="J." surname="Joung">
  <organization>Sangmyung University</organization>
  <address>
   <!-- <postal> </postal> --> 
   <!-- <phone></phone>  -->
   <!-- <facsimile/> -->
   <email>jjoung@smu.ac.kr</email>
   <!-- <uri/> -->
  </address>
 </author>

 <author fullname="Jeong-dong Ryoo" initials="J." surname="Ryoo">
  <organization>ETRI</organization>
  <address>
    <email>ryoo@etri.re.kr</email>
  </address>
 </author>

 <author fullname="Taesik Cheung" initials="T." surname="Cheung">
   <organization>ETRI</organization>
   <address>
    <email>cts@etri.re.kr</email>
    </address>
 </author>

 <author fullname="Yizhou Li" initials="Y." surname="Li">
   <organization>Huawei</organization>
   <address>
    <email>liyizhou@huawei.com</email>
    </address>
 </author>

 <author fullname="Peng Liu" initials="P." surname="Liu">
   <organization>China Mobile</organization>
   <address>
    <email>liupengyjy@chinamobile.com</email>
    </address>
 </author>



 <date />
<!--
 <date day="22" month="May" year="2019" />
-->

 <area>Routing Area</area>
 <workgroup>DetNet Working Group</workgroup>

 <keyword>DetNet</keyword> 
 <keyword>Asynchronous DetNet</keyword> 

 <abstract>
   <t>
   This document describes solutions for Asynchronous Deterministic
   Networking (ADN) in large-scale networks. 
   These solutions do not require strict time synchronization among network nodes,
   while still guaranteeing end-to-end latency or jitter bounds.
   The functional architecture and requirements for such solutions are specified.
   </t>
 </abstract>
</front>

<middle>
 <section title="Introduction">
   <t>
   Deterministic Networking (DetNet) provides a capability to carry specified 
   unicast or multicast data flows for real-time applications with extremely 
   low data loss rates and bounded latency within a network domain.   
   The architecture of DetNet is defined in RFC 8655 <xref target="RFC8655"/>, 
   and the overall framework for DetNet data plane is provided in 
   RFC 8938 <xref target="RFC8938"/>. 
   Various documents on DetNet IP (Internet Protocol) and 
   MPLS (Multi-Protocol Label Switching) data planes and their
   interworking with Time-Sensitive Networking (TSN) have been standardized. 
   Technical elements necessary to extend DetNet to a large-scale network 
   spanning multiple administrative domains are identified in 
   <xref target="I-D.liu-detnet-large-scale-requirements"/>.
   </t>
   <t>
   This document considers the problem of guaranteeing both latency upper bounds
   and jitter upper bounds in large-scale networks with any type of 
   topology, with random dynamic input traffic. 
   The jitter is defined as the latency difference between two packets 
   within a flow, not a difference from a clock signal
   or from an average latency, as is summarized in RFC 3393 
   <xref target="RFC3393"/>.
   </t>
   <t>
   In large-scale networks, the end-nodes join and leave, 
   and a large number of flows are dynamically generated and terminated. 
   Achieving satisfactory deterministic performance in such environments 
   would be challenging. 
    The current Internet, which has adopted the DiffServ architecture, 
    suffers from burst accumulation and cyclic dependency, 
    mainly due to FIFO queuing and strict priority scheduling. 
   Cyclic dependency is defined as a situation wherein 
   the graph of interference between flow paths has cycles 
   <xref target="THOMAS"/>.
   The existence of such cyclic dependencies makes the proof of determinism
   a much more challenging issue and can lead to system instability,
   that is, unbounded delays 
   <xref target="ANDREWS"/><xref target="BOUILLARD"/>. 
    The Internet architecture does not provide an explicit solution for 
    bounding jitter either. 
   Solving the problem of latency and jitter as a joint 
   optimization problem would be even more difficult.
   </t>
   <t>
   The basic philosophy behind the framework proposed in this document is 
   to minimize the latency bounds first by taking advantage of the work 
   conserving schedulers with regulators or stateless fair queuing schedulers, 
   and then minimize the jitter bounds by adjusting 
   the packet inter-departure times to reproduce the inter-arrival times, 
   at the boundary of a network. 
   We argue that this is simpler than trying to minimize the latency and 
   the jitter at the same time. 
   The direct benefit of such simplicity is its scalability.
   </t>
   <t>
   For the first problem of guaranteeing latency bound alone, 
   the IEEE asynchronous traffic shaping (ATS)
   <xref target="IEEE802.1Qcr"/>, 
   the flow-aggregate interleaved regulators (FAIR) 
   <xref target="FAIR"/><xref target="Y.3113"/> frameworks, 
   the port-based flow aggregate regulators (PFAR) <xref target="ADN"/>,  
   and the work-conserving stateless core fair queuing (C-SCORE)
   are proposed as solutions.
    The key component of the ATS and the FAIR frameworks is 
    the interleaved regulator (IR), which is described in 
    <xref target="RFC9320"/>.
    The IR has a single queue for all flows of the same class 
    from the same input port. 
    The packet at the head of the queue (HOQ) is examined to determine 
    whether it is eligible to exit the regulator. 
    To make this decision, the IR is required to maintain individual flow 
    states. 
    The key component of the PFAR framework is the regulators 
    for flow aggregates (FAs) per port per class, which regulate each FA 
    based on the sum of average rates and the sum of maximum bursts of 
    the flows that belong to the FA.
    The key component of the C-SCORE is the packet state that is carried as metadata. 
    The C-SCORE does not need to maintain flow states at core nodes, 
    yet it works as a fair queuing scheduler, which is known to provide the best flow isolation performance. 
    The metadata carried in the packet header is simple and can be updated 
    while the packet waits in the queue or before it joins the queue.

   </t>
   <t>
   For the second problem of guaranteeing jitter bound, 
   it is necessary to assume that the first problem is solved, that is, 
   the network guarantees latency bounds. 
   Furthermore, the network is required to specify the value of 
   the latency bound for a flow.  
   The end systems at the network boundary, or at the source and destination 
   nodes, then can adjust the inter-departure times of packets, 
   such that they are similar to their inter-arrival times. 
   In order to identify the inter-arrival times at the destination node,
   or at the network edge near the destination, the packets are required to 
   specify their arrival times, according to the clock at the source, 
   or the network edge near the source.
   The clocks are not required to be time-synchronized with any other clocks 
   in a network.
   In order to avoid a possible error due to a clock drift
   between a source and a destination, they are recommended to be 
   frequency-synchronized. 
   </t>
   <t>
   In this document, strict time-synchronization among network nodes is avoided.
   It is not easily achievable, especially over a large area network or 
   across multiple DetNet domains. 
   Asynchronous solutions described in this document can provide satisfactory 
   latency bounds with careful design without complex pre-computation, 
   configuration, and hardware support usually necessary for 
   time synchronization.
   </t>
 </section>

 <section title="Terminology">
  <section title="Terms Used in This Document">
   <t>
   </t>
  </section>
  <section title="Abbreviations">
   <t>
   </t>
  </section>
 </section>

 <section title="Conventions Used in This Document">
   <t>
   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
   NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED",
   "MAY", and "OPTIONAL" in this document are to be interpreted as
   described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/>
   when, and only when, they appear in all capitals, as shown here.
   </t>
 </section>

 <section anchor="secLatency" title="Framework for Latency Guarantee">
  <section title="Problem Statement">
   <t>
   In <xref target="secLatency"/>, 
   we assume there are only two classes of traffic.
   The high priority traffic requires latency upper bound guarantee.
    All the other traffic is considered to be the low priority traffic 
    and is completely preempted by the high priority traffic. 
    High priority (HP) traffic is our only focus.
   </t>
   <t>
   It is well understood that the necessary conditions for
    a flow to have a bounded latency inside a network are that:
   <list style="symbols">
     <t>
     a flow entering a network conforms to a prescribed 
     traffic specification (TSpec), including the arrival rate and 
     the maximum burst size, and
     </t>
     <t>
      all the network nodes serve the flow with a service rate 
      that is greater than or equal to the arrival rate.
     </t>
   </list>
   These conditions make the resource reservation and the admission control 
   mandatory. 
   These two functions are considered given and out of scope of this document.
   </t>
   <t>
    Here, the arrival and service rates represent sustainable or 
    average values.
   A short-term discrepancy between these two rates contributes to the burst 
   size increment, which can be accumulated as the flow passes through 
   the downstream nodes. This results in an increase in the latency bound.  
   Therefore, the value of accumulated burst size is a critical performance 
   metric.
   </t>
   <t>
    The queuing and scheduling of a flow play a key role in deciding the 
    accumulated burst size.
   Ideally, the flows can be queued in separate queues and the queues are 
   scheduled according to the flow rates. 
   In this case a flow can be considered isolated.
    With practical fair schedulers, such as the Deficit Round Robin (DRR),
    an isolated flow can still be affected by the other flows, 
    by as much as their maximum packet lengths. 
   </t>
   <t>
   If we adopt a separate queue per flow at an output port, 
   and assume identical flows from all the input ports, 
   then the maximum burst size of a flow out of the port, Bout, 
   is given as the following:
   </t>
   <figure>
     <artwork align="center"><![CDATA[
Bout < Bin + (n-1)L*r/C,
      ]]></artwork>
   </figure>
   <t>
   where Bout is the outgoing flow's maximum burst size, 
   Bin is the incoming flow's maximum burst size, 
   n is the number of the flows, 
   L is the maximum packet size, 
   r is the average rate of the flow, 
   and C is the link capacity.  
   This approach was taken in the integrated services (IntServ) framework
   <xref target="RFC2212"/>.
   </t>
   <t>
   The separate queues in the aforementioned case can be too many to be
   handled in real time, especially at the core of large-scale networks.
   The common practice therefore is to put all the HP flows in a single
   queue, and serve them with higher priority than best effort traffic.
    It is also well known that, with resource reservation, a proper scheduling 
    scheme such as strict priority (SP) scheduling can guarantee service rates 
    larger than the arrival rates, so the latency can still be
    guaranteed.  With such a single aggregate queue, however, the flows are not
    considered isolated.  In this case a flow's burst size in a
   node can be increased proportionally to the sum of maximum burst
   sizes of the other flows in the queue.  That is,

   </t>
   <figure>
     <artwork align="center"><![CDATA[
Bout < Bin + (n-1)Bin*r/C.
      ]]></artwork>
   </figure>
   <t>
   The second term on the right-hand side 
   represents the amount of increased maximum burst.  
   It is dominated by 
   the term (n-1)Bin, which is the maximum total burst from 
   the other flows.
   </t>
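   <t>
   The contrast between the two burst-growth recurrences above can be
   illustrated with a short, non-normative Python sketch. The parameter
   values (byte units, 10 hops, 100 identical flows) are illustrative
   assumptions, not taken from this document.
   </t>
   <figure>
     <artwork><![CDATA[
```python
# Illustrative sketch: iterate the two burst-growth recurrences over a
# chain of hops, assuming n identical flows at every node.
def burst_growth(hops, n, L, r, C, B0, per_flow_queues):
    """Return the list of per-hop burst bounds B_0 .. B_hops."""
    bounds = [B0]
    b = B0
    for _ in range(hops):
        if per_flow_queues:
            # Separate queue per flow: interference limited to packet lengths.
            b = b + (n - 1) * L * r / C
        else:
            # Single aggregate FIFO: interference grows with accumulated bursts.
            b = b + (n - 1) * b * r / C
        bounds.append(b)
    return bounds

# Assumed example: 10 hops, 100 flows of 1 Mbit/s on a 1 Gbit/s link,
# 1500-byte max packets, 3000-byte initial burst.
fq = burst_growth(10, 100, 1500.0, 1e6, 1e9, 3000.0, True)
fifo = burst_growth(10, 100, 1500.0, 1e6, 1e9, 3000.0, False)
```
     ]]></artwork>
   </figure>
   <t>
   With per-flow queues the burst grows linearly per hop, while with a
   single aggregate FIFO it grows geometrically.
   </t>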
   <t>
    Moreover, this increased burst affects the other flows' burst sizes 
    at the next node, and this feedforward effect can continue indefinitely 
    when a cycle is formed in a network.
    This phenomenon is called a cyclic dependency of a network. It is argued
    that the burst accumulation can grow without bound,
    in which case the latency is no longer guaranteed.
   </t>
   <t>
   As such, a flow is required to be isolated to a certain level, from
   the other flows' bursts, such that its burst accumulations are kept
   within a necessary value.  By doing so, the other flows are also
    isolated.  Regulators and fair queuing schedulers are 
    described as solutions in this document for such isolation.  They can
    decrease the accumulated burst to a desirable level and can isolate 
    flows from one another.  In the case of regulators, however, if the
    regulation needs a separate queue per flow, then scalability
    would be harmed just as in the ideal IntServ case.  In this document, 
    solutions with the IR or with regulation of flow aggregates are
    described.
</t>
   <t>
Meanwhile, the fair queuing (FQ) technique limits interference 
between flows to the degree of maximum packet size. 
Packetized generalized processor sharing (PGPS) and weighted fair queuing (WFQ) 
are representative examples of this technique <xref target="PAREKH"/>.  
In this technique, the key is to record the service status of the flow 
in real time to provide exactly the assigned amount of service. 
The worst-case delay at each node with FQ is proportional to the maximum packet length 
divided by the service rate.  However, the complexity of managing 
and recording a large amount of per-flow state information is a problem, 
so FQ is not usually applied in practice.
</t>
   <t>
To solve this problem, the entrance node of a flow can create state information 
for each packet, including the finish time (FT) and other necessary information, 
and record it in the packet.  Subsequent nodes infer the exact state of 
the packet based on these records, without managing flow state information.  
This method provides a scalable solution that enables isolation between flows 
by modifying the FT based on local and initial information as needed.  
In this document, the method of such stateless FQ implementation in core nodes is described. 
   </t>
  </section>
  <section title="Asynchronous Traffic Shaping (ATS)">
   <t>
   The first solution in this document for latency guarantee is the IEEE
TSN TG's ATS technology.  Essentially it is a combined effort of the
flow aggregation per node per input/output ports pair per class, and
the interleaved regulator per flow aggregate (FA).  The IR examines
the HOQ, identifies the flow the packet belongs to, and transfers the
packet only when it is eligible according to the TSpec, including the maximum burst size 
and arrival rate of the flow.  With the flows regulated according to their TSpecs, 
a flow's burst size in a node can be increased proportionally to the sum of the
initial maximum burst sizes of the other flows in the queue, denoted by B.    
</t>
   <figure>
     <artwork align="center"><![CDATA[
Bout < B + (n-1)B*r/C.
      ]]></artwork>
   </figure>
   <t>
   The initial maximum burst size refers to the maximum burst size specified in TSpec.
   </t>
   <t>
    This solution needs only one queue per FA, 
    but suffers from having to maintain each individual flow's state.
   The detailed description on ATS can be found in 
   <xref target="IEEE802.1Qcr"/>.
   </t>
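   <t>
   A minimal, non-normative sketch of the IR behavior described above: one
   shared FIFO per input port and class, per-flow token-bucket states, and
   only the HOQ packet tested for eligibility. The class and method names
   are invented for illustration and do not reflect the normative IEEE
   802.1Qcr algorithm.
   </t>
   <figure>
     <artwork><![CDATA[
```python
import collections

class InterleavedRegulator:
    """Illustrative IR sketch: shared FIFO, per-flow token buckets
    (rate r in bytes/s, burst b in bytes); only the HOQ is examined."""

    def __init__(self):
        self.queue = collections.deque()  # shared FIFO of (flow_id, length)
        self.state = {}                   # per-flow regulator state

    def add_flow(self, flow_id, rate, burst):
        self.state[flow_id] = {"tokens": float(burst), "last": 0.0,
                               "r": float(rate), "b": float(burst)}

    def enqueue(self, flow_id, length):
        self.queue.append((flow_id, length))

    def eligibility_time(self, now):
        """Earliest time the HOQ packet may exit the regulator, or None."""
        if not self.queue:
            return None
        flow_id, length = self.queue[0]
        s = self.state[flow_id]
        tokens = min(s["b"], s["tokens"] + (now - s["last"]) * s["r"])
        if tokens >= length:
            return now
        return now + (length - tokens) / s["r"]

    def dequeue(self, now):
        """Release the HOQ packet and update its flow's state."""
        flow_id, length = self.queue.popleft()
        s = self.state[flow_id]
        s["tokens"] = min(s["b"],
                          s["tokens"] + (now - s["last"]) * s["r"]) - length
        s["last"] = now
        return flow_id, length
```
     ]]></artwork>
   </figure>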
  </section>
  <section title="Flow Aggregate Interleaved Regulators (FAIR)">
   <section title="Overview of the FAIR">
   <t>
   In the FAIR framework, the network can be divided into several 
   aggregation domains (ADs).  HP flows of the same path within an AD 
   are aggregated into an FA.
   IRs per FA are implemented at the boundaries of the ADs. 
    An AD can consist of an arbitrary number of nodes.
   The FA can be further subdivided based on the flow requirements and 
   characteristics. 
   For example, only video flows of the same path are
   aggregated into a single FA. 
   </t>
   <t>
   <xref target="figFAIR"/> shows an example architecture of 
   the FAIR framework.
   The IRs at the AD boundaries suppress the burst accumulations across the ADs
   with the latency upper bounds intact as they do in IEEE TSN ATS, 
   if the incoming flows are all properly regulated, 
   and the AD guarantees the FIFO property to all the packets in the FA 
   <xref target="LEBOUDEC"/>. 
   It is sufficient to put every FA into a single FIFO queue in a node,
   in order to maintain the FIFO property within an AD. 
    However, in this case, if cycles are formed, 
    bursts inside an AD can accumulate indefinitely.
    If the topology does not include a cycle and the latency bound requirement 
    is not stringent, 
    then the FIFO queue and the SP scheduler would be allowable.
    Otherwise, the FAs are recommended to be treated with separate queues 
    and fair-queuing schedulers for flow isolation. 
   </t>
   <figure anchor="figFAIR" title="FAIR Framework">
     <artwork align="center"><![CDATA[
         .~~.    +---+    .~~,        +---+        .~~.
+---+   [    ]   |IR |   [    ]       |IR |       [    ]   +----+
|Src|->[  AD  ]->|per|->[  AD  ]-> ...|per|... ->[  AD  ]->|Dest|
+---+   [    ]   |FA |   [    ]       |FA |       [    ]   +----+
         '~~'    +---+    '~~'        +---+        '~~'
      ]]></artwork>
   </figure>
   </section>
   <section title="The performance of the FAIR">
   <t>
   The FAIR guarantees an end-to-end delay bound with reduced complexity 
   compared to the traditional flow-based approach. 
   Numerical analysis shows that, with a careful selection of AD size,
   the FAIR with DRR schedulers yields smaller latency bounds 
   than both the IntServ and the ATS <xref target="FAIR"/>.
   </t>
   <t>
   The ATS can be considered as a special case of the FAIR with the FIFO  
   schedulers, where all the ADs encompass only a single hop. 
   The IntServ can also be considered as an extreme case 
   of the FAIR with fair schedulers and queues per FA, 
   with an AD corresponding to an entire network; therefore, 
   regulators are unnecessary. 
   </t>
   </section>
  </section>
  <section title="Port-based Flow Aggregate Regulators (PFAR)">
   <t>
    The IR in the ATS and the FAIR suffers from two major complex
    tasks: the flow state maintenance and the HOQ lookup to determine the
    flow to which the packet belongs.  Both tasks involve real-time
    packet processing and queue management.  As the number of flows
    increases, the IR operation may become as burdensome as per-flow
    regulators. 
   Without maintaining individual flow states, however, the flows can be 
   isolated to a certain level, as is described in this section.  
   </t>
   <t>
   Let us call the set of flows sharing the same input/output ports pair the port-based FA (PFA).  
   The only aggregation criteria for a PFA are the ports and the class.  
   The port-based flow aggregate regulators (PFAR) framework puts a regulator 
   for each PFA in an output port module, just before the class-based 
   queuing/scheduling system of the output port module.  
   The PFAR framework sees a PFA as a single flow with the "PFA-Tspec", 
   {the sum of the initial maximum bursts; and the sum of the initial arrival rates} 
   of the flows that are the elements of the PFA; and regulates the PFA to meet its PFA-Tspec.
   </t>
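    <t>
    The PFA-TSpec construction and the aggregate regulation described above
    can be sketched as follows. This is an illustrative Python sketch with
    assumed byte and bytes-per-second units, not a normative PFAR
    specification; the names are invented for illustration.
    </t>
    <figure>
      <artwork><![CDATA[
```python
# Illustrative sketch: a PFAR treats the whole port-pair aggregate as one
# flow whose PFA-TSpec is the element-wise sum of the members' TSpecs.
def pfa_tspec(flows):
    """flows: iterable of (rate, burst) TSpecs of the member flows."""
    rate = sum(r for r, _ in flows)
    burst = sum(b for _, b in flows)
    return rate, burst

class PFAR:
    """Single token bucket regulating the aggregate to its PFA-TSpec."""

    def __init__(self, rate, burst):
        self.r, self.b = float(rate), float(burst)
        self.tokens, self.last = float(burst), 0.0

    def release_time(self, now, length):
        """Earliest time a 'length'-byte packet of the PFA may be released."""
        self.tokens = min(self.b, self.tokens + (now - self.last) * self.r)
        self.last = now
        if self.tokens >= length:
            self.tokens -= length
            return now
        # Tokens accrued while waiting exactly cover the deficit.
        wait = (length - self.tokens) / self.r
        self.tokens = 0.0
        self.last = now + wait
        return now + wait
```
      ]]></artwork>
    </figure>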
   <t>
   If we assume identical flows in a network with ideal symmetrical topology, 
    then the maximum burst size of an arbitrary set of flows within a PFA, Bout, 
   is given as the following:
   </t>
   <figure>
     <artwork align="center"><![CDATA[
Bout < Bin + (p-1)B*r/C,
      ]]></artwork>
   </figure>
   <t>
    where Bin is the sum of maximum burst sizes of the flows within the incoming PFA,
    B is the sum of initial maximum burst sizes of the flows within the incoming PFA, 
    and p is the number of the ports in the node.
   </t>
   <t>
   The PFARs can be placed at the output port of a node 
   before the output SP scheduler.
   The architecture is similar to that suggested in the IEEE ATS, 
   except that in the ATS, the IRs are placed instead of the PFARs. 
   </t>
   <t>
    Note that Bout is affected mostly by B; in other words, the burst size 
    out of a node is affected mostly by the initial maximum burst
    sizes of the other PFAs from different input ports of the node.  This
    property prevents Bout from increasing exponentially even in
    the presence of cyclic dependencies.  The regulator in PFAR increases 
    the worst-case latency by as much as (Bin - B)/r, while the IR does not.
   </t>
   <t>
   With the PFAR, the HOQ flow identification process is unnecessary, 
   and only the PFAs' states, instead of individual flows' states, 
   must be maintained at a node. 
    In this respect, the processing complexity of the PFAR is reduced 
    compared to the IR of the ATS or the FAIR. 
   </t>
   <t>
   In a recent study <xref target="ADN"/>, it was also shown, 
   through a numerical analysis with symmetrical networks with cycles, 
   that PFAR, when implemented at every node, 
   can achieve comparable latency bounds to the IEEE ATS technique. 
   </t>
   <t>
   The ATS, the FAIR, and the PFAR frameworks maintain regulators per FA. 
   The FAs in these frameworks are composed of the flows sharing the same 
   ingress/egress ports of an AD. 
   The ADs can encompass a single hop or multiple hops. 
   The regulators can be the IR or the aggregate regulator.
   There can be other combinations of AD and regulator type,
   which could be further investigated and compared to 
   the frameworks introduced in this document. 
   </t>
  </section>
  <section title="Work-conserving stateless core fair queuing (C-SCORE)">
      <section title="Framework">
   <t>
   The generalized processor sharing (GPS) <xref target="PAREKH"/>, 
   the weighted fair queuing (WFQ), the virtual clock (VC), 
    and other similar schedulers utilize the concept of the finish time (FT), 
    which is the service order assigned to a packet.  
    The packet with the minimum FT in a buffer is served first.  
    We collectively refer to these schemes as fair queuing (FQ).
   </t>
   <t>
   As an example of the FQ, the VC scheduler <xref target="ZHANG"/> defines 
   the FT to be
     <figure>
     <artwork align="center"><![CDATA[
F(p) = max{F(p-1), A(p)} + L(p)/r,          (1)
      ]]></artwork>
   </figure>
     where (p-1) and p are consecutive packets of the flow under observation, 
    A(p) is the arrival time of p, L(p) is the length of p, and 
    r is the flow service rate. The flow index is omitted.
   </t>
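    <t>
    Equation (1) translates directly into a one-line helper 
    (an illustrative sketch, not normative code).
    </t>
    <figure>
      <artwork><![CDATA[
```python
# Illustrative sketch of the virtual clock finish-time update (1).
def vc_finish_time(prev_ft, arrival, length, rate):
    """F(p) = max{F(p-1), A(p)} + L(p)/r."""
    return max(prev_ft, arrival) + length / rate

# Two back-to-back 100-byte packets of a flow served at 100 bytes/s
# (assumed example values): their FTs stay exactly L/r apart.
f1 = vc_finish_time(0.0, 0.0, 100, 100.0)
f2 = vc_finish_time(f1, 0.0, 100, 100.0)
```
      ]]></artwork>
    </figure>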
   <t>
   The key idea of the FQ is to calculate the service finish times of packets 
   in an imaginary ideal fluid service model and use them as the service 
   order in the real packet-based scheduler.
   </t>
   <t>
   While having the excellent flow isolation property, 
   the FQ needs to maintain the flow state, F(p-1). 
   For every arriving packet, the flow it belongs to has to be identified 
   and its previous packet's FT should be extracted.  
   As the packet departs, the flow state, F(p), has to be updated as well.
   </t>
   <t>
   We consider a framework for constructing FTs for packets at core nodes without flow states.  
   In a core node, the following conditions on FTs SHOULD be met.
   </t>
   <t>
   <list style="hanging" hangIndent="5">
    <t hangText="C1)">The 'fair distance' of consecutive 
                      packets of a flow generated at the entrance node has to be kept in the core nodes.  
                      That is: Fh(p) >= Fh(p-1) + L(p)/r, 
                      where Fh(p) is the F(p) at core node h.  </t>
    <t hangText="C2)">The order of FTs and the actual service order, 
                      within a flow, have to be kept.  
                      That is: Fh(p) > Fh(p-1) and Ch(p) > Ch(p-1), 
                      where Ch(p) is the actual service completion time of 
                      packet p at node h. </t>
    <t hangText="C3)">The time lapse at each hop has to be reflected.  
                      That is: Fh(p) >= F(h-1)(p), where F(h-1)(p) is 
                      the FT of p at node h-1, the upstream node of h. </t>
   </list>
   </t>
   <t>
   In essence, (1) has to be approximated in core nodes.  
   There can be many possible solutions to meet these conditions.  
   We describe a generic framework with requirements for constructing FTs 
   in core nodes that meet the conditions, without flow state, in the following. 
   </t>
   <t>
   Definition: An active period for a flow is a maximal interval of time during a node busy period, 
   over which the FT of the most recently arrived packet of the flow is greater than the virtual time 
   (equivalently the system potential). Any other period is an inactive period for the flow.
   </t>
   <t>
   Requirement 1: In the entrance node, it is REQUIRED to obtain the FTs with the following equation.  
   0 denotes the entrance node of the flow under observation. 
   </t>
   <figure>
     <artwork align="center"><![CDATA[
F0(p) = max{F0(p-1), A0(p)}+L(p)/r.                 
      ]]></artwork>
   </figure>
   <t>
   Note that if the FTs are constructed according to the above equation, the fair distance of consecutive packets is maintained.
   </t>
   <t>
   Requirement 2: In a core node h, it is REQUIRED to increase the FT of a packet 
   by an amount, d(h-1)(p), that depends on the previous node and the packet.  
   </t>

   <figure>
     <artwork align="center"><![CDATA[
Fh(p) = F(h-1)(p) + d(h-1)(p).                          
      ]]></artwork>
   </figure>
   <t>
   Requirement 3: It is REQUIRED that dh(p) is a non-decreasing function of p, 
   within a flow active period. 
   </t>
    <t>
    Requirements 1, 2, and 3 specify how to construct the FT in a network. 
    By these requirements, Conditions C1), C2), and C3) are met. 
    The following Requirements 4 and 5 specify how the FT is used for scheduling.
   </t>
   <t>
   Requirement 4: It is REQUIRED that a node provides service whenever there is a packet.
   </t>
   <t>
   Requirement 5: It is REQUIRED that all packets waiting for service in a node are served in the ascending order of their FTs. 
   </t>
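    <t>
    Requirements 2, 4, and 5 can be sketched together as a core node that 
    adds a delay factor to each arriving packet's FT (simplified here to a 
    per-node constant) and serves backlogged packets in ascending FT order. 
    The class name and structure are illustrative assumptions, not part of 
    the framework definition.
    </t>
    <figure>
      <artwork><![CDATA[
```python
import heapq

class CScoreCoreNode:
    """Illustrative sketch of a C-SCORE core node: no per-flow state,
    only the FTs carried in packet metadata and a per-node delay factor."""

    def __init__(self, delay_factor):
        self.d = delay_factor  # per-node delay factor (simplified d)
        self.heap = []         # min-heap keyed on the updated FT
        self.seq = 0           # FIFO tie-breaker for equal FTs

    def receive(self, ft_prev_hop, packet):
        ft = ft_prev_hop + self.d  # Requirement 2: Fh(p) = F(h-1)(p) + d
        heapq.heappush(self.heap, (ft, self.seq, packet))
        self.seq += 1
        return ft

    def serve(self):
        """Requirements 4 and 5: serve whenever backlogged,
        lowest FT first."""
        if not self.heap:
            return None
        ft, _, packet = heapq.heappop(self.heap)
        return ft, packet
```
      ]]></artwork>
    </figure>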

   <t>
  We call this framework the work conserving stateless core fair queuing (C-SCORE), 
  which can be compared to the existing non-work conserving scheme 
   <xref target="STOICA"/>.
   </t>
   </section>
   <section title="Selection of delay factor for latency guarantee">
   <t>
    For C-SCORE to guarantee the E2E latency bound, dh(p) is RECOMMENDED to be defined as follows. 
   </t>
   <figure>
     <artwork align="center"><![CDATA[
dh(p) = SLh.        (2)                         
      ]]></artwork>
   </figure>
   <t>
   The service latency of the flow at node h, denoted by SLh, is given as follows.
      <figure>
     <artwork align="center"><![CDATA[
SLh = Lh/Rh + L/r,       (3)                 
      ]]></artwork>
   </figure>
    where Lh is the maximum packet length in node h over all the flows transmitted from the output port under observation, 
    Rh is the link capacity of node h, and L is the maximum packet length of the flow.  
    The service latency was first introduced in the latency-rate (LR) server model <xref target="STILIADIS-LRS"/>, 
    and can be interpreted as the worst-case delay from the arrival of the first packet of a new flow until its service completion.
   </t>
   <t>
   Consider the worst case: Right before a new flow's first packet arrives at a node, 
   the transmission of another packet with length Lh has just started. This packet takes the transmission delay of Lh/Rh.
   After the transmission of the packet with Lh, the flow under observation could take only the allocated share of the link,
   and the service of the packet under observation would be completed after L/r.
   Therefore, the packet has to wait, in the worst case, Lh/Rh + L/r.
   </t>
   <t>
   The reason to add the service latency to F(h-1)(p) to get Fh(p) is 
   to meet Condition C3) in a most conservative way without being too excessive.
    Intuitively, when every packet's FT is updated with the flow's own worst-case delay,
    a packet that has experienced the worst-case delay is favored.
    Thus its worst-case delay will not get any worse,
    while the delay differences among flows are reflected.
   </t>
   <t>
   When dh(p) is decided by (2), then it can be proved that
   <figure>
   <artwork align="center"><![CDATA[
   Dh(p) <= (B-L)/r + SL0 + SL1 + ... + SLh,     (4)
      ]]></artwork>
   </figure>
    where Dh(p) is the latency experienced by p from its arrival at node 0 
    to its departure from node h, and B, L, and r are the maximum burst size, maximum packet length, 
    and allocated service rate, respectively, of the flow under observation to which p belongs <xref target="KAUR"/>. 
   </t>
   <t>
    Note that the latency bound in (4) is the same as that of a network in which every node runs a stateful FQ scheduler, 
    including VC. The parameters in the latency bound are all intrinsic to the flow, except Lh/Rh.
   </t>
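   <t>
   As an illustration of the bound in (4), the following sketch accumulates the 
   per-node service latencies along a path. The parameter values are hypothetical.
   </t>
   <figure>
   <artwork align="left"><![CDATA[
```python
def e2e_latency_bound(B, L, r, per_node_service_latencies):
    """E2E latency bound of equation (4):
       Dh(p) <= (B - L)/r + SL0 + SL1 + ... + SLh.

    B: max burst size (bits) of the flow
    L: max packet length (bits) of the flow
    r: allocated service rate (bits/s)
    per_node_service_latencies: [SL0, SL1, ..., SLh] in seconds
    """
    return (B - L) / r + sum(per_node_service_latencies)

# Hypothetical example: a 3-hop path of identical nodes, each with
# SLh = 412 us as computed from equation (3).
sls = [4.12e-4] * 3
bound = e2e_latency_bound(B=12000, L=4000, r=10e6,
                          per_node_service_latencies=sls)
```
]]></artwork>
   </figure>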

   <t>
    On the other hand, dh(p) need not be a function of p; it may depend only on the node, in which case it can be denoted dh. 
    For example, dh can be the minimum or maximum observed latency at node h.
   </t>
   </section>
      <section title="Network configuration for latency guarantee">
   <t>
  A source requests an E2E latency bound for a flow, specifying its arrival rate, maximum packet length, and maximum burst size. 
  If the E2E latency bound can be met, the network admits the flow. 
  The network reserves the links in the path such that the sum of allocated service rates to the flows does not exceed the link capacity. 
  </t>
  <t>
In the process of admission decision, the service rate allocated to a flow can be decided according to the requested latency bound of the flow. 
The detailed operational procedure for such admission and reservation is out of scope of this document.
   </t>
   
   </section>
   <section title="Role of entrance node for generation and update of FT">

   <t>
    It is assumed that the packet length information is written in the packet header. 
    The entrance node maintains per-flow state, e.g., the FT of packet (p-1) at node 0 (F0(p-1)), 
    the maximum packet length of the flow (L), and the service rate allocated to the flow (r). 
    It operates a clock to identify the arrival time of a packet. 
    It collects link information such as the maximum packet length over all flows (L0) and the link capacity (R0) to calculate the delay factor at node 0. 
	</t>
	<t>
	Upon receiving or generating packet p, the entrance node obtains the FT of packet p at node 0 (F0(p)) 
	according to the VC algorithm and uses it to determine the service order in the entrance node. 
	If the queue is not empty, it puts p into a priority queue in which packets are sorted by their FTs. 
	It also obtains the FT of packet p at node 1 (F1(p)) before or while p is in the queue. 
	It writes F1(p), L, and r into the packet as metadata for use at the next node, node 1. 
	Finally, it updates the flow state from F0(p-1) to F0(p).  
</t>
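<t>
The entrance-node procedure above can be sketched as follows. This is an 
illustrative reading of the VC-based FT generation, not a normative algorithm; 
the exact delay factor added to obtain F1(p) is given by equation (2) of this document.
</t>
<figure>
<artwork align="left"><![CDATA[
```python
def entrance_ft(F0_prev, A0_p, L_p, r):
    """F0(p) = max(F0(p-1), A0(p)) + L(p)/r, per the VC algorithm.

    F0_prev: F0(p-1), FT of the previous packet of the flow at node 0
    A0_p:    arrival (or generation) time of packet p
    L_p:     length of packet p (bits)
    r:       service rate (bits/s) allocated to the flow
    """
    return max(F0_prev, A0_p) + L_p / r

# The entrance node then obtains F1(p) by adding the delay factor d0
# of node 0 to F0(p), and writes F1(p), L, and r into the packet.
def next_node_ft(F_curr, d):
    return F_curr + d
```
]]></artwork>
</figure>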
    </section>
	   <section title="Role of core node for update of FT">

   <t>
   A core node h collects link information such as Lh and Rh. 
   As at an entrance node, Lh is a rather static value but can still change over time. 
   Upon receiving packet p, the node retrieves the metadata Fh(p), L, and r, and uses Fh(p) as the FT of the packet. 
   It puts p into a priority queue. 
   It obtains F(h+1)(p) by adding the delay factor to Fh(p), and updates the packet metadata from Fh(p) to F(h+1)(p) before or while p is in the queue. 
	</t>
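<t>
The core node's update step can be sketched similarly. Here the delay factor 
is taken to be the service latency of equation (3), which is one choice 
consistent with the discussion above; the exact value follows equation (2).
</t>
<figure>
<artwork align="left"><![CDATA[
```python
def core_ft_update(Fh_p, L_h, R_h, L, r):
    """F(h+1)(p) = Fh(p) + dh, with the delay factor dh taken here
    as the service latency Lh/Rh + L/r of equation (3) (a sketch).

    Fh_p: FT of packet p at node h, read from the packet metadata
    L_h, R_h: link parameters collected by node h
    L, r: flow parameters carried in the packet metadata
    """
    return Fh_p + L_h / R_h + L / r
```
]]></artwork>
</figure>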
    </section>
		   <section title="Mitigation of the complexity of entrance node">

   <t>
   Flow states still have to be maintained in entrance nodes. 
   When the number of flows is large, maintaining flow states can be burdensome. 
   However, this burden can be mitigated as follows.
   </t>
   <t>
The notion of an entrance node covers various edge devices, including the source itself. 
The FT of a packet is decided based on the maximum of F0(p-1) and A0(p), plus L(p)/r. 
These parameters are flow specific; no other external parameters are needed. 
The arrival time of p to the network, A0(p), can be defined as the generation time of p at the source. 
Then F0(p) is determined at packet generation time and can be recorded in the packet. 
In other words, the entrance node functionality can reside in the source itself. 
</t>
<t>
Therefore, we can significantly alleviate the complexity of the proposed framework. The framework is scalable and can be applied to any network.
</t>
    </section>
			   <section title="Compensation of time difference between nodes">

   <t>
   We have assumed zero propagation delays between nodes so far. 
   In reality, there are time differences between nodes, including those due to propagation delays and clock mismatches. 
   This time difference can be defined as the difference between the service completion time measured at the upstream node and the arrival time measured at the current node. 
   </t>
   <t>
	FT does not need to be precise; it is used only to indicate the packet service order. 
	Therefore, if we can assume that the propagation delay is constant and the clocks do not drift, 
	the time difference is constant for all the packets in a flow. 
	In this case, the delay factor in (2) can be modified by adding the time difference value. 
	The E2E latency bound in (4) then increases by the sum of the propagation delays from node 0 to node h.

   </t>
<t>
Moreover, the time difference needs to be updated only occasionally. 
Through time difference compensation, the nodes become aware of the global clock discrepancies 
via a periodic quantification of the local clock discrepancies between adjacent nodes. 
Link by link, this produces awareness of the discrepancies among the clocks of all the nodes, 
which is then included in the computation of FTs at core nodes. 
It is not synchronization in the strict sense, because it involves only the quantification of the clock differences, not their re-alignment.
</t>
<t>
Even with the clock differences and propagation delays, the C-SCORE framework does not need global time synchronization. 
</t>
    </section>
	
 </section>
 </section>

 <section anchor="secJitter" 
          title="Framework for Jitter Guarantee">
  <section title="Problem statement">
   <t>   
    The problem considered is that of guaranteeing jitter bounds in networks 
    of arbitrary size and topology with random, dynamic input traffic. 
   </t>
   <t>
   There are several possible solutions to guarantee jitter bounds 
   in packet networks, such as IEEE TSN's cyclic queuing and forwarding (CQF) 
   <xref target="IEEE802.1Qch"/>, 
   its asynchronous variations 
   <xref target="I-D.yizhou-detnet-ipv6-options-for-cqf-variant"/>,
   and the latency-based forwarding (LBF) <xref target="LBF"/>.
   </t>
   <t>
    The CQF requires time synchronization across every node in the network, 
    including the source. It does not scale to a large network with 
    significant propagation delays between the nodes. 
    The asynchronous CQF variants are scalable, but they may not satisfy 
    applications' jitter requirements. 
    This is because their jitter bounds cannot be controlled as desired; 
    they are determined solely by the cycle time, which must be large enough 
    to accommodate all the traffic to be forwarded.
   </t>
   <t>
    Systems with slotted operation, such as the CQF and its variants, 
    turn the problem of packet scheduling into one of scheduling flows 
    to fit into slots. 
    The difficulty of such slot scheduling is a significant drawback 
    in large-scale dynamic networks with irregular traffic generation and 
    various propagation delays.
   </t>
   <t>
    The LBF is a framework for deciding forwarding actions based on flow 
    and packet status, such as the delay budget remaining for a packet at a node. 
    The LBF does not specify the actions to take according to the status. 
    As one possible action, it suggests slowing down or speeding up a packet 
    by changing its service order, i.e., by pushing the packet into any 
    desired position of a push-in first-out queue. 
    In essence, by carrying latency budget information in every packet, 
    the LBF is expected to keep the latency and jitter within desired bounds.
    The processing latency of the LBF includes the times 
    1) to look up the latency budget information in every packet header, 
    2) to decide the queue position of the packet, 
    3) to modify the queue's linked list, and 
    4) to update the budget information in the packet upon transmission. 
    This processing latency, however, can affect scalability, especially 
    in high-speed core networks.
   </t>
   <t>
    The ATS, the FAIR, and the PFAR utilize regulation functions 
    to proactively prevent possible burst accumulation at downstream 
    nodes. It is not clear whether the LBF can take such preventive action. 
    If it can, the LBF can also act as a regulator and yield a similar latency bound.
   </t>
  </section>

  <section title="Buffered network (BN)">
   <t>
   The BN framework in this document for jitter bound guarantee is composed of 
   <list style="symbols">
     <t>
     a network that guarantees latency upper bounds;
     </t>
      <t>
      a timestamper for packets, whose clock is not necessarily 
      synchronized with those of the other nodes, residing anywhere 
      between and including the source and the network ingress interface; and
      </t>
      <t>
      a buffer that can hold packets for predetermined intervals, 
      residing anywhere between and including the network egress 
      interface and the destination. 
      </t>
   </list>
   </t>
   <t>
   <xref target="figBN"/> depicts the overall architecture of the BN framework 
   for jitter-bound guarantees <xref target="BN"/>. 
   Only a single flow is depicted between the source and destination in 
   <xref target="figBN"/>. 
    The arrival, departure, and buffer-out times of the nth 
    packet of a flow are denoted by an, bn, and cn, respectively. 
    The end-to-end (E2E) latency and the E2E buffered latency are defined 
    as (bn-an) and (cn-an), respectively.
   </t>
   <figure anchor="figBN" 
           title="Buffered Network (BN) Framework for Jitter Guarantee">
     <artwork align="center"><![CDATA[
                            +--------------+
+-----+an +-------------+   | Network with |bn +--------+cn +-------+
| Src |-->| Timestamper |-->|   latency    |-->| Buffer |-->| Dest. |
+-----+   +-------------+   |  guarantee   |   +--------+   +-------+
                            +--------------+    
        |<--------------- E2E latency ------>|
        |<--------------- E2E buffered latency ---------->|
      ]]></artwork>
   </figure>
   <t>
    The buffer supports as many flows as are destined for the destination. 
    The destination shown in <xref target="figBN"/> can be an end station 
    or another deterministic network. 
    The buffer holds the packets of a flow according to predefined intervals. 
    The decision of the buffering intervals uses the timestamp value 
    within each packet.
   </t>
   <t>
    The network between the timestamper and the buffer can be of 
    arbitrary size, and the input traffic can be dynamic. 
    The network is required to guarantee and identify 
    the E2E latency upper bounds of the flows. 
    The network is also required to make the buffer aware of the E2E latency 
    upper bounds of the flows it has to process. 
    It is recommended that the network provide the E2E latency lower bound 
    information as well. 
    The lower bound may come from the transmission and 
    propagation delays within the network.
   </t>
   <t>
    The timestamper marks each packet with its arrival time. 
    The timestamping function can use the real-time transport protocol (RTP) 
    over the user datagram protocol (UDP) 
    or the transmission control protocol (TCP). 
    Either the source or the network ingress interface can stamp the packet. 
    When the source stamps, the timestamp value is the packet's 
    departure time from the source, which is only a propagation delay away 
    from the packet's arrival time at the network. 
    The source and destination do not need to share a synchronized clock. 
    All that is needed is the differences between the timestamps, that is, 
    the information about the inter-arrival times.
   </t>
  </section>

  <section anchor="secPropBN" title="Properties of the BN">
   <t>
   Let the arrival time of the nth packet of a flow be an. 
   Similarly, let bn be the departure time from the network of the nth packet. 
   Then, a1 and b1 are the arrival and departure times of the first packet 
   of the flow, respectively.  
   The first packet of a flow is defined as the first packet generated 
   by the source, among all the packets that belong to the flow. 
   Further, let cn be the buffer-out time of the nth packet of the flow.
   Let us define m as the jitter control parameter, 
   which will be described later in detail. 
   </t>
   <t>
    Since buffers may lack cut-through capability, 
    the processing delay within a buffer has to be taken into account. 
    Let gn be the processing delay within the buffer of the nth packet of 
    the flow.  
    The gn includes the time to look up the timestamp and 
    to store and forward the packet. 
    However, it does not include any intentional buffer-holding interval. 
    By definition, cn - bn >= gn. 
    Let g = max_n(gn), the maximum processing delay for the flow in the buffer. 
    It is assumed that a buffer can identify the value of g. 
    Let U and W be the latency upper and lower bounds guaranteed to the flow 
    by the network.
    Let m be the jitter control parameter, with W+g &lt;= m.
   </t>
   <t>
   The rules for the buffer-holding interval decision are given as follows:
    <list style="symbols">
     <t> c1=(b1+m-W), </t>                       
     <t> cn=max{(bn+g), (c1+an-a1)}, for n > 1. </t> 
    </list>
   </t>
   <t>
    The second rule governing cn states that a packet should be held 
    in the buffer to make its inter-buffer-out time, (cn-c1), equal to the 
    inter-arrival time, (an-a1). 
    However, when the packet's departure from the network is so late that 
    the inter-buffer-out time would have to exceed the inter-arrival time, 
    the packet is held only for the maximum processing delay in the buffer, 
    that is, cn=bn+g. 
    The buffer does not need to know the exact values of an or a1. 
    It is sufficient to determine the difference between them, 
    which is easily obtained by subtracting the timestamp values of the two 
    packets.
   </t>
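    <t>
    The holding rules above can be sketched as follows. The numeric values 
    are hypothetical; times are in seconds relative to an arbitrary origin.
    </t>
    <figure>
    <artwork align="left"><![CDATA[
```python
def buffer_out_times(a, b, m, W, g):
    """Buffer-out times per the holding rules:
       c1 = b1 + m - W
       cn = max(bn + g, c1 + (an - a1)),  for n > 1
    a: arrival times, b: network-departure times (index 0 = packet 1)
    m: jitter control parameter (W + g <= m)
    W: latency lower bound, g: max buffer processing delay."""
    c = [b[0] + m - W]
    for n in range(1, len(a)):
        c.append(max(b[n] + g, c[0] + (a[n] - a[0])))
    return c

# Hypothetical flow: W = 4, g = 0.1, m = W + g = 4.1.
c = buffer_out_times(a=[0.0, 1.0, 2.0], b=[5.0, 6.5, 6.2],
                     m=4.1, W=4.0, g=0.1)
# c[0] = 5.1; packet 2 left the network late, so c[1] = b2 + g = 6.6;
# packet 3 is held until c[2] = 7.1 to restore the inter-arrival spacing.
```
]]></artwork>
    </figure>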
   <t>
    The following theorems hold <xref target="ADN"/>.
   </t>
   <t>
    Theorem 1 (Upper bound of E2E buffered latency). 
    The latency from the packet arrival to the buffer-out time, (cn-an), 
    is upper bounded by (U-W+m).
   </t>
   <t>
    Theorem 2 (Lower bound of E2E buffered latency). 
    The latency from the packet arrival to the buffer-out time, (cn-an), 
    is lower bounded by m.
   </t>
   <t>
   Theorem 3 (Upper bound of jitter). 
   The jitter is upper bounded by max{0, (U+g-m)}.
   </t>
   <t>
    By setting m=(U+g), we can achieve zero jitter. 
    In this case, the E2E buffered latency bound becomes (2U+g-W), 
    which is roughly twice the E2E latency bound. 
    In contrast, if we set m to its minimum possible value, W+g, 
    then the jitter bound becomes (U-W), which is roughly equal to U, 
    while the E2E buffered latency bound becomes (U+g), 
    which is essentially the same as the E2E latency bound.
   </t>
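    <t>
    The trade-off stated above can be checked numerically with the theorem 
    formulas; the bound values below are hypothetical.
    </t>
    <figure>
    <artwork align="left"><![CDATA[
```python
def jitter_upper_bound(U, g, m):
    """Theorem 3: jitter <= max(0, U + g - m)."""
    return max(0.0, U + g - m)

def buffered_latency_upper_bound(U, W, m):
    """Theorem 1: E2E buffered latency <= U - W + m."""
    return U - W + m

# Hypothetical bounds: U = 10 ms, W = 2 ms, g = 0.5 ms.
U, W, g = 10.0, 2.0, 0.5
# m = U + g: zero jitter, buffered latency bound 2U + g - W = 18.5 ms.
# m = W + g: jitter bound U - W = 8 ms, buffered latency bound U + g.
```
]]></artwork>
    </figure>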
   <t>
   The parameter m directly controls the holding interval of the first packet.  
   It plays a critical role in determining the jitter and the buffered latency 
   upper bounds of a flow in the BN framework.  
   The larger the m, the smaller the jitter bound, 
   and the larger the latency bound.  
   With a sufficiently large m, we can guarantee zero jitter, 
   at the cost of an increased latency bound.
   </t>
  </section>


  <section title="Frequency synchronization between the source and the buffer">
   <t>
    Clock drift refers to the phenomenon wherein a clock does not run at exactly 
    the same rate as a reference clock. 
   If we do not frequency-synchronize the clocks of 
   different nodes in a network, clock drift is unavoidable. 
   Consequently, jitter occurs owing to the clock frequency difference 
   or clock drift between the source (timestamper) and the buffer.
   Therefore, it is recommended to frequency-synchronize the source 
   (timestamper) and the buffer. 
   </t>
  </section>

  <section title="Omission of the timestamper">
   <t>
    For isochronous traffic whose inter-arrival times are well-known fixed 
    values, if the network can preserve the FIFO property for such traffic,
    the timestampers can be omitted.
   </t>
   <t>
    Otherwise, if the FIFO property cannot be guaranteed, a sequence number 
    field in the packet header is enough to replace the timestamper.
   </t>
  </section>

  <section title="Mitigation of the increased E2E buffered latency">
   <t>
    The increase in the E2E buffered latency bound under the proposed framework, 
    from U to almost 2U, can be mitigated by one of the added functionalities 
    given as follows. 
   </t>
   <t>
    1) First, one can measure the E2E latency of a flow's first packet exactly 
    and buffer it to make its E2E buffered latency equal to U. 
    Then, by following the rules given in <xref target="secPropBN"/>, 
    every subsequent packet experiences the same E2E buffered latency, 
    which is U, with zero jitter. 
    The exact latency measurement may be performed, for example, by 
    time-synchronizing the source (timestamper) and the buffer. 
    However, how to measure the latency is left for further investigation. 
   </t>
   <t>
    2) Second, one can expedite the first packet's service with special 
    treatment, to make its latency lower than that of the other packets of 
    the flow.  
    If the first packet's latency can be made a small value d,
    then every packet experiences the same buffered latency, d+U, 
    with zero jitter. 
    Considering that the E2E latency bound is calculated from a worst case 
    in which rare events occur simultaneously, however, the first packet's 
    latency is likely to be far less than the bound suggests even without 
    special treatment. 
    Therefore, the special treatment of the first packet may be ineffective 
    in real implementations.
   </t>
  </section>

  <section title="Multi-sources single-destination flows' jitter control">
   <t>
The BN framework can also be used for jitter control among flows from multiple sources 
to a single destination. When a session is composed of more than one source, 
physically or virtually separated, the buffer at the boundary can mitigate the latency variations of packets 
from different sources due to different routes or network treatments. 
Such a scenario may arise in cases such as 
   <list style="empty">
     <t>
     1) a central unit controlling multiple devices for a coordinated 
     execution in smart factories, or 
     </t>
     <t>
     2) multi-user conferencing applications, in which multiple, physically 
     separated devices/users may have difficulty with real-time interaction. 
     </t>
   </list>
   The sources, or the ingress boundary nodes of the network, 
   need to be synchronized with each other in order for the timestamps 
   from separated sources to identify the absolute arrival times. 
   </t>
  </section>

 </section>


 <section anchor="secIANA" title="IANA Considerations">
   <t>
   There are no IANA actions required by this document.
   </t>
 </section>

 <section anchor="secSec" title="Security Considerations">
   <t>
   This section will be described later. 
   </t>
 </section>

 <section anchor="Acknowledgements" title="Acknowledgements">
   <t>
   </t>
 </section>

 <section anchor="secCon" title="Contributor">
   <t>
   </t>
 </section>

</middle>

<back>
 <references title="Normative References">
   &RFC2119;
   &RFC8174;
   &RFC8655;
   &RFC8938;
   &I-D.liu-detnet-large-scale-requirements;
   &RFC9320;
 </references>

 <references title="Informative References">
   &RFC2212;
   &RFC3393;
   &I-D.yizhou-detnet-ipv6-options-for-cqf-variant;
   <reference anchor="IEEE802.1Qch"> 
     <front>
       <title>IEEE Standard for Local and metropolitan area networks -- 
       Bridges and Bridged Networks - Amendment 29: 
       Cyclic Queuing and Forwarding
       </title>
       <author>
         <organization>IEEE</organization> 
       </author>
      <date year="2017" month="June" day="28"/>
    </front>
    <seriesInfo name="IEEE" value="802.1Qch-2017"/>
    <seriesInfo name="DOI" value="10.1109/IEEESTD.2017.7961303"/>
  </reference>

   <reference anchor="IEEE802.1Qcr"> 
     <front>
       <title>IEEE Standard for Local and metropolitan area networks -- 
       Bridges and Bridged Networks - Amendment 34: 
       Asynchronous Traffic Shaping
       </title>
       <author>
         <organization>IEEE</organization> 
       </author>
      <date year="2020" month="November" day="6"/>
    </front>
    <seriesInfo name="IEEE" value="802.1Qcr-2020"/>
    <seriesInfo name="DOI" value="10.1109/IEEESTD.2020.9253013"/>
  </reference>

   <reference anchor="Y.3113"> 
     <front>
       <title>Framework for Latency Guarantee in Large Scale Networks 
              Including IMT-2020 Network
       </title>
       <author>
         <organization>International Telecommunication Union</organization> 
       </author>
      <date year="2021" month="February"/>
    </front>
    <seriesInfo name="ITU-T" value="Recommendation Y.3113"/>
  </reference>

   <reference anchor="ADN"> 
     <front>
       <title>Asynchronous Deterministic Network Based on the DiffServ 
              Architecture
       </title>
       <author initials="J" surname="Joung"> </author>
       <author initials="J" surname="Kwon"> </author>
       <author initials="J" surname="Ryoo"> </author>
       <author initials="T" surname="Cheung"> </author>
      <date year="2022"/>
    </front>
    <seriesInfo name="IEEE Access, " 
                value="vol. 10, pp. 15068-15083,
                       doi:10.1109/ACCESS.2022.3146398"/>
  </reference>

   <reference anchor="BN"> 
     <front>
       <title>Zero jitter for deterministic networks 
              without time-synchronization
       </title>
       <author initials="J" surname="Joung"> </author>
       <author initials="J" surname="Kwon"> </author>
      <date year="2021"/>
    </front>
    <seriesInfo name="IEEE Access," 
                value="vol. 9, pp. 49398-49414, 
                       doi:10.1109/ACCESS.2021.3068515"/>
  </reference>

   <reference anchor="ANDREWS"> 
     <front>
       <title>Instability of FIFO in the permanent sessions model 
              at arbitrarily small network loads
       </title>
       <author initials="M" surname="Andrews"> </author>
      <date year="2009" month="July"/>
    </front>
    <seriesInfo name="ACM Trans. Algorithms," 
                value="vol. 5, no. 3, pp. 1-29,
                       doi: 10.1145/1541885.1541894"/>
  </reference>

   <reference anchor="BOUILLARD"> 
     <front>
       <title>Deterministic network calculus: 
              From theory to practical implementation
       </title>
       <author initials="A" surname="Bouillard"> </author>
       <author initials="M" surname="Boyer"> </author>
       <author initials="E" surname="Le Corronc"> </author>
      <date year="2018"/>
    </front>
    <seriesInfo name="in Networks and Telecommunications. Hoboken, NJ, USA:" 
                value="Wiley,
                       doi: 10.1002/9781119440284"/>
  </reference>

  <reference anchor="FAIR"> 
     <front>
       <title>Framework for delay guarantee in multi-domain networks 
              based on interleaved regulators
       </title>
       <author initials="J" surname="Joung"> </author>
      <date year="2020" month="March"/>
    </front>
    <seriesInfo name="Electronics," 
                value="vol. 9, no. 3, p. 436,
                       doi:10.3390/electronics9030436"/>
  </reference>
  
  <reference anchor="KAUR"> 
     <front>
       <title>Core-stateless guaranteed rate scheduling algorithms
       </title>
       <author initials="J" surname="Kaur"> </author>
       <author initials="H.M" surname="Vin"> </author>
       <date year="2001" />
    </front>
    <seriesInfo name="in Proc. INFOCOM," 
                value="vol.3, pp. 1484-1492"/>
  </reference>
  
  <reference anchor="LBF"> 
     <front>
       <title>High-precision latency forwarding over packet-programmable 
              networks
       </title>
       <author initials="A" surname="Clemm"> </author>
       <author initials="T" surname="Eckert"> </author>
      <date year="2020" month="April"/>
    </front>
    <seriesInfo name="NOMS 2020 - " 
                value="IEEE/IFIP Network Operations and Management Symposium"/>
  </reference>

  <reference anchor="LEBOUDEC"> 
     <front>
       <title>A theory of traffic regulators for deterministic networks 
              with application to interleaved regulators
       </title>
       <author initials="J" surname="Le Boudec"> </author>
      <date year="2019" month="December"/>
    </front>
    <seriesInfo name="IEEE/ACM Trans. Networking," 
                value="vol. 26, no. 6, pp. 2721-2733, 
                       doi:10.1109/TNET.2018.2875191"/>
  </reference>

  <reference anchor="THOMAS"> 
     <front>
       <title>On cyclic dependencies and regulators in time-sensitive networks
       </title>
       <author initials="L" surname="Thomas"> </author>
       <author initials="J" surname="Le Boudec"> </author>
       <author initials="A" surname="Mifdaoui"> </author>
      <date year="2019" month="December"/>
    </front>
    <seriesInfo name="in Proc. IEEE Real-Time Syst. Symp. (RTSS)," 
                value="York, U.K., pp. 299-311"/>
  </reference>

  <reference anchor="PAREKH"> 
     <front>
       <title>A generalized processor sharing approach to flow control 
              in integrated services networks: the single-node case
       </title>
       <author initials="A" surname="Parekh"> </author>
       <author initials="R" surname="Gallager"> </author>
      <date year="1993" month="June"/>
    </front>
    <seriesInfo name="IEEE/ACM Trans. Networking," 
                value="vol. 1, no. 3, pp. 344-357"/>
  </reference>
<!--
  <reference anchor="STILIADIS-RPS"> 
     <front>
       <title>Rate-proportional servers: A design methodology for 
              fair queueing algorithms
       </title>
       <author initials="D" surname="Stiliadis"> </author>
       <author initials="A" surname="Anujan"> </author>
      <date year="1998"/>
    </front>
    <seriesInfo name="IEEE/ACM Trans. Networking," 
                value="vol. 6, no. 2, pp. 164-174"/>
  </reference>
 -->
    <reference anchor="STILIADIS-LRS"> 
     <front>
       <title>Latency-rate servers: A general model for analysis of traffic scheduling algorithms
       </title>
       <author initials="D" surname="Stiliadis"> </author>
        <author initials="A" surname="Varma"> </author>
      <date year="1998"/>
    </front>
    <seriesInfo name="IEEE/ACM Trans. Networking," 
                value="vol. 6, no. 5, pp. 611-624"/>
  </reference>

  <reference anchor="STOICA"> 
     <front>
       <title>Providing guaranteed services without per flow management
       </title>
       <author initials="I" surname="Stoica"> </author>
       <author initials="H" surname="Zhang"> </author>
      <date year="1999"/>
    </front>
    <seriesInfo name="ACM SIGCOMM Computer Communication Review," 
                value="vol. 29, no. 4, pp. 81-94"/>
  </reference>

  <reference anchor="ZHANG"> 
     <front>
       <title>Virtual clock: A new traffic control algorithm for 
              packet switching networks
       </title>
       <author initials="L" surname="Zhang"> </author>
      <date year="1990" />
    </front>
    <seriesInfo name="in Proc. ACM symposium on Communications architectures 
                      &amp; protocols," 
                value="pp. 19-29"/>
  </reference>

 </references>
</back>

</rfc>

