
From rs@netapp.com  Sun Aug  1 14:34:15 2010
Return-Path: <rs@netapp.com>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 0A4593A69E2 for <conex@core3.amsl.com>; Sun,  1 Aug 2010 14:34:15 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -4.889
X-Spam-Level: 
X-Spam-Status: No, score=-4.889 tagged_above=-999 required=5 tests=[AWL=-1.490, BAYES_50=0.001, J_CHICKENPOX_34=0.6, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id fqXmMXjAk0Ql for <conex@core3.amsl.com>; Sun,  1 Aug 2010 14:34:12 -0700 (PDT)
Received: from mx3.netapp.com (mx3.netapp.com [217.70.210.9]) by core3.amsl.com (Postfix) with ESMTP id 65A083A6805 for <conex@ietf.org>; Sun,  1 Aug 2010 14:34:11 -0700 (PDT)
X-IronPort-AV: E=Sophos;i="4.55,299,1278313200"; d="scan'208";a="183230925"
Received: from smtp3.europe.netapp.com ([10.64.2.67]) by mx3-out.netapp.com with ESMTP; 01 Aug 2010 14:34:36 -0700
Received: from ldcrsexc1-prd.hq.netapp.com (emeaexchrs.hq.netapp.com [10.65.251.109]) by smtp3.europe.netapp.com (8.13.1/8.13.1/NTAP-1.6) with ESMTP id o71LXoJH016115; Sun, 1 Aug 2010 14:34:05 -0700 (PDT)
Received: from LDCMVEXC1-PRD.hq.netapp.com ([10.65.251.108]) by ldcrsexc1-prd.hq.netapp.com with Microsoft SMTPSVC(6.0.3790.3959);  Sun, 1 Aug 2010 22:33:50 +0100
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
Date: Sun, 1 Aug 2010 22:33:49 +0100
Message-ID: <5FDC413D5FA246468C200652D63E627A09C489DB@LDCMVEXC1-PRD.hq.netapp.com>
In-Reply-To: <793F49BA1FC821409F99F10862A0E4DB07B63225@FHDP1LUMXCV14.us.one.verizon.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [conex] Off-Topic: Smoothing Variations in BitRate
Thread-Index: Acsqwqk1zJZiVEflSJysFO0EGBGdNQCl62swAQekrVA=
References: <201007231949.o6NJn38t014418@bagheera.jungle.bt.co.uk><AANLkTinr6Y-mKrMNJPUwHuQMYc4C5dLUCdqbztvwmxA=@mail.gmail.com><201007232235.o6NMZZ1e015875@bagheera.jungle.bt.co.uk><20100723235447.GE69747@verdi> <793F49BA1FC821409F99F10862A0E4DB07B63225@FHDP1LUMXCV14.us.one.verizon.com>
From: "Scheffenegger, Richard" <rs@netapp.com>
To: "John Leslie" <john@jlc.net>, "Bob Briscoe" <rbriscoe@jungle.bt.co.uk>
X-OriginalArrivalTime: 01 Aug 2010 21:33:50.0468 (UTC) FILETIME=[3B5E5C40:01CB31C1]
Cc: conex@ietf.org
Subject: [conex]  Off-Topic: Smoothing Variations in BitRate
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sun, 01 Aug 2010 21:34:15 -0000

A few add-ons to that thread that I'd like some expert statements on:

From Bob Briscoe:

> As long as the queue gives you more drop or marking for a longer
> *queue*, congestion indications will be convex (ie hockeystick)
> with *load*. Any queue will do that, with or without active queue
> management (AQM).
>
> Nonetheless, an AQM does have to be designed carefully to maintain
> stability, responsiveness and randomness (ie no bias against
> certain packets). But its design can hardly affect the equilibrium
> at all.

Will that assessment hold true for a simple instantaneous
queue-depth-related marking scheme?

p ^
1-|       +-------------
  |       |
  |       |
  +-------+------------->
           Queue Depth

(Marking on instantaneous queue depth would take some of the
sluggishness out of AQM ECN marking...)


Note that short-RTT flows might then be running at quite high CE marking
rates, while long-RTT flows see lower ones - assuming a very bursty
traffic regime...
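A rough sketch of the step scheme in the figure, as I read it - the threshold value and the function name are my own illustrative choices, not from any spec:

```python
def step_mark_probability(queue_depth: int, threshold: int = 8) -> float:
    """CE-mark probability p under the pure step policy sketched above:
    0 below the threshold, 1 at or above it."""
    return 1.0 if queue_depth >= threshold else 0.0
```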



From John Leslie:

>  Fundamentally, to get smoother variation, the reduction signaled by
>  an individual ECN-CE mark would need to be less than 50%. This, IMHO,
>  would indeed be reasonable if we agreed somehow to create more of them.
>  (Unfortunately, setting more CONEX-CE marks isn't likely to cause this,
>  despite my poorly-worded earlier posting.)

> > With a 0.1% equilibrium point the video streaming service will likely
> > have to resort to TCP-like behavior (rate halving) in its response
> > to the ECN marks.

>  Unfortunately, the basic ECN expectation in RFC 3168 is that an
>  ECN-CE mark will cause rate halving.

To be pedantic here, from RFC 3168:

   The sending
   TCP SHOULD NOT increase the congestion window in response to the
   receipt of an ECN-Echo ACK packet.

   TCP should not react to congestion indications more than once every
   window of data (or more loosely, more than once every round-trip
   time). That is, the TCP sender's congestion window should be reduced
   only once in response to a series of dropped and/or CE packets from a
   single window of data.  In addition, the TCP source should not
   decrease the slow-start threshold, ssthresh, if it has been decreased
   within the last round trip time.  However, if any retransmitted
   packets are dropped, then this is interpreted by the source TCP as a
   new instance of congestion.

Thus, independent of the number of CE marks received per cwnd/RTT, only a
single response per cwnd/RTT is allowed... However, RFC 3168 leaves it
open whether a more granular reaction is allowed, right?

Thus, a LEDBAT-like CC could use a scheme such as

cwnd = max(cwnd * 1/2 * (1-f), mss), with

f ... fraction of reflected CE marks received vs. total segments sent.


Similarly, a sender with higher demands on speediness could use

cwnd = max(cwnd * (1/2 + f/2), mss)
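As a sketch of the two rules above (my own function names; f is the marked fraction as defined earlier, and the rules are applied once per RTT):

```python
def ledbat_like_response(cwnd: int, f: float, mss: int = 1) -> int:
    # First rule above: cwnd = max(cwnd * 1/2 * (1 - f), mss).
    # The reduction gets deeper as the marked fraction f grows.
    return max(int(cwnd * 0.5 * (1.0 - f)), mss)

def speedier_response(cwnd: int, f: float, mss: int = 1) -> int:
    # Second rule above: cwnd = max(cwnd * (1/2 + f/2), mss).
    # The reduction gets shallower as f grows.
    return max(int(cwnd * (0.5 + f / 2.0)), mss)
```

For example, with cwnd = 100 and half the window marked (f = 0.5), the first rule cuts to 25 segments while the second only cuts to 75.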

But it's true, defining the CC behaviour is beyond the scope of conex :)

Richard Scheffenegger

From fred@cisco.com  Sun Aug  1 23:12:00 2010
Return-Path: <fred@cisco.com>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id F39113A686D for <conex@core3.amsl.com>; Sun,  1 Aug 2010 23:11:57 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -109.991
X-Spam-Level: 
X-Spam-Status: No, score=-109.991 tagged_above=-999 required=5 tests=[AWL=0.008, BAYES_00=-2.599, J_CHICKENPOX_34=0.6, RCVD_IN_DNSWL_HI=-8, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id oVhwi5Qic4PQ for <conex@core3.amsl.com>; Sun,  1 Aug 2010 23:11:52 -0700 (PDT)
Received: from rtp-iport-1.cisco.com (rtp-iport-1.cisco.com [64.102.122.148]) by core3.amsl.com (Postfix) with ESMTP id 85B813A67AF for <conex@ietf.org>; Sun,  1 Aug 2010 23:11:52 -0700 (PDT)
Authentication-Results: rtp-iport-1.cisco.com; dkim=neutral (message not signed) header.i=none
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AvsEABb+VUxAZnwN/2dsb2JhbACgEHGkaJouhTkEiH8
X-IronPort-AV: E=Sophos;i="4.55,301,1278288000"; d="scan'208";a="142115804"
Received: from rtp-core-2.cisco.com ([64.102.124.13]) by rtp-iport-1.cisco.com with ESMTP; 02 Aug 2010 06:12:18 +0000
Received: from dreamel-con.cisco.com (rtp-vpn2-805.cisco.com [10.82.243.37]) by rtp-core-2.cisco.com (8.13.8/8.14.3) with ESMTP id o726CA1k007912; Mon, 2 Aug 2010 06:12:12 GMT
Received: from [127.0.0.1] by dreamel-con.cisco.com (PGP Universal service); Mon, 02 Aug 2010 08:12:18 +0200
X-PGP-Universal: processed; by dreamel-con.cisco.com on Mon, 02 Aug 2010 08:12:18 +0200
Mime-Version: 1.0 (Apple Message framework v1081)
From: Fred Baker <fred@cisco.com>
In-Reply-To: <5FDC413D5FA246468C200652D63E627A09C489DB@LDCMVEXC1-PRD.hq.netapp.com>
Date: Mon, 2 Aug 2010 08:12:04 +0200
Message-Id: <CC91BAF7-DCF0-4205-AB37-FD0FF010483D@cisco.com>
References: <201007231949.o6NJn38t014418@bagheera.jungle.bt.co.uk><AANLkTinr6Y-mKrMNJPUwHuQMYc4C5dLUCdqbztvwmxA=@mail.gmail.com><201007232235.o6NMZZ1e015875@bagheera.jungle.bt.co.uk><20100723235447.GE69747@verdi> <793F49BA1FC821409F99F10862A0E4DB07B63225@FHDP1LUMXCV14.us.one.verizon.com> <5FDC413D5FA246468C200652D63E627A09C489DB@LDCMVEXC1-PRD.hq.netapp.com>
To: "Scheffenegger, Richard" <rs@netapp.com>
X-Mailer: Apple Mail (2.1081)
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: quoted-printable
Cc: conex@ietf.org
Subject: Re: [conex] Off-Topic: Smoothing Variations in BitRate
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 02 Aug 2010 06:12:00 -0000

It really kind of depends on how the whole thing is designed. AQM is
designed to distribute marks or drops across all sessions, with each
getting an occasional mark/drop but no consistent rate, so that as
mark/drop becomes more consistent the endpoint can infer greater
insistence on the part of the network. Instantaneous Marking is the ECN
counterpart to Tail Drop, and (especially if the IW=10 proposal is
adopted) will likely tend to hit new sessions a lot harder than
established sessions. As a result, the end system behavior has to be
adapted to the expected network behavior. If the end system is sending
with IW=10 (sending an initial burst of up to ten packets) and can
reasonably expect one or two to be marked, it should be pretty sensitive
to marking. If it is sending ten packets and can reasonably expect all
ten to be marked, it should probably be less aggressive in its response
to a mark.

I'm rather in favor of the AQM model.

On Aug 1, 2010, at 11:33 PM, Scheffenegger, Richard wrote:
> A few add-ons to that thread that I'd like some expert statements on:
>
>> From Bob Briscoe:
>
>> As long as the queue gives you more drop or marking for a longer
>> *queue*, congestion indications will be convex (ie hockeystick)
>> with *load*. Any queue will do that, with or without active queue
>> management (AQM).
>>
>> Nonetheless, an AQM does have to be designed carefully to maintain
>> stability, responsiveness and randomness (ie no bias against
>> certain packets). But its design can hardly affect the equilibrium
>> at all.
>
> Will that assessment hold true for a simple instantaneous
> queue-depth-related marking scheme?
>
> p ^
> 1-|       +-------------
>  |       |
>  |       |
>  +-------+------------->
>           Queue Depth
>
> (Marking on instantaneous queue depth would take some of the
> sluggishness out of AQM ECN marking...)
>
>
> Note that short-RTT flows might then be running at quite high CE marking
> rates, while long-RTT flows see lower ones - assuming a very bursty
> traffic regime...
>
>
>
> From John Leslie:
>
>> Fundamentally, to get smoother variation, the reduction signaled by
>> an individual ECN-CE mark would need to be less than 50%. This, IMHO,
>> would indeed be reasonable if we agreed somehow to create more of them.
>> (Unfortunately, setting more CONEX-CE marks isn't likely to cause this,
>> despite my poorly-worded earlier posting.)
>
>>> With a 0.1% equilibrium point the video streaming service will likely
>>> have to resort to TCP-like behavior (rate halving) in its response
>>> to the ECN marks.
>
>> Unfortunately, the basic ECN expectation in RFC 3168 is that an
>> ECN-CE mark will cause rate halving.
>
> To be pedantic here, from RFC 3168:
>
>   The sending
>   TCP SHOULD NOT increase the congestion window in response to the
>   receipt of an ECN-Echo ACK packet.
>
>   TCP should not react to congestion indications more than once every
>   window of data (or more loosely, more than once every round-trip
>   time). That is, the TCP sender's congestion window should be reduced
>   only once in response to a series of dropped and/or CE packets from a
>   single window of data.  In addition, the TCP source should not
>   decrease the slow-start threshold, ssthresh, if it has been decreased
>   within the last round trip time.  However, if any retransmitted
>   packets are dropped, then this is interpreted by the source TCP as a
>   new instance of congestion.
>
> Thus, independent of the number of CE marks received per cwnd/RTT, only a
> single response per cwnd/RTT is allowed... However, RFC 3168 leaves it
> open whether a more granular reaction is allowed, right?
>
> Thus, a LEDBAT-like CC could use a scheme such as
>
> cwnd = max(cwnd * 1/2 * (1-f), mss), with
>
> f ... fraction of reflected CE marks received vs. total segments sent.
>
>
> Similarly, a sender with higher demands on speediness could use
>
> cwnd = max(cwnd * (1/2 + f/2), mss)
>
> But it's true, defining the CC behaviour is beyond the scope of conex :)
>
> Richard Scheffenegger
> _______________________________________________
> conex mailing list
> conex@ietf.org
> https://www.ietf.org/mailman/listinfo/conex

http://www.ipinc.net/IPv4.GIF


From rbriscoe@jungle.bt.co.uk  Mon Aug  2 12:26:30 2010
Return-Path: <rbriscoe@jungle.bt.co.uk>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id BF53B3A694C for <conex@core3.amsl.com>; Mon,  2 Aug 2010 12:26:30 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: 0.122
X-Spam-Level: 
X-Spam-Status: No, score=0.122 tagged_above=-999 required=5 tests=[AWL=-0.961,  BAYES_50=0.001, DNS_FROM_RFC_BOGUSMX=1.482, J_CHICKENPOX_34=0.6, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id jfRx0wYSb32p for <conex@core3.amsl.com>; Mon,  2 Aug 2010 12:26:23 -0700 (PDT)
Received: from smtp4.smtp.bt.com (smtp4.smtp.bt.com [217.32.164.151]) by core3.amsl.com (Postfix) with ESMTP id 82A6A3A6972 for <conex@ietf.org>; Mon,  2 Aug 2010 12:26:22 -0700 (PDT)
Received: from i2kc08-ukbr.domain1.systemhost.net ([193.113.197.71]) by smtp4.smtp.bt.com with Microsoft SMTPSVC(6.0.3790.3959);  Mon, 2 Aug 2010 20:26:48 +0100
Received: from cbibipnt08.iuser.iroot.adidom.com ([147.149.100.81]) by i2kc08-ukbr.domain1.systemhost.net with Microsoft SMTPSVC(6.0.3790.4675); Mon, 2 Aug 2010 20:26:48 +0100
Received: From bagheera.jungle.bt.co.uk ([132.146.168.158]) by cbibipnt08.iuser.iroot.adidom.com (WebShield SMTP v4.5 MR1a P0803.399); id 1280777207464; Mon, 2 Aug 2010 20:26:47 +0100
Received: from MUT.jungle.bt.co.uk ([10.215.130.87]) by bagheera.jungle.bt.co.uk (8.13.5/8.12.8) with ESMTP id o72JQhOe004211; Mon, 2 Aug 2010 20:26:43 +0100
Message-Id: <201008021926.o72JQhOe004211@bagheera.jungle.bt.co.uk>
X-Mailer: QUALCOMM Windows Eudora Version 7.1.0.9
Date: Mon, 02 Aug 2010 20:26:48 +0100
To: Fred Baker <fred@cisco.com>, "Scheffenegger, Richard" <rs@netapp.com>
From: Bob Briscoe <rbriscoe@jungle.bt.co.uk>
In-Reply-To: <CC91BAF7-DCF0-4205-AB37-FD0FF010483D@cisco.com>
References: <201007231949.o6NJn38t014418@bagheera.jungle.bt.co.uk> <AANLkTinr6Y-mKrMNJPUwHuQMYc4C5dLUCdqbztvwmxA=@mail.gmail.com> <201007232235.o6NMZZ1e015875@bagheera.jungle.bt.co.uk> <20100723235447.GE69747@verdi> <793F49BA1FC821409F99F10862A0E4DB07B63225@FHDP1LUMXCV14.us.one.verizon.com> <5FDC413D5FA246468C200652D63E627A09C489DB@LDCMVEXC1-PRD.hq.netapp.com> <CC91BAF7-DCF0-4205-AB37-FD0FF010483D@cisco.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"; format=flowed
X-Scanned-By: MIMEDefang 2.56 on 132.146.168.158
X-OriginalArrivalTime: 02 Aug 2010 19:26:48.0732 (UTC) FILETIME=[A6DF7DC0:01CB3278]
Cc: conex@ietf.org
Subject: Re: [conex] Off-Topic: Smoothing Variations in BitRate
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 02 Aug 2010 19:26:31 -0000

Fred & Richard,

inline...
[Remembering, this is off-topic for ConEx]

At 07:12 02/08/2010, Fred Baker wrote:
>It really kind of depends on how the whole thing is designed. AQM is 
>designed to distribute marks or drops across all sessions, with each 
>getting an occasional mark/drop but no consistent rate, so that as 
>mark/drop becomes more consistent the endpoint can infer greater 
>insistence on the part of the network. Instantaneous Marking is the 
>ECN counterpart to Tail Drop, and (especially if the IW=10 proposal 
>is adopted) will likely tend to hit new sessions a lot harder than 
>established sessions.

I believe the bias against new flows and towards established flows 
comes from TCP, not anything the AQM is doing. It's because TCP treats 
any number of marks within an RTT as one event. Any well-randomised 
AQM will hit an established flow with more marks if the flow has 
established more bit-rate than a new flow. But if both use the TCP 
assumption (many marks within 1 RTT = 1 mark), then the established 
flow will gain because it ignores more marks.
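A toy illustration of that asymmetry, under my own simplifying assumptions (independent per-packet marking at probability p, which is what a well-randomised AQM approximates):

```python
def expected_marks(pkts_per_rtt: int, p: float) -> float:
    """Marks a flow collects per RTT if each packet is marked i.i.d. at p."""
    return pkts_per_rtt * p

def tcp_congestion_events(marks_in_rtt: int) -> int:
    """RFC 3168 TCP reacts at most once per window of data, however many
    of its packets were marked in that RTT."""
    return 1 if marks_in_rtt > 0 else 0
```

An established flow at 100 pkts/RTT collects ten times the marks of a new flow at 10 pkts/RTT, yet each backs off at most once per RTT - so the bigger flow ignores proportionally more marks.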

FWIW, there is an important difference between a step ECN threshold 
and tail-drop; packets that arrive when the queue is greater than the 
threshold still get forwarded (they are just marked). Once the queue 
has been driven to the threshold, an instantaneous AQM will surely 
mark any arriving packets in proportion to their relative 
instantaneous bit-rates. I don't think the AQM per se will be biased 
by whether packets are from established or new flows - only TCP 
creates that bias I think.

BTW, for similar reasons, an ECN step threshold doesn't lock out 
larger packets like tail drop does. With tail drop, the queue has to 
have drained more to fit a larger packet than a smaller one. But with 
an ECN step threshold, any size packet can still fit into the buffer 
above the threshold without being dropped.  So, as the queue drains, 
random marking will be proportionate to the arriving bit-rates of the 
different flows, irrespective of their packet sizes.
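A minimal sketch of that contrast (buffer and threshold sizes are illustrative assumptions, and the step marker is idealised here with no hard buffer limit):

```python
def tail_drop_enqueue(queue_bytes: int, pkt_bytes: int, buffer_bytes: int):
    """(accepted, marked) under tail drop: a packet that does not fit in
    the remaining buffer is dropped, so larger packets stay locked out
    until more of the queue has drained."""
    if queue_bytes + pkt_bytes > buffer_bytes:
        return (False, False)
    return (True, False)

def step_mark_enqueue(queue_bytes: int, pkt_bytes: int, threshold_bytes: int):
    """(accepted, marked) under a step ECN threshold: any size packet is
    still forwarded; it is merely CE-marked while the queue exceeds the
    threshold."""
    return (True, queue_bytes > threshold_bytes)
```

With 900 bytes queued in a 1000-byte buffer, tail drop refuses a 200-byte packet but accepts a 100-byte one; the step marker forwards both sizes alike, marking them while the queue sits above its threshold.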

>As a result, the end system behavior has to be adapted to the 
>expected network behavior. If the end system is sending with IW=10 
>(sending an initial burst of up to ten packets) and can reasonably 
>expect one or two to be marked, it should be pretty sensitive to 
>marking. If it is sending ten packets and can reasonably expect all 
>ten to be marked, it should probably be less aggressive in its 
>response to a mark.
>
>I'm rather in favor of the AQM model.

Yup. But I have an open mind as to exactly what 'the AQM model' 
means, as there's a large part of the AQM design space still to 
explore. However, you're right that we do have to worry about 
interoperability of responses.

But I believe interop is best served by thinking of it the other way 
round. The best AQM will be the one that most accurately reflects 
back on a bursty flow the harm from the increase in queue it has 
caused. And interop is best served if transports assume that all AQMs 
will try to reflect back harm as quickly as possible, and any that are 
sluggish are just the deficient exceptions that a bursty transport 
has been 'lucky' enough to encounter.

So queue averaging can be thought of as a weakness; because one flow 
can bump up the queue and then leave the system before the 
averaging has caught up - causing someone else to get hit. An AQM can 
never be as fast as it ought to be ideally, but I'm coming round to 
the view that adding delay deliberately is not good (I used to buy 
Sally Floyd's argument, but now I'm not so sure).

Sally's argument: The idea of queue averaging is to filter out spikes 
that will go away of their own accord. Ie. filtering improves 
performance. DCTCP implements this filtering on the end-system 
instead of in the network. That is better in many respects except 
one: the averaging can be done more quickly across an aggregate of 
flows, rather than over one flow.

An interesting thought experiment: If the AQM provided both an 
instantaneous signal and an averaged one, which would an end-system 
react to? I think the reason we've had trouble on this point is that 
you will choose the smoothed one if all you care about is your own 
performance. But if you are also made to care about causing 
unnecessary congestion to others (as in some ConEx use-cases), the 
answer is not so clear-cut.

A response to Richard inline too...


>On Aug 1, 2010, at 11:33 PM, Scheffenegger, Richard wrote:
> > A few add-ons to that thread, I'd like some expert statements on:
> >
> >> From Bob Briscoe:
> >
> >> As long as the queue gives you more drop or marking for a longer
> >> *queue*, congestion indications will be convex (ie hockeystick)
> >> with *load*. Any queue will do that, with or without active queue
> >> management (AQM).
> >>
> >> Nonetheless, an AQM does have to be designed carefully to maintain
> >> stability, responsiveness and randomness (ie no bias against
> >> certain packets). But its design can hardly affect the equilibrium
> >> at all.
> >
> > Will that assessment hold true for a simple instantaneous
> > queue-depth-related marking scheme?

Yes.

The point is that if (marks vs queue length) is a step function, 
(marks vs load) will be a hockey-stick function. This is true only 
for traffic with some variance in it; so it's not true if every flow 
is pure CBR (in which case marks vs load will be a sharply bent 
right-angled hockey-stick!). However, as long as the traffic has at 
least some variance in its inter-arrival times, the little peaks in 
load hit the threshold in the queue earlier than the rest of the 
traffic, which starts the marking smoothly growing up the hockey-stick.
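A toy slotted-queue simulation of this point; all parameters (capacity, threshold, the binomial burstiness model) are chosen purely for illustration:

```python
import random

def mark_fraction(load, capacity=10, threshold=25, slots=20000, seed=1):
    """Fraction of packets CE-marked by a step threshold, at offered load
    `load` relative to capacity, with bursty (binomial) arrivals."""
    rng = random.Random(seed)
    queue = marked = sent = 0
    for _ in range(slots):
        # Bursty arrivals: binomial with mean = capacity * load per slot.
        arrivals = sum(rng.random() < load / 2 for _ in range(2 * capacity))
        sent += arrivals
        queue += arrivals
        if queue > threshold:              # step marker paints the whole burst
            marked += arrivals
        queue = max(queue - capacity, 0)   # the link serves `capacity`/slot
    return marked / max(sent, 1)
```

With these illustrative parameters, marking is negligible at moderate load and grows steeply as load approaches 1 - the little load peaks hit the threshold first, giving the hockey-stick rather than a right-angled step.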

If we don't want to rely on the traffic's variance being the only 
source of smoothness in the response, then we want to put a ramp in 
the AQM's response function (marking vs queue length). But Damon 
Wischik tells me that setting a step threshold at a small queue 
length in the buffer of a high-speed line ensures that it will pick 
out sufficient randomness in the traffic to give a smooth response. 
He was going to write a paper about this, but I don't think he has 
yet - so I am not sure of the exact conditions/assumptions.

If this bleeding edge research turns out to be true, it would mean 
that we might be able to get away with a step threshold in high-speed 
equipment (where simplicity of implementation is more important), 
while still needing a ramp threshold in slower queues (where there is 
probably time to run the extra cycles the ramp algorithm will need).

> >
> > p ^
> > 1-|       +-------------
> >  |       |
> >  |       |
> >  +-------+------------->
> >           Queue Depth
> >
> > (Marking on instantaneous queue depth would take some of the
> > sluggishness out of AQM ECN marking...)
> >
> >
> > Note that short-RTT flows might then be running at quite high CE marking
> > rates, while long-RTT flows see lower ones - assuming a very bursty
> > traffic regime...
> >
> >
> >
> > From: John Leslie:
> >
> >> Fundamentally, to get smoother variation, the reduction signaled by
> >> an individual ECN-CE mark would need to be less than 50%. This, IMHO,
> >> would indeed be reasonable if we agreed somehow to create more of them.
> >> (Unfortunately, setting more CONEX-CE marks isn't likely to cause this,
> >> despite my poorly-worded earlier posting.)
> >
> >>> With a 0.1% equilibrium point the video streaming service will likely
> >>> have to resort to TCP-like behavior (rate halving) in its response
> >>> to the ECN marks.
> >
> >> Unfortunately, the basic ECN expectation in RFC 3168 is that an
> >> ECN-CE mark will cause rate halving.
> >
> > To be pedantic here, from RFC3168:
> >
> >   The sending
> >   TCP SHOULD NOT increase the congestion window in response to the
> >   receipt of an ECN-Echo ACK packet.
> >
> >   TCP should not react to congestion indications more than once every
> >   window of data (or more loosely, more than once every round-trip
> >   time). That is, the TCP sender's congestion window should be reduced
> >   only once in response to a series of dropped and/or CE packets from a
> >   single window of data.  In addition, the TCP source should not
> >   decrease the slow-start threshold, ssthresh, if it has been decreased
> >   within the last round trip time.  However, if any retransmitted
> >   packets are dropped, then this is interpreted by the source TCP as a
> >   new instance of congestion.
> >
> > Thus, independent of the number of CE marks received per cwnd/RTT, only a
> > single response per cwnd/RTT is allowed... However, RFC 3168 leaves it
> > open whether a more granular reaction is allowed, right?
> >
> > Thus, a LEDBAT-like CC could use a scheme such as
> >
> > cwnd = max(cwnd * 1/2 * (1-f), mss), with
> >
> > f ... Fraction of reflected CE received vs. total segments sent.
> >
> >
> > Similarly, a sender with higher demands on speediness could use
> >
> > cwnd = max(cwnd * (1/2 + f/2), mss)
> >
> > But it's true, defining the CC behaviour is beyond the scope of conex :)

Indeed - other than to claim that ConEx should make it possible to 
encourage a wider range of congestion responses.


Bob

> >
> > Richard Scheffenegger
> > _______________________________________________
> > conex mailing list
> > conex@ietf.org
> > https://www.ietf.org/mailman/listinfo/conex
>
>http://www.ipinc.net/IPv4.GIF

________________________________________________________________
Bob Briscoe,                                BT Innovate & Design 


From anyrhine@cs.helsinki.fi  Mon Aug  2 16:45:01 2010
Return-Path: <anyrhine@cs.helsinki.fi>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 5CFF03A6C8D for <conex@core3.amsl.com>; Mon,  2 Aug 2010 16:45:01 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.999
X-Spam-Level: 
X-Spam-Status: No, score=-3.999 tagged_above=-999 required=5 tests=[BAYES_50=0.001, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id g4C5V4Pv6uHE for <conex@core3.amsl.com>; Mon,  2 Aug 2010 16:45:00 -0700 (PDT)
Received: from mail.cs.helsinki.fi (courier.cs.helsinki.fi [128.214.9.1]) by core3.amsl.com (Postfix) with ESMTP id 2B86B3A6C8C for <conex@ietf.org>; Mon,  2 Aug 2010 16:44:59 -0700 (PDT)
Received: from [192.168.22.129] (cs109108001124.pp.htv.fi [109.108.1.124]) (AUTH: PLAIN anyrhine, SSL: TLSv1/SSLv3,256bits,AES256-SHA) by mail.cs.helsinki.fi with esmtp; Tue, 03 Aug 2010 02:45:25 +0300 id 00093E5C.4C575895.00004E01
From: Aki Nyrhinen <anyrhine@cs.helsinki.fi>
To: Bob Briscoe <rbriscoe@jungle.bt.co.uk>
In-Reply-To: <201008021926.o72JQhOe004211@bagheera.jungle.bt.co.uk>
References: <201007231949.o6NJn38t014418@bagheera.jungle.bt.co.uk> <AANLkTinr6Y-mKrMNJPUwHuQMYc4C5dLUCdqbztvwmxA=@mail.gmail.com> <201007232235.o6NMZZ1e015875@bagheera.jungle.bt.co.uk> <20100723235447.GE69747@verdi> <793F49BA1FC821409F99F10862A0E4DB07B63225@FHDP1LUMXCV14.us.one.verizon.com> <5FDC413D5FA246468C200652D63E627A09C489DB@LDCMVEXC1-PRD.hq.netapp.com> <CC91BAF7-DCF0-4205-AB37-FD0FF010483D@cisco.com> <201008021926.o72JQhOe004211@bagheera.jungle.bt.co.uk>
Date: Tue, 03 Aug 2010 02:47:03 +0300
Message-Id: <1280792823.4092.98.camel@e42.nyrhi.net>
Mime-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 7bit
X-Mailer: Evolution 2.22.3.1 
Cc: conex@ietf.org
Subject: Re: [conex] Off-Topic: Smoothing Variations in BitRate
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 02 Aug 2010 23:45:01 -0000

Hi all,

Just wanted to have my piece of this off-topic discussion... Everything
Bob says is true, but there is another important difference between a
step-threshold marker and tail-drop, which is easier to miss but no
less substantial. I hope this hasn't been discussed over and over in the
past. The following is largely the result of a thought experiment, but I
was originally hit (in the head) by the difference when experimenting
with "different" marking and congestion control algorithms. Bob hints at
this very difference in his remark about the funny-looking hockey-stick
in the pure CBR case.

In general, a step threshold marker will mark far more packets than
tail-drop would drop. Simplifying, tail-drop drops packets in proportion
to load when congested. A step threshold marker will mark ALL packets
between the start and end of congestion, independent of the load and
capacity between those two instants.

This is not a property we want a bottleneck to have in an environment
where congestion control algorithms seek to cause load > 1 on the
bottleneck. Yes I am serious, tail-drop is immeasurably better. TCP is
probably the only thing I can imagine working reasonably well over the
step threshold marker.

A trivial example: we have a bottleneck device and a single bulk tcp
connection going through it. The tcp connection has reached a point (in
congestion avoidance for simplicity) where it fully utilizes the link
and some queue has built up at the bottleneck. Now here is what will
follow:

	1) a window passes by; tcp increases cwnd by one.
	2) the marking threshold is exceeded (by one).
	3) the bottleneck starts marking.
	4) after a window's worth of segments, tcp receives the first mark.
	5) tcp halves its transmission rate.
	6) the queue starts to drain and falls below the marking threshold.
	7) the bottleneck stops marking.

How much time is there between steps 3 and 7? Does this have anything to
do with the load at the bottleneck between those two steps? I'd say '1
RTT (tcp-view)' and 'no'. The load is (1+cwnd)/cwnd, but cwnd packets are
marked instead of a single one, which would be proportional to the load.
Due to self-clocking, a single tcp connection looks very much like CBR,
except for the once-per-RTT bump which periodically increases the
bottleneck queue length.
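The arithmetic of one such episode can be sketched as follows (purely illustrative, following the cycle above):

```python
def marks_and_excess(cwnd: int):
    """One episode of the cycle above: cwnd grew a single segment past
    full utilisation, yet the step marker paints the whole next window."""
    excess = 1                      # the single extra segment
    marked = cwnd                   # every segment in that RTT gets CE
    load = (cwnd + excess) / cwnd   # e.g. 1.01 for cwnd = 100
    return marked, excess, load
```

For cwnd = 100 the load is 1.01, but 100 segments get marked where a load-proportional signal (like tail-drop) would hit roughly one.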

Relentless TCP would respond to the step threshold marker by dropping its
sending rate to zero (which may or may not be surprising) and anything
more fine-grained than TCP would still suffer.

I don't think smarter marking schemes necessarily imply AQM, random
numbers or anything very complicated at all, but I really think we (me
included) should actively forget the idea of a step threshold being
similar to tail-drop: the only similarity they have is "if(qlen >
threshold)".

Just my fraction of a cent.

	Aki



From rbriscoe@jungle.bt.co.uk  Tue Aug  3 12:41:58 2010
Return-Path: <rbriscoe@jungle.bt.co.uk>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 7ECBB3A6AD9 for <conex@core3.amsl.com>; Tue,  3 Aug 2010 12:41:58 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -0.146
X-Spam-Level: 
X-Spam-Status: No, score=-0.146 tagged_above=-999 required=5 tests=[AWL=-0.629, BAYES_50=0.001, DNS_FROM_RFC_BOGUSMX=1.482, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 6G-Ui3TqU-oy for <conex@core3.amsl.com>; Tue,  3 Aug 2010 12:41:52 -0700 (PDT)
Received: from smtp3.smtp.bt.com (smtp3.smtp.bt.com [217.32.164.138]) by core3.amsl.com (Postfix) with ESMTP id 0908B3A68B1 for <conex@ietf.org>; Tue,  3 Aug 2010 12:41:51 -0700 (PDT)
Received: from i2kc08-ukbr.domain1.systemhost.net ([193.113.197.71]) by smtp3.smtp.bt.com with Microsoft SMTPSVC(6.0.3790.4675);  Tue, 3 Aug 2010 20:42:19 +0100
Received: from cbibipnt08.iuser.iroot.adidom.com ([147.149.100.81]) by i2kc08-ukbr.domain1.systemhost.net with Microsoft SMTPSVC(6.0.3790.4675); Tue, 3 Aug 2010 20:42:19 +0100
Received: From bagheera.jungle.bt.co.uk ([132.146.168.158]) by cbibipnt08.iuser.iroot.adidom.com (WebShield SMTP v4.5 MR1a P0803.399); id 1280864538467; Tue, 3 Aug 2010 20:42:18 +0100
Received: from MUT.jungle.bt.co.uk ([10.215.130.87]) by bagheera.jungle.bt.co.uk (8.13.5/8.12.8) with ESMTP id o73JgGSW018260; Tue, 3 Aug 2010 20:42:16 +0100
Message-Id: <201008031942.o73JgGSW018260@bagheera.jungle.bt.co.uk>
X-Mailer: QUALCOMM Windows Eudora Version 7.1.0.9
Date: Tue, 03 Aug 2010 20:42:22 +0100
To: Christopher Morrow <christopher.morrow@gmail.com>
From: Bob Briscoe <rbriscoe@jungle.bt.co.uk>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"; format=flowed
X-Scanned-By: MIMEDefang 2.56 on 132.146.168.158
X-OriginalArrivalTime: 03 Aug 2010 19:42:19.0311 (UTC) FILETIME=[FBF453F0:01CB3343]
Cc: conex@ietf.org
Subject: [conex] ConEx & DDoS
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Aug 2010 19:41:58 -0000

Chris,

During the ConEx w-g session last Tuesday in Maastricht you suggested 
we should not include DDoS mitigation as a use-case for ConEx. I was 
willing to agree as we don't need to court controversy.

However, the co-authors of draft-moncaster-conex-concepts-uses have 
asked me to float the idea that, altho we won't include DDoS as a 
use-case in its own right, we should mention it as an extreme case of 
two other use-cases. Let me explain...

The use-cases we plan to include are:
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
5. Use cases (Highlighting that this is neither an exhaustive list nor a
prescriptive list...)
  5.1  ConEx for better traffic Control
   a. Targeting the right traffic
   b. Encouraging (and eventually enforcing) better CC
  5.2 ConEx for better traffic monitoring
   a. For compliance with SLAs
   b. For assessing performance of your provider
   c. Monitoring congestion hotspots for targeted upgrades
   d. Monitoring congestion anomalies (Equipment problems or helping 
identify DDoS)
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\

We could include mentions of DDoS along the following lines (no need 
to word-smith - I'm just trying to outline the concepts).

5.1b Encouraging (and eventually enforcing) better CC
"  ConEx information can be used as a control metric for making
    traffic control decisions, such as deciding which traffic to
    prioritise or to identify and block sources of persistent and
    damaging congestion.

    Simple ingress policer mechanisms, such as those described in
    [Policing-freedom] and [re-ecn-motive], could control the
    overall volume of congestion entering a network from each user.
    Such a policer could lead to a number of beneficial outcomes:

    o Heavy users might be encouraged to shift their usage away from
      peak times in order to be able to transfer more data without
      triggering a response from the policer;
    o Users might be encouraged to use software that shifts its usage away
      from congestion peaks (shifting in time whether by hours or seconds
      [LEDBAT] or shifting to less congested routes [MPTCP]), again to
      transfer more data without triggering the policer;
    o Developers of operating systems might be encouraged to supply such
      software as the default;
    o If certain applications did not use a congestion responsive transport
      and caused high levels of congestion-bit-rate, the policer would
      eventually force the bit-rate to reduce in response to congestion.

    It is believed that a system of ConEx policers could be built that
    can verify the integrity of ConEx information and remove traffic
    that does not comply with the protocol [re-ecn-motive]. If such
    robustness against cheating is indeed possible, a ConEx policer would
    mitigate DDoS flooding attacks, at least to some extent, merely as a
    function of its ability to enforce a response to persistent and
    excessive congestion.

    It would be foolhardy to claim that it will be possible to make ConEx
    invulnerable to all cheats and attacks, even though it has been hard to
    attack it so far. Nonetheless, ConEx would still be useful even if a
    specific deployment of ConEx policers contained vulnerabilities. ConEx
    could still prevent congestion collapse due to careless omission of
    congestion control, or due to release of software containing an
    accidental congestion control bug.
"

5.2d: Monitoring congestion anomalies
"  One of the most useful things ConEx provides is the ability to
    monitor the amount of congestion entering a network. Thus ConEx
    would add congestion to the information used by existing anomaly
    detection systems, greatly improving their ability to discriminate
    between pathological and benign anomalies. Such congestion-carrying
    anomalies might be due to accidental misconfigurations in another
    network, or to deliberate malicious attacks.

    ConEx provides the additional benefit that it exposes congestion
    information as packets enter a network, not only at the point of
    congestion. Therefore it could be feasible for anomaly detection
    systems to use ConEx information to detect and shut down dangerous
    floods of congestion at the point where traffic enters a network.
"

Do you think circumspect wording about DDoS like the above would 
still trigger an allergic reaction from some readers? Would this sort 
of text allay your concerns? Or are you adamant that there should be 
no mention of DDoS at all?



Bob



________________________________________________________________
Bob Briscoe,                                BT Innovate & Design 


From christopher.morrow@gmail.com  Tue Aug  3 20:37:22 2010
Return-Path: <christopher.morrow@gmail.com>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 498FB3A6765 for <conex@core3.amsl.com>; Tue,  3 Aug 2010 20:37:22 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -1.299
X-Spam-Level: 
X-Spam-Status: No, score=-1.299 tagged_above=-999 required=5 tests=[AWL=-1.300, BAYES_50=0.001]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id sDy0FXT3CxOb for <conex@core3.amsl.com>; Tue,  3 Aug 2010 20:37:20 -0700 (PDT)
Received: from mail-iw0-f172.google.com (mail-iw0-f172.google.com [209.85.214.172]) by core3.amsl.com (Postfix) with ESMTP id E6CA03A6821 for <conex@ietf.org>; Tue,  3 Aug 2010 20:37:19 -0700 (PDT)
Received: by iwn3 with SMTP id 3so1813750iwn.31 for <conex@ietf.org>; Tue, 03 Aug 2010 20:37:49 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:sender:received :in-reply-to:references:date:x-google-sender-auth:message-id:subject :from:to:cc:content-type:content-transfer-encoding; bh=B6tg1Z3zz4AAuZQW9f0We13f/BXnSKjZPwjQ4BJMLMI=; b=vuZMzEQOj8PZckz57a729pEhP12alAq3R12MdlSm3v2oWM+4paaYclv12+lodc6wi9 R8lmH0ATqBuZn6S8lwTI24K8i2Ry0BNJqgq9UaBDYR/o+bYOd9tXTnZqhIa5a9qHt40D dcAigDLGNaMc3hynhBiyXqIOsvzFzCTmtQAQU=
DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type :content-transfer-encoding; b=QuRoW4EFkv3DHrXf2NYUIX3S3ngELSf7zlkoFMqLBT7juCDAsEtVvbOAivZvCkV2Aa Pmy1Tk/0rC7lrMP0Rtaifjzc0IluJY86BuCdMCzr0BkASiRxPaFotL0iry+JwwVFFpcM jSxcntC9p5umjO6BST8gJ/hfabYNZiF0r6nZA=
MIME-Version: 1.0
Received: by 10.231.143.9 with SMTP id s9mr9819355ibu.65.1280893067241; Tue,  03 Aug 2010 20:37:47 -0700 (PDT)
Sender: christopher.morrow@gmail.com
Received: by 10.231.156.203 with HTTP; Tue, 3 Aug 2010 20:37:47 -0700 (PDT)
In-Reply-To: <201008031942.o73JgGSW018260@bagheera.jungle.bt.co.uk>
References: <201008031942.o73JgGSW018260@bagheera.jungle.bt.co.uk>
Date: Tue, 3 Aug 2010 23:37:47 -0400
X-Google-Sender-Auth: 61ai8nwIdfQrURIwCfdpOQXC5k8
Message-ID: <AANLkTikZFvkOQjNLuasif+vAjeJSac1E-BqR6pptn=7p@mail.gmail.com>
From: Christopher Morrow <morrowc.lists@gmail.com>
To: Bob Briscoe <rbriscoe@jungle.bt.co.uk>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable
Cc: conex@ietf.org
Subject: Re: [conex] ConEx & DDoS
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 04 Aug 2010 03:37:22 -0000

(I think I sub'd to the list with this address...)

On Tue, Aug 3, 2010 at 3:42 PM, Bob Briscoe <rbriscoe@jungle.bt.co.uk> wrote:
> Chris,
>
> During the ConEx w-g session last Tuesday in Maastricht you suggested we
> should not include DDoS mitigation as a use-case for ConEx. I was willing to
> agree as we don't need to court controversy.

yup, no use ratholing if it's not central to the discussion.

> However, the co-authors of draft-moncaster-conex-concepts-uses have asked me
> to float the idea that, altho we won't include DDoS as a use-case in its own
> right, we should mention it as an extreme case of two other use-cases. Let
> me explain...
>
> The use-cases we plan to include are:
> /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
> 5. Use cases (Highlighting that this is neither an exhaustive list nor a
> prescriptive list...)
>  5.1  ConEx for better traffic Control
>  a. Targeting the right traffic
>  b. Encouraging (and eventually enforcing) better CC
>  5.2 ConEx for better traffic monitoring
>  a. For compliance with SLAs
>  b. For assessing performance of your provider
>  c. Monitoring congestion hotspots for targeted upgrades
>  d. Monitoring congestion anomalies (Equipment problems or helping identify
> DDoS)
> /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
>
> We could include mentions of DDoS along the following lines (no need to
> word-smith - I'm just trying to outline the concepts).
>
> 5.1b Encouraging (and eventually enforcing) better CC
> "  ConEx information can be used as a control metric for making
>   traffic control decisions, such as deciding which traffic to
>   prioritise or to identify and block sources of persistent and
>   damaging congestion.

this, I think, falls into the category of things that Lee from TWC is
interested in (and maybe his buddies at comcast/cox as well),
understanding what traffic can suffer more loss without causing more
end-user pain, and shifting that traffic to said profile.

>   Simple ingress policer mechanisms, such as those described in
>   [Policing-freedom] and [re-ecn-motive], could control the
>   overall volume of congestion entering a network from each user.
>   Such a policer could lead to a number of beneficial outcomes:

these seem to be of the flavor of things Comcast 'powerboost' does, at
the CPE actually (if I understand their brand of magic correctly).

>   o Heavy users might be encouraged to shift their usage away from
>     peak times in order to be able to transfer more data without
>     triggering a response from the policer;
>   o Users might be encouraged to use software that shifts its usage away
>     from congestion peaks (shifting in time whether by hours or seconds
>     [LEDBAT] or shifting to less congested routes [MPTCP]), again to
>     transfer more data without triggering the policer;
>   o Developers of operating systems might be encouraged to supply such
>     software as the default;
>   o If certain applications did not use a congestion responsive transport
>     and caused high levels of congestion-bit-rate, the policer would
>     eventually force the bit-rate to reduce in response to congestion.

Today this (the last bullet) is the equivalent of classifying a type
of traffic (port/protocol/src/dst as classification hints) into a
'high loss' bucket/queue and just dropping more of it, faster. If the
traffic is TCP you should get drops, backoff, and sawtooth behaviour
until you reach steady-state (maybe?) slower transfers.
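
The effect Chris describes (drop more of TCP, and it settles at a
slower rate) can be roughly quantified with the well-known Mathis
steady-state model, rate ~ MSS / (RTT * sqrt(2p/3)); a quick sketch
(the function name is mine):

```python
import math

def mathis_rate(mss_bytes, rtt_s, loss_prob):
    """Approximate steady-state TCP throughput, in bytes per second,
    from the Mathis et al. model: MSS / (RTT * sqrt(2p/3))."""
    return mss_bytes / (rtt_s * math.sqrt(2.0 * loss_prob / 3.0))
```

Quadrupling the loss probability roughly halves the achievable rate, so
a 'high loss' bucket does slow TCP down, if only gradually.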

>   It is believed that a system of ConEx policers could be built that
>   can verify the integrity of ConEx information and remove traffic
>   that does not comply with the protocol [re-ecn-motive]. If such
>   robustness against cheating is indeed possible, a ConEx policer would
>   mitigate DDoS flooding attacks, at least to some extent, merely as a
>   function of its ability to enforce a response to persistent and
>   excessive congestion.

So, there are surely cases of 'ddos' (or DoS) that include loud
speakers... There are also many instances of DDoS that are many
hundreds of thousands of (or millions) of very quiet voices that in
total cause the DoS/DDoS effect.

If you look at a system of marking of congestion information,
depending upon where that marking happens, and on the traffic in
question, there's no guarantee at all that the sources will be
squelched.

For example, imagine a DDoS of 1 RST pkt (could also do this with 1
icmp-error-type message) per second from 1 million hosts across the
whole of the network (a 'botnet' for instance). There will never be
any packet sent back to the originators, the rate will be low enough
that unless there is coincident traffic to the victim from these hosts
(in the same protocol/port profile probably) no signal will ever be
seen that leads to squelching of traffic. The victim still suffers
~1mpps of traffic, and actually conex/CC just makes the problem far
worse for the victim as all of his real-user traffic will be marked
(over time) and thus squelched out while the DDoS continues to flow
unabated.

>
>   It would be foolhardy to claim that it will be possible to make ConEx
>   invulnerable to all cheats and attacks, even though it has been hard to
>   attack it so far. Nonetheless, ConEx would still be useful even if a
>   specific deployment of ConEx policers contained vulnerabilities. ConEx
>   could still prevent congestion collapse due to careless omission of
>   congestion control, or due to release of software containing an
>   accidental congestion control bug.

I think that I nullified the above paragraph, actually... except for
the software bug case, though I'd argue that in the case of the UW NTP
server incident ConEx/CC wouldn't have helped there either, as the
broken software would still have been broken and the real users of the
system would have just been CC'd out of existence.

> "
>
> 5.2d: Monitoring congestion anomalies
> "  One of the most useful things ConEx provides is the ability to
>   monitor the amount of congestion entering a network. Thus ConEx
>   would add congestion to the information used by existing anomaly
>   detection systems, thus greatly improving their ability to discriminate
>   between pathological and benign anomalies. Such congestion-carrying
>   anomalies might be due to accidental misconfigurations in another
>   network, or deliberate malicious attacks.

I'm on board with letting folks know there is congestion; I'm not
convinced making decisions based on this is simple/helpful, yet.
Adding this information to packets doesn't strike me as hard, provided
I don't need new silicon/cpu-cycles/state to do it. I can imagine that
most backbone providers won't care and probably won't implement the
marking, but... I could be wrong.

Marking something purely 'congested path' or not, though, isn't very
helpful: there are (almost always) many hops along a path and many
paths a packet/flow may take. At a single point in time one part of a
path may be congested, but there isn't any guarantee that part will
still be affected even after the 1 RTT required to do something.
(Last-mile problems aside, nothing fixes a 56k modem except... more
bandwidth.)

Being able to use information about congestion along a path to more
efficiently use the available bandwidth is a fine plan; fewer
re-transmits is a good plan.

>   ConEx provides the additional benefit that it exposes congestion
>   information as packets enter a network, not only at the point of
>   congestion. Therefore it could be feasible for anomaly detection

ok, this is a point I need clarification on. Where is the congestion
it exposes?
  'there is congestion'
     or
  'at hop 12 there was congestion, you are at hop 22, fyi' (honestly
it'd be better to tell me 'AS 3 is congested', I think)

If the information as traffic enters my network is 'somewhere there is
congestion!' I'm not sure I can do much aside from buffer (not going
to happen for very long buffers are expensive) or WRED/drop packets.
If the information is that at a router-hop 10 hops back there was
congestion I may be able to either WRED traffic to that destination or
maybe shuttle packets on slightly longer paths if they are available
(though that seems overly complex as a solution, to me).

>   systems to use ConEx information to detect and shut down dangerous
>   floods of congestion at the point where traffic enters a network.
> "
>
> Do you think circumspect wording about DDoS like the above would still
> trigger an allergic reaction from some readers? Would this sort of text
> allay your concerns? Or are you adamant that there should be no mention of
> DDoS at all?

it's too easy to rathole on ddos; there are far too many ways to
create problems with it, and today it's not really that much of a
problem. There are tools in existence today to deal with the vast
majority of ddos problems; it really isn't a huge problem provided you
prepare for and understand the threat(s).

I suppose in summary: I'm not adamant about it one way or the other,
but I can see that it'll end up distracting from your actual
point/topic and thus cost you cycles you could have spent explaining
why ConEx/CC isn't just gussied-up 'long-distance toll settlement for
IP' (for instance, though I think ConEx itself, just marking packets,
doesn't really fall into the quoted phrase).

-chris

> Bob
>
>
>
> ________________________________________________________________
> Bob Briscoe,                                BT Innovate & Design
>

From rs@netapp.com  Tue Aug  3 03:45:34 2010
Return-Path: <rs@netapp.com>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id E602C3A68F0 for <conex@core3.amsl.com>; Tue,  3 Aug 2010 03:45:34 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -6.424
X-Spam-Level: 
X-Spam-Status: No, score=-6.424 tagged_above=-999 required=5 tests=[AWL=0.175,  BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id GMIVM7S4krYW for <conex@core3.amsl.com>; Tue,  3 Aug 2010 03:45:33 -0700 (PDT)
Received: from mx3.netapp.com (mx3.netapp.com [217.70.210.9]) by core3.amsl.com (Postfix) with ESMTP id 7AD1E3A6940 for <conex@ietf.org>; Tue,  3 Aug 2010 03:45:32 -0700 (PDT)
X-IronPort-AV: E=Sophos;i="4.55,308,1278313200"; d="scan'208";a="184009915"
Received: from smtp3.europe.netapp.com ([10.64.2.67]) by mx3-out.netapp.com with ESMTP; 03 Aug 2010 03:45:59 -0700
Received: from ldcrsexc2-prd.hq.netapp.com (emeaexchrs.hq.netapp.com [10.65.251.110]) by smtp3.europe.netapp.com (8.13.1/8.13.1/NTAP-1.6) with ESMTP id o73AgTUB013221; Tue, 3 Aug 2010 03:45:13 -0700 (PDT)
Received: from LDCMVEXC1-PRD.hq.netapp.com ([10.65.251.108]) by ldcrsexc2-prd.hq.netapp.com with Microsoft SMTPSVC(6.0.3790.3959);  Tue, 3 Aug 2010 11:44:53 +0100
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
Date: Tue, 3 Aug 2010 11:44:52 +0100
Message-ID: <5FDC413D5FA246468C200652D63E627A09C48FF7@LDCMVEXC1-PRD.hq.netapp.com>
In-Reply-To: <1280792823.4092.98.camel@e42.nyrhi.net>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [conex] Off-Topic: Smoothing Variations in BitRate
Thread-Index: AcsynMthh9dxStTSScmwQvx3hdV0iwATrAkQ
References: <201007231949.o6NJn38t014418@bagheera.jungle.bt.co.uk> <AANLkTinr6Y-mKrMNJPUwHuQMYc4C5dLUCdqbztvwmxA=@mail.gmail.com> <201007232235.o6NMZZ1e015875@bagheera.jungle.bt.co.uk> <20100723235447.GE69747@verdi> <793F49BA1FC821409F99F10862A0E4DB07B63225@FHDP1LUMXCV14.us.one.verizon.com> <5FDC413D5FA246468C200652D63E627A09C489DB@LDCMVEXC1-PRD.hq.netapp.com> <CC91BAF7-DCF0-4205-AB37-FD0FF010483D@cisco.com> <201008021926.o72JQhOe004211@bagheera.jungle.bt.co.uk> <1280792823.4092.98.camel@e42.nyrhi.net>
From: "Scheffenegger, Richard" <rs@netapp.com>
To: "Aki Nyrhinen" <anyrhine@cs.helsinki.fi>, "Bob Briscoe" <rbriscoe@jungle.bt.co.uk>
X-OriginalArrivalTime: 03 Aug 2010 10:44:53.0381 (UTC) FILETIME=[E7E1E350:01CB32F8]
X-Mailman-Approved-At: Wed, 04 Aug 2010 23:53:43 -0700
Subject: Re: [conex] Off-Topic: Smoothing Variations in BitRate
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Aug 2010 10:45:35 -0000

Bcc'ed ConEx, as this strays too far off topic for ConEx and into
ICCRG territory...

Hi Aki,


> -----Original Message-----
> From: Aki Nyrhinen [mailto:anyrhine@cs.helsinki.fi]
>
> In general, a step threshold marker will mark way more
> packets than tail-drop would drop. Simplifying, tail-drop
> will drop packets proportional to load when congested. A step
> threshold marker will mark ALL packets between the start and
> end of congestion independent of load and capacity between
> those two instants.

True; but in return, its implementation at wire speed on 10G, 40G and
100G interfaces would be much less complex. I agree that obtaining more
granular information from the bottleneck would be ideal, though.

> This is not a property we want a bottleneck to have in an
> environment where congestion control algorithms seek to cause
> load > 1 on the bottleneck. Yes I am serious, tail-drop is
> immeasurably better. TCP is probably the only thing I can
> imagine working reasonably well over the step threshold marker.

CBR wouldn't care much about step-threshold either.
>
> How much time is there between steps 3 and 7? Does this have
> anything to do with load at the bottleneck between those two
> steps? I'd say '1 RTT (tcp-view)' and 'no'. The load is
> (1+cwnd)/cwnd but cwnd packets are marked instead of a single
> one, which would be proportional to load. Due to
> self-clocking, a single tcp connection looks very much like
> CBR except for the once in RTT bump which periodically
> increases the bottleneck queue length.
>
> Relentless would respond to the step threshold marker by
> dropping its sending rate to zero (which may or may not be
> surprising) and anything more fine-grained than TCP would
> still suffer.
>
> I don't think smarter marking schemes necessarily imply AQM,
> random numbers or anything very complicated at all, but I
> really think we (me
> included) should actively forget the idea of step threshold
> being similar to tail-drop: the only similarity they have is
> "if(qlen > threshold)".
>

I believe one cannot look at AQM schemes completely independently of
the transport layer and its congestion control.

Currently, RED implementations have multiple tunables:

Min_th
Max_th
Mark_p (@max_th)
EWMA weight

(Max_th-Min_th is the real tunable, with Min_th only being an offset to
queue length, so it could be excluded.)

For none of these does a good, closed formula exist (AFAIK) for how to
set them when one doesn't run (and know about) the full e2e path.

However, all of these tunables directly influence the CC algorithm in
the transport layer.
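
For reference, the tunables above combine in the classic RED marking
rule roughly as follows (the constants here are illustrative, not
recommended settings):

```python
MIN_TH, MAX_TH = 5.0, 15.0   # queue thresholds, in packets
MAX_P = 0.1                  # Mark_p: marking probability as avg reaches Max_th
W = 0.002                    # EWMA weight

def ewma(avg, qlen, w=W):
    """Averaged queue length that the thresholds are compared against."""
    return (1.0 - w) * avg + w * qlen

def red_mark_prob(avg):
    """Marking probability as a function of the averaged queue length."""
    if avg < MIN_TH:
        return 0.0
    if avg >= MAX_TH:
        return 1.0   # (gentle RED would instead ramp up to 2*Max_th)
    return MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)
```

A step threshold is then the corner case Min_th == Max_th with
Mark_p = 1 and weight = 1.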

And to get back to your objection: by moving the averaging up from the
bottleneck to the transport layer, to cover, say, 10 RTTs, you can still
obtain a very fine-grained, congestion-related signal from a simple
step-response marking scheme. At the same time, as the end system
actually knows the RTT, it can optimize the weight for different flows
(i.e., low for short RTTs and high for long RTTs, so that the
responsiveness in absolute time (not RTTs) stays about the same).
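
A minimal sketch of that averaging-at-the-sender idea (TAU and the
weight formula are my assumptions for illustration, not from any
draft):

```python
import math

TAU = 1.0  # target smoothing time constant, in seconds

def weight_for_rtt(rtt):
    # one EWMA update per RTT; w = 1 - exp(-rtt/TAU) gives every flow
    # the same exponential decay per second of wall-clock time
    return 1.0 - math.exp(-rtt / TAU)

def smooth(mark_fractions, rtt):
    """Smoothed congestion estimate from per-RTT fractions of
    step-marked packets."""
    w = weight_for_rtt(rtt)
    est = 0.0
    for m in mark_fractions:
        est = (1.0 - w) * est + w * m
    return est
```

A short-RTT flow gets a small weight (many updates per second), a
long-RTT flow a large one, so both converge on roughly the same
absolute timescale.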

Also, remember that MulTFRC could do more educated things in response to
a more fine-grained congestion signal - even if the fine-grained signal
is only synthesized within the sender (after feedback from the
receiver).

I have to admit that I'm not an expert in control theory; but it would
appear to me that building a more suitable control strategy based on a
valid set of assumptions has the potential for better results than
relying on the notion that AQM knows what it's doing, even though each
independent entity can freely choose between 3 arbitrary values
(max-min; p; weight) when setting up AQM (unless an implementer has
already chosen some or all of them). And step-response would only be a
corner case of AQM (so nothing prevents one from using a setting of
(0; 1.0; 1.0) at the moment, other than potential implementation
limitations somewhere - such as impacting TCP-like flows independent of
their relative bandwidth through the bottleneck, as there is only a
single response, once per RTT, in TCP).

Perhaps I should take this to ICCRG though ;)

Richard Scheffenegger

From rbriscoe@jungle.bt.co.uk  Fri Aug  6 08:51:02 2010
Return-Path: <rbriscoe@jungle.bt.co.uk>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 824243A67D1 for <conex@core3.amsl.com>; Fri,  6 Aug 2010 08:51:02 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: 2.543
X-Spam-Level: **
X-Spam-Status: No, score=2.543 tagged_above=-999 required=5 tests=[AWL=-2.940,  BAYES_50=0.001, DNS_FROM_RFC_BOGUSMX=1.482, GB_SUMOF=5, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id Oa7Vd9eCzlTf for <conex@core3.amsl.com>; Fri,  6 Aug 2010 08:50:54 -0700 (PDT)
Received: from smtp2.smtp.bt.com (smtp2.smtp.bt.com [217.32.164.150]) by core3.amsl.com (Postfix) with ESMTP id 8245A3A67A2 for <conex@ietf.org>; Fri,  6 Aug 2010 08:50:53 -0700 (PDT)
Received: from i2kc06-ukbr.domain1.systemhost.net ([193.113.197.70]) by smtp2.smtp.bt.com with Microsoft SMTPSVC(6.0.3790.4675);  Fri, 6 Aug 2010 16:51:23 +0100
Received: from cbibipnt05.iuser.iroot.adidom.com ([147.149.196.177]) by i2kc06-ukbr.domain1.systemhost.net with Microsoft SMTPSVC(6.0.3790.4675); Fri, 6 Aug 2010 16:51:23 +0100
Received: From bagheera.jungle.bt.co.uk ([132.146.168.158]) by cbibipnt05.iuser.iroot.adidom.com (WebShield SMTP v4.5 MR1a P0803.399); id 1281109882583; Fri, 6 Aug 2010 16:51:22 +0100
Received: from MUT.jungle.bt.co.uk ([10.215.130.87]) by bagheera.jungle.bt.co.uk (8.13.5/8.12.8) with ESMTP id o76FpKPZ010840; Fri, 6 Aug 2010 16:51:20 +0100
Message-Id: <201008061551.o76FpKPZ010840@bagheera.jungle.bt.co.uk>
X-Mailer: QUALCOMM Windows Eudora Version 7.1.0.9
Date: Fri, 06 Aug 2010 16:51:26 +0100
To: Christopher Morrow <morrowc.lists@gmail.com>
From: Bob Briscoe <rbriscoe@jungle.bt.co.uk>
In-Reply-To: <AANLkTikZFvkOQjNLuasif+vAjeJSac1E-BqR6pptn=7p@mail.gmail.c om>
References: <201008031942.o73JgGSW018260@bagheera.jungle.bt.co.uk> <AANLkTikZFvkOQjNLuasif+vAjeJSac1E-BqR6pptn=7p@mail.gmail.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"; format=flowed
X-Scanned-By: MIMEDefang 2.56 on 132.146.168.158
X-OriginalArrivalTime: 06 Aug 2010 15:51:23.0609 (UTC) FILETIME=[388D3C90:01CB357F]
Cc: conex@ietf.org
Subject: Re: [conex] ConEx & DDoS
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 06 Aug 2010 15:51:02 -0000

Chris,

At 04:37 04/08/2010, Christopher Morrow wrote:
>(I think I sub'd to the list with this address...)
>
>On Tue, Aug 3, 2010 at 3:42 PM, Bob Briscoe <rbriscoe@jungle.bt.co.uk> wrote:
> > Chris,
> >
> > During the ConEx w-g session last Tuesday in Maastricht you suggested we
> > should not include DDoS mitigation as a use-case for ConEx. I was 
> willing to
> > agree as we don't need to court controversy.
>
>yup, no use ratholing if it's not central to the discussion.
>
> > However, the co-authors of draft-moncaster-conex-concepts-uses 
> have asked me
> > to float the idea that, altho we won't include DDoS as a use-case 
> in its own
> > right, we should mention it as an extreme case of two other use-cases. Let
> > me explain...
> >
> > The use-cases we plan to include are:
> > /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
> > 5. Use cases (Highlighting that this is neither an exhaustive list nor a
> > prescriptive list...)
> >  5.1  ConEx for better traffic Control
> >  a. Targeting the right traffic
> >  b. Encouraging (and eventually enforcing) better CC
> >  5.2 ConEx for better traffic monitoring
> >  a. For compliance with SLAs
> >  b. For assessing performance of your provider
> >  c. Monitoring congestion hotspots for targeted upgrades
> >  d. Monitoring congestion anomalies (Equipment problems or helping identify
> > DDoS)
> > /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
> >
> > We could include mentions of DDoS along the following lines (no need to
> > word-smith - I'm just trying to outline the concepts).
> >
> > 5.1b Encouraging (and eventually enforcing) better CC
> > "  ConEx information can be used as a control metric for making
> >   traffic control decisions, such as deciding which traffic to
> >   prioritise or to identify and block sources of persistent and
> >   damaging congestion.
>
>this, I think, falls into the category of things that Lee from TWC is
>interested in (and maybe his buddies at comcast/cox as well),
>understanding what traffic can suffer more loss without causing more
>end-user pain, and shifting that traffic to said profile.

It's no coincidence that Rich Woundy is a co-author.

Sure, Comcast have a current solution, but Rich is the first to say 
that it gives no encouragement (and actually punishes) approaches 
like LEDBAT, because it attributes blame for high utilisation by 
volume rather than congestion-volume. There's a list of other things 
Rich presented in the ConEx BoF that makes ConEx worth doing beyond 
what Comcast currently do.
<http://www.ietf.org/proceedings/76/slides/conex-3.pdf>

Whatever, I only included this text as a lead-up to the DDoS text later...


> >   Simple ingress policer mechanisms, such as those described in
> >   [Policing-freedom] and [re-ecn-motive], could control the
> >   overall volume of congestion entering a network from each user.
> >   Such a policer could lead to a number of beneficial outcomes:
>
>these seem to be of the flavor of things Comcast 'powerboost' does, at
>the CPE actually (if I understand their brand of magic correctly).

No, powerboost is very different.

[BTW, I and others did a start-up called Qariba back in 2001 which 
built a powerboost-like feature for cable networks, with a network 
API so it could be initiated from a CDN - we called it Broadband-800 
by analogy to 800 phone calls, because the server end was effectively 
temporarily buying more access capacity on the end-user's behalf. We 
had a lot of interest in the US cable industry at the time, but we 
got hit by the bubble bursting.]

Powerboost gives you access to extra capacity irrespective of whether 
you will contribute more to congestion. ConEx is much simpler and more generic.

> >   o Heavy users might be encouraged to shift their usage away from
> >     peak times in order to be able to transfer more data without
> >     triggering a response from the policer;
> >   o Users might be encouraged to use software that shifts its usage away
> >     from congestion peaks (shifting in time whether by hours or seconds
> >     [LEDBAT] or shifting to less congested routes [MPTCP]), again to
> >     transfer more data without triggering the policer;
> >   o Developers of operating systems might be encouraged to supply such
> >     software as the default;
> >   o If certain applications did not use a congestion responsive transport
> >     and caused high levels of congestion-bit-rate, the policer would
> >     eventually force the bit-rate to reduce in response to congestion.
>
>Today this is (the last bullet) the equivalent of classifying a type
>of traffic (port/protocol/src/dst as classification hints) into a
>'high loss' bucket/queue and just dropping faster/more of it. If the
>traffic is TCP you should get drops, backoff, sawtooth behaviour until
>you reach steady-state (maybe?) slower transfers.
>
> >   It is believed that a system of ConEx policers could be built that
> >   can verify the integrity of ConEx information and remove traffic
> >   that does not comply with the protocol [re-ecn-motive]. If such
> >   robustness against cheating is indeed possible, a ConEx policer would
> >   mitigate DDoS flooding attacks, at least to some extent, merely as a
> >   function of its ability to enforce a response to persistent and
> >   excessive congestion.
>
>So, there are surely cases of 'ddos' (or DoS) that include loud
>speakers... There are also many instances of DDoS that are many
>hundreds of thousands of (or millions) of very quiet voices that in
>total cause the DoS/DDoS effect.
>
>If you look at a system of marking of congestion information,
>depending upon where that marking happens, and on the traffic in
>question, there's no guarantee at all that the sources will be
>squelched.

In the following I'm going to talk about re-ECN, rather than ConEx, 
because we haven't defined ConEx yet...


>For example, imagine a DDoS of 1 RST pkt (could also do this with 1
>icmp-error-type message) per second from 1 million hosts across the
>whole of the network (a 'botnet' for instance). There will never be
>any packet sent back to the originators,

(BTW, re-ECN doesn't need any packet sent back to the originators - 
you might have misunderstood the design.)

>the rate will be low enough
>that unless there is coincident traffic to the victim from these hosts
>(in the same protocol/port profile probably) no signal will ever be
>seen that leads to squelching of traffic.

Yes, of course we've thought of this sort of attack. Even if we 
hadn't thought of this, since 2005 the security/DoS community have 
been thinking up attacks against re-ECN (pre-ConEx), so it would have 
been hard to miss such an obvious one.

If each bot was behind a 100Mb/s link, and everyday congestion levels 
were typically 0.2%, a reasonable config of basic re-ECN policer (I 
can give typical numbers if you want) would allow a 
congestion-bit-rate of 10Mb/s of *marked* packets for a minute, then 
30kb/s of sustained marked traffic after that.

If your attack were a flooding attack with MTU size data packets 
(12,000b), rather than an RST attack (which can be detected and 
ignored), each bot would have to congestion mark all the packets to 
have any flooding effect. If each bot could draw congestion from its 
policer allowance at 30kb/s (see above), a basic re-ECN policer would 
allow it to sustain an attack at 1 pkt every 0.4sec - a little more 
than twice as fast as your attack.
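To make that arithmetic concrete, here is a rough back-of-envelope sketch in Python. All the rates and sizes are the assumed figures from the scenario above (a hypothetical policer configuration, not parameters of any real implementation):

```python
# Assumed figures from the scenario above: a re-ECN policer modelled
# as a token bucket on congestion-marked traffic.
BURST_RATE_BPS = 10_000_000   # 10 Mb/s of marked packets allowed...
BURST_SECONDS = 60            # ...for about a minute
FILL_RATE_BPS = 30_000        # 30 kb/s sustained marked traffic thereafter
PKT_BITS = 12_000             # MTU-size data packet (1500 bytes)

bucket_depth_bits = BURST_RATE_BPS * BURST_SECONDS  # 600 Mb initial allowance

# Sustained attack rate if every packet must be marked to have effect:
pkts_per_second = FILL_RATE_BPS / PKT_BITS          # 2.5 marked pkt/s
seconds_per_pkt = 1 / pkts_per_second               # one packet every 0.4 s

# Aggregate across an assumed 500,000-machine botnet:
aggregate_bps = FILL_RATE_BPS * 500_000             # 15 Gb/s

# How much the bar is raised vs. unpoliced flooding from a 100 Mb/s link:
bar_raised = 100_000_000 / FILL_RATE_BPS            # ~3,333x

print(seconds_per_pkt, aggregate_bps, round(bar_raised))
```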

Agreeing on the 'largest' botnet ever seen isn't easy, but the 
Mariposa botnet dismantled earlier this year involved ~12M separate 
IP addresses. No-one can know whether that implied 12M separate 
machines, but I doubt it - source address spoofing could have been 
used. This is relevant because a re-ECN policer would sit at the 
physical access - nothing in re-ECN depends on valid source 
addresses. If each machine was spoofing 24 other addresses (a total 
guess), that's about 500,000 real machines.

Let's assume it gets much harder to marshal a larger army than the 
one said to be the largest yet. Then, a basic re-ECN deployment would 
contain an attack to about 30kb/s x 500,000 = 15Gb/s (which is 
similar to your scenario of 12kb/s x 1M = 12Gb/s).

In summary, just a basic re-ECN policer that isn't even looking for 
DoS can raise the bar to require a botnet about 3,000 times bigger 
than without re-ECN [because that's the ratio of the peak traffic 
each site can send (100Mb/s) divided by the averaged rate of losses 
in normal traffic (30kb/s)].

Yes, this isn't particularly impressive. But it's not meant to be. 
This is just what you get without even trying to deal with DDoS specifically.

i) The most obvious thing to add to a basic re-ECN policer would be a 
simple anomaly detector triggered by packets to one destination 
prefix with >10% of them marked for expected congestion. That could 
trigger a much more stringent limit just for that destination prefix.
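As a toy illustration of (i), the trigger could be as simple as counting marked packets per destination prefix (the 10% threshold and the packet representation are assumptions for illustration, not a worked design):

```python
# Toy sketch of the anomaly detector in (i): flag destination prefixes
# where more than 10% of recent packets carry a congestion mark.
from collections import defaultdict

MARK_THRESHOLD = 0.10  # assumed trigger level from the text above

def find_suspect_prefixes(packets):
    """packets: iterable of (dest_prefix, is_marked) tuples."""
    total = defaultdict(int)
    marked = defaultdict(int)
    for prefix, is_marked in packets:
        total[prefix] += 1
        if is_marked:
            marked[prefix] += 1
    return {p for p in total if marked[p] / total[p] > MARK_THRESHOLD}

# e.g. one prefix sees 50% marked traffic, another only 5%:
sample = [("198.51.100.0/24", True)] * 5 + [("198.51.100.0/24", False)] * 5 \
       + [("203.0.113.0/24", True)] * 1 + [("203.0.113.0/24", False)] * 19
print(find_suspect_prefixes(sample))  # -> {'198.51.100.0/24'}
```

A real detector would of course work over sliding time windows and on bytes rather than packet counts; this just shows the shape of the test.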

ii) If the victim network deployed ConEx monitors around its border, 
attack traffic would really stand out obviously because nearly all 
attack packets would be marked for rest-of-path congestion. These 
monitors could then trigger similar tight limits on 
congestion-bit-rate to that destination.

iii) I've assumed a homogeneous botnet. In practice many would be on 
slower access links, and many would be within larger sites (e.g. 
campus nets) sharing an aggregated congestion allowance at the access 
from the campus to the Internet. I think my rough calculation is 
closer to 'worst case' than 'typical'.

>The victim still suffers
>~1mpps of traffic, and actually conex/CC just makes the problem far
>worse for the victim as all of his real-user traffic will be marked
>(over time) and thus squelched out while the DDoS continues to flow
>unabated.

Nope. Importantly, normal /unsustained/ traffic to that destination 
would take a while (1 minute in my example) to hit the tighter 
congestion policer limits. So regular usage would be reasonably 
unaffected. Whereas the bots have to sustain the attack to be effective.


> >
> >   It would be foolhardy to claim that it will be possible to make ConEx
> >   invulnerable to all cheats and attacks, even though it has been hard to
> >   attack it so far. Nonetheless, ConEx would still be useful even if a
> >   specific deployment of ConEx policers contained vulnerabilities. ConEx
> >   could still prevent congestion collapse due to careless omission of
> >   congestion control, or due to release of software containing an
> >   accidental congestion control bug.
>
>I think that I nullified the above paragraph actually... except for a
>software bug case, though I'd argue that in the case of the UW NTP
>server incident ConEx/CC wouldn't have helped there either as the
>broken software would still have been broken and the real-users of the
>system would have just been CC"d out of existence.

Nope. ConEx congestion policers would be run by the network operator 
and effectively take over congestion control if hosts fail to do it 
for a sustained period.

Yes, you could get bugs in ConEx policers. But it's unlikely (though 
not impossible) that would happen at the same time as a widespread 
bug in hosts. Safety in diversity.

Yes, the ConEx marking process on the hosts might be bugged. But, as 
with DoS protection, if a host or router is being flooded and has to 
drop stuff, it is easy to preferentially drop unmarked ConEx packets 
(or non-ConEx packets) first.

Agreed, this takes a leap-of-faith to believe a network might deploy 
all this, but it only has to deploy its own protections for itself, 
which is a good deployment property.


> > "
> >
> > 5.2d: Monitoring congestion anomalies
> > "  One of the most useful things ConEx provides is the ability to
> >   monitor the amount of congestion entering a network. Thus ConEx
> >   would add congestion to the information used by existing anomaly
> >   detection systems, thus greatly improving their ability to discriminate
> >   between pathological and benign anomalies. Such congestion-carrying
> >   anomalies might be due to accidental misconfigurations in another
> >   network, or deliberate malicious attacks.

I must also add that re-ECN is only handling the flooding aspect of 
an attack, not the /information/ in the packets (the RST flag in your 
example). Of course a DDoS protection system needs to take this 
information into account.

What 5.2d says is that all ConEx aims to do is add rest-of-path 
congestion to that information, which is a powerful addition to the 
network's visibility. Particularly because info in the payload need 
not be visible to the network at all.


>I'm on board with letting folks know there is congestion, I'm not
>convinced making decisions based on this is simple/helpful, yet.
>Adding this information to packets, provided I don't need new
>silicon/cpu-cycles/state to do this doesn't strike me as hard,

Good

>I can
>imagine that most backbone providers won't care and probably won't
>implement the marking, but... I could be wrong.

No network has to implement anything if they don't want to. It still 
works for others who do. I also don't imagine backbone providers will 
care about this.

It's easier for a backbone to throw capacity at the problem when 
they're running a few large links rather than loads of smaller links. 
So the smaller links around the network edge are always going to 
bottleneck congestion on behalf of cores and backbones.


>Marking something purely 'congested path' or no though isn't very
>helpful (there are almost always) many hops along a path and many
>paths a packet/flow may take. At a single point in time one part of a
>path may be congested, but there isn't any guarantee that part will be
>affected even by the 1RTT time required to do something. (last-mile
>problems aside, nothing fixes a 56k modem except... more bandwidth)

I think you might have misunderstood. ConEx isn't intended to change 
who does congestion control. Hosts still do that on fast timescales.

ConEx is merely intended to allow the network to count up how much 
congestion is still contributed to by hosts, so the network can judge 
how effectively the host is doing congestion control. That can 
ultimately be used to take over control (via a network-based 
congestion policer) if the host is persistently being profligate 
(whether through selfishness, malice or accident).


>Being able to use information about congestion along a path to more
>efficiently use the available bandwidth is a fine plan, less
>re-transmits is a good plan.
>
> >   ConEx provides the additional benefit that it exposes congestion
> >   information as packets enter a network, not only at the point of
> >   congestion. Therefore it could be feasible for anomaly detection
>
>ok, this is a point I need clarification on. Where is the congestion
>it exposes?
>   'there is congestion'
>      or
>   'at hop 12 there was congestion, you are at hop 22, fyi' (honestly
>it'd be better to tell me 'as 3 is congested' I think)

Re-ECN deliberately doesn't tell anyone exactly where the congestion 
is - that would reveal too much. Whatever your viewing point, it only 
tells you how much congestion there is upstream of you and how much downstream.

Rather than re-write all that's ever been written on ConEx in one 
email, can I point you at a section of a paper to get this?:
         Section 4.2 of
         "Using Self-interest to Prevent Malice;
         Fixing the Denial of Service Flaw of the Internet"
         <http://www.bobbriscoe.net/projects/refb/index.html#refb_dplinc>

This is the only paper where the full re-ECN protocol is described 
but in outline form, to save you having to wade through the detailed 
protocol spec. It's also the only paper about re-ECN and DDoS (other 
than my later PhD thesis).

Plenty of other papers describe the two main codepoints of the protocol:
- Expected whole path congestion, W [inserted by the sender and immutable]
- (ECN) congestion experienced upstream so far, U [marks added by routers]
These are enough to answer your question above: a monitor at any 
point on the path can meter both W & U, so it can calculate expected 
downstream congestion on the rest-of-the-path as well: D = W - U.
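Here is how such a monitor's bookkeeping might look, as a sketch (the per-packet representation is invented for illustration; the real wire encoding is in the re-ECN spec):

```python
# Sketch of a path monitor using the two codepoints described above:
# meter bytes carrying the sender's expected-whole-path mark (W) and
# bytes carrying the upstream congestion-experienced mark (U), then
# infer expected rest-of-path congestion D = W - U.
def downstream_congestion(packets):
    """packets: iterable of (size_bytes, has_w_mark, has_u_mark)."""
    total = w_bytes = u_bytes = 0
    for size, has_w, has_u in packets:
        total += size
        if has_w:
            w_bytes += size
        if has_u:
            u_bytes += size
    return (w_bytes - u_bytes) / total  # D, as a fraction of bytes

# e.g. sender declares 2% whole-path congestion; 1% is already marked
# upstream of the monitor, so 1% is expected downstream:
pkts = ([(1500, True, False)] * 2 + [(1500, False, True)] * 1
        + [(1500, False, False)] * 97)
print(downstream_congestion(pkts))  # -> 0.01
```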

But only the above DDoS paper explains the third and last part of the 
protocol (initial credit) which enables re-ECN to work without any 
feedback at all - you need that for the DDoS case.


>If the information as traffic enters my network is 'somewhere there is
>congestion!' I'm not sure I can do much aside from buffer (not going
>to happen for very long buffers are expensive) or WRED/drop packets.
>If the information is that at a router-hop 10 hops back there was
>congestion I may be able to either WRED traffic to that destination or
>maybe shuttle packets on slightly longer paths if they are available
>(though that seems overly complex as a solution, to me).

See above: We're not expecting the network to do fine-grained 
congestion control per flow - that's for the host to decide. We're 
wanting to look at the sum of all the congestion that traffic from a 
site (household, campus) is contributing to; everything on the 
Internet side of its access boundary.

Then at the next network border, the pair of networks at that border 
can see how much congestion is on each side of that border - for all 
traffic, wherever it is destined. And so on.

That's all you need - necessary and sufficient - to regulate 
profligate users (or bugged, or malicious) and to regulate networks 
that allow their users to be profligate (or bugged or malicious).


> >   systems to use ConEx information to detect and shut down dangerous
> >   floods of congestion at the point where traffic enters a network.
> > "
> >
> > Do you think circumspect wording about DDoS like the above would still
> > trigger an allergic reaction from some readers? Would this sort of text
> > allay your concerns? Or are you adamant that there should be no mention of
> > DDoS at all?
>
>it's too easy to rathole on ddos, there are far too many ways to
>create problems with it, and today it's not really that much of a
>problem. There are tools in existence today to deal with the vast
>majority of ddos problems, it really isn't a huge problem provided you
>prepare and understand the threat(s).
>
>I suppose in summary: I'm not adamant about it one way or the other,
>but I can see that it'll end up distracting from your actual
>point/topic and thus cost you cycles you could have spent explaining
>why conex/cc isn't just gussied up 'longdistance toll settlement for
>IP' (for instance, though I think conex itself just marking packets
>doesn't really fall into the quoted phrase).

Yes, I understand you're trying to help us avoid interminable arguments.

I guess what I'm saying is: We don't just change IP for chuckles. The 
job of ConEx is to limit congestion caused by profligacy, malice and 
accidents. If it can only do that against people who don't try too 
hard to push back, we should not spend our precious time on ConEx.

IOW, we /should/ have some level of argument about whether ConEx can 
achieve what it aims to achieve. Just not too much at this early 
stage, when we haven't even defined the ConEx protocol (as opposed to 
the re-ECN protocol).


Bob


>-chris
>
> > Bob
> >
> >
> >
> > ________________________________________________________________
> > Bob Briscoe,                                BT Innovate & Design
> >

________________________________________________________________
Bob Briscoe,                                BT Innovate & Design  


From ingemar.s.johansson@ericsson.com  Wed Aug 11 05:14:58 2010
Return-Path: <ingemar.s.johansson@ericsson.com>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id C39093A6828 for <conex@core3.amsl.com>; Wed, 11 Aug 2010 05:14:58 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.57
X-Spam-Level: 
X-Spam-Status: No, score=-3.57 tagged_above=-999 required=5 tests=[AWL=-0.970,  BAYES_00=-2.599]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id u4O17kpw8qEv for <conex@core3.amsl.com>; Wed, 11 Aug 2010 05:14:57 -0700 (PDT)
Received: from mailgw9.se.ericsson.net (mailgw9.se.ericsson.net [193.180.251.57]) by core3.amsl.com (Postfix) with ESMTP id F1EBF3A6781 for <conex@ietf.org>; Wed, 11 Aug 2010 05:14:56 -0700 (PDT)
X-AuditID: c1b4fb39-b7b91ae000001aef-bd-4c629464d18c
Received: from esessmw0184.eemea.ericsson.se (Unknown_Domain [153.88.253.124]) by mailgw9.se.ericsson.net (Symantec Mail Security) with SMTP id BA.07.06895.464926C4; Wed, 11 Aug 2010 14:15:32 +0200 (CEST)
Received: from ESESSCMS0366.eemea.ericsson.se ([169.254.1.96]) by esessmw0184.eemea.ericsson.se ([153.88.115.81]) with mapi; Wed, 11 Aug 2010 14:15:32 +0200
From: Ingemar Johansson S <ingemar.s.johansson@ericsson.com>
To: "conex@ietf.org" <conex@ietf.org>
Date: Wed, 11 Aug 2010 14:15:31 +0200
Thread-Topic: Joint policy statement on open internet between Verizon and Google
Thread-Index: AQHLOU7kWKX56uqb0UyZUgW4yy1XGA==
Message-ID: <DBB1DC060375D147AC43F310AD987DCC0A3B30019E@ESESSCMS0366.eemea.ericsson.se>
Accept-Language: sv-SE, en-US
Content-Language: sv-SE
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: sv-SE, en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-Brightmail-Tracker: AAAAAA==
Subject: [conex] Joint policy statement on open internet between Verizon and Google
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 11 Aug 2010 12:14:58 -0000

Hi

This may be an interesting read for some of you
http://policyblog.verizon.com/BlogPost/742/JointPolicyProposalforanOpenInternet.aspx

Regards
Ingemar

From john@jlc.net  Wed Aug 11 06:50:04 2010
Return-Path: <john@jlc.net>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id B98853A68EA for <conex@core3.amsl.com>; Wed, 11 Aug 2010 06:50:04 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -104.059
X-Spam-Level: 
X-Spam-Status: No, score=-104.059 tagged_above=-999 required=5 tests=[AWL=-0.060, BAYES_50=0.001, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id PFhI-hnQJdJ0 for <conex@core3.amsl.com>; Wed, 11 Aug 2010 06:50:03 -0700 (PDT)
Received: from mailhost.jlc.net (mailhost.jlc.net [199.201.159.4]) by core3.amsl.com (Postfix) with ESMTP id 903AB3A6900 for <conex@ietf.org>; Wed, 11 Aug 2010 06:50:03 -0700 (PDT)
Received: by mailhost.jlc.net (Postfix, from userid 104) id BAA3D33C7E; Wed, 11 Aug 2010 09:50:39 -0400 (EDT)
Date: Wed, 11 Aug 2010 09:50:39 -0400
From: John Leslie <john@jlc.net>
To: Ingemar Johansson S <ingemar.s.johansson@ericsson.com>
Message-ID: <20100811135039.GC16820@verdi>
References: <DBB1DC060375D147AC43F310AD987DCC0A3B30019E@ESESSCMS0366.eemea.ericsson.se>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <DBB1DC060375D147AC43F310AD987DCC0A3B30019E@ESESSCMS0366.eemea.ericsson.se>
User-Agent: Mutt/1.4.1i
Cc: "conex@ietf.org" <conex@ietf.org>
Subject: Re: [conex] Joint policy statement on open internet between Verizon and Google
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 11 Aug 2010 13:50:04 -0000

Ingemar Johansson S <ingemar.s.johansson@ericsson.com> wrote:
> 
> http://policyblog.verizon.com/BlogPost/742/JointPolicyProposalforanOpenInternet.aspx

   I encourage folks to read the proposal itself (not second-hand reports)
at:

http://www.scribd.com/doc/35599242/Verizon-Google-Legislative-Framework-Proposal

   IMHO, it reads as 100% capitulation by Google to what Verizon wants.
Verizon can do anything it wants in wireless (which is where they see
their future); the FCC can't write any rules; and the FCC can only ask
Verizon to show that a service is truthfully advertised, on a case-by-
case basis.

   As far as any interaction with our work in ConEx, _anything_ we might
standardize is permissible for any wireless or wireline provider to do.

--
John Leslie <john@jlc.net>

From swmike@swm.pp.se  Thu Aug 12 03:27:46 2010
Return-Path: <swmike@swm.pp.se>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id EF9CE3A6782 for <conex@core3.amsl.com>; Thu, 12 Aug 2010 03:27:46 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.588
X-Spam-Level: 
X-Spam-Status: No, score=-2.588 tagged_above=-999 required=5 tests=[AWL=0.011,  BAYES_00=-2.599]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id a1d0ZOSpeYL6 for <conex@core3.amsl.com>; Thu, 12 Aug 2010 03:27:46 -0700 (PDT)
Received: from uplift.swm.pp.se (ipv6.swm.pp.se [IPv6:2a00:801::f]) by core3.amsl.com (Postfix) with ESMTP id CCBC53A6765 for <conex@ietf.org>; Thu, 12 Aug 2010 03:27:45 -0700 (PDT)
Received: by uplift.swm.pp.se (Postfix, from userid 501) id 4F5CFA6; Thu, 12 Aug 2010 12:28:22 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1]) by uplift.swm.pp.se (Postfix) with ESMTP id 4EECD9E for <conex@ietf.org>; Thu, 12 Aug 2010 12:28:22 +0200 (CEST)
Date: Thu, 12 Aug 2010 12:27:04 +0200 (CEST)
From: Mikael Abrahamsson <swmike@swm.pp.se>
To: conex@ietf.org
Message-ID: <alpine.DEB.1.10.1008121058040.8562@uplift.swm.pp.se>
User-Agent: Alpine 1.10 (DEB 962 2008-03-14)
Organization: People's Front Against WWW
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; format=flowed; charset=US-ASCII
ReSent-Date: Thu, 12 Aug 2010 12:28:17 +0200 (CEST)
ReSent-From: Mikael Abrahamsson <swmike@swm.pp.se>
ReSent-To: conex@ietf.org
ReSent-Subject: comments on draft-conex-mechanism-00.txt
ReSent-Message-ID: <alpine.DEB.1.10.1008121228170.8562@uplift.swm.pp.se>
ReSent-User-Agent: Alpine 1.10 (DEB 962 2008-03-14)
Subject: [conex] comments on draft-conex-mechanism-00.txt
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 12 Aug 2010 10:27:47 -0000

Hi.

Just subscribed to the list (missed the start of this WG), sorry, can't 
reply to another thread because I don't have the original email.

So, I've started reading draft-conex-mechanism-00.txt. I object to the 
writing:

"For an operator facing congestion caused
    by other operators' networks, building out its own capacity is
    unlikely to solve the congestion problem."

If traffic isn't DDoS or caused by malware, it's traffic the customer 
wanted. The traffic isn't caused by "other operators' networks", it's 
caused by customer interaction with customers on other ISPs' networks, thus 
something the customer wants to do. The above wording perhaps has a place 
in a text sent to the regulator by an entity that wants to justify its 
actions and avoid regulation or repercussions, but I don't think it is 
appropriate in an IETF document.

Oh well, over to the more technical part:

7. Use cases. I don't agree with the three approaches. Points 1 and 3 mean 
a router has to handle flows, and this doesn't scale. Point 2 involves the 
"router" being congested. Perhaps what's meant is that a link connected to 
the router is congested on egress (because that's where the queue will be)?

As I've said before, we're going more and more to simple cheap devices 
with small buffers and forwarding tables that are really quick (TCAM or 
the like). These devices are not flow aware and they don't really do RED 
either (it's pointless considering their small buffers). It's an important 
consideration for this wg how this should be handled. I don't see any 
discussion about it...

7.1. Here it's said that flow-aware equipment is expensive, that's good. 
It's important to remember. Again, there is talk about "congested 
routers" which I think is the wrong term.

8. Here is a good point about congestion not being a secret. The problem 
is that the IP stack knows about it, but the user does not (apart from 
deducing it from how the application is behaving ("slow")). As some 
have seen from my postings on other IETF lists, I think there should be 
user insight into performance (metrics how the IP stack is behaving so 
this can be exposed to the user). Could this be part of the work done in 
this WG, actually exposing the CONEX performance data to the user? I think 
this is an important point.

I don't really object to the work being done, it's just that since ECN 
hasn't caught on in real life (it's in most end user OSes nowadays, I 
don't know of many network devices which are ECN aware) I don't really see 
how the new proposals are actually going to be deployed. Are the hardware 
guys involved in the discussions so we know how hard things are to 
implement?

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se


From toby@moncaster.com  Thu Aug 12 04:07:36 2010
Return-Path: <toby@moncaster.com>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 9E3F43A6982 for <conex@core3.amsl.com>; Thu, 12 Aug 2010 04:07:36 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.249
X-Spam-Level: 
X-Spam-Status: No, score=-2.249 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, HELO_EQ_DE=0.35]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id qZyFfo2XJGUd for <conex@core3.amsl.com>; Thu, 12 Aug 2010 04:07:35 -0700 (PDT)
Received: from moutng.kundenserver.de (moutng.kundenserver.de [212.227.126.186]) by core3.amsl.com (Postfix) with ESMTP id 9FDA93A67FE for <conex@ietf.org>; Thu, 12 Aug 2010 04:07:34 -0700 (PDT)
Received: from TobysHP (host86-170-52-148.range86-170.btcentralplus.com [86.170.52.148]) by mrelayeu.kundenserver.de (node=mreu2) with ESMTP (Nemesis) id 0MQJyK-1OMNgy2LPQ-00UKO1; Thu, 12 Aug 2010 13:08:08 +0200
From: "Toby Moncaster" <toby@moncaster.com>
To: "'Mikael Abrahamsson'" <swmike@swm.pp.se>
References: <alpine.DEB.1.10.1008121058040.8562@uplift.swm.pp.se>
In-Reply-To: <alpine.DEB.1.10.1008121058040.8562@uplift.swm.pp.se>
Date: Thu, 12 Aug 2010 12:08:06 +0100
Message-ID: <000f01cb3a0e$a447c070$ecd74150$@com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Office Outlook 12.0
thread-index: Acs6CRpEcdwHpR9qRXGHG2WIUZKlGwAAuTtw
Content-Language: en-gb
X-Provags-ID: V02:K0:9GIWdVdOrR1Louzv6ptnP47fwIAVRlV0zePpkl4mGFI UsxN2mwqozsOeOKu5dvrSMAELlP0oPyPxMTZO5i2kr6CYtHHjh Wy4h49m+GZtKYVWcJAmrdEjPuyjY8FrZcVww/PgJj+LlL80Ga/ 7e+C+J/OjTI7jKCPFVIV/WT4fBClKu6nGwU8euhzee0yLrNMWd v4vBOou/bURTzt7VPWnIB1QPf4nbN2vcVWCZ8UIYT8=
Cc: conex@ietf.org
Subject: Re: [conex] comments on draft-conex-mechanism-00.txt
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 12 Aug 2010 11:07:36 -0000

Hi Mikael,

Thanks for the comments. However we have written two major revisions since
the version you read. It would probably be more useful if you read the
updated version (which has a different filename...).

<http://moncaster.com/conex/draft-moncaster-conex-concepts-uses-01.txt>
<http://moncaster.com/conex/draft-moncaster-conex-concepts-uses-01.html>

There is a -02 likely to ship by early September and there will hopefully be
a separate abstract mechanism draft released before the Fall IETF meeting.

The aim is to try and make ConEx as independent of ECN as possible - ECN
would provide a useful additional signal but the system could work with just
taildrop... Bob Briscoe (who is away this week) can explain to you how ConEx
actually provides a motivation to deploy ECN - the main reason that ECN has
not been deployed (or not turned on) is that the network currently gets
little or no benefit from it. ConEx can provide a benefit by giving the
network more information...
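The independence from ECN can be sketched in a toy example (illustrative only, not the ConEx wire format or any draft's actual encoding): a sender's congestion accounting works the same whether the feedback arrives as ECN marks or as losses inferred under plain taildrop.

```python
# Toy illustration (not the ConEx wire encoding): the sender's congestion
# accounting is identical whether the feedback is ECN echoes or losses
# inferred at the receiver under plain taildrop.

def congestion_volume(acked_bytes, ecn_marked, lost_bytes):
    """Bytes of congestion to re-echo upstream: ECN-marked bytes plus
    bytes presumed dropped. Without ECN, ecn_marked is all False and
    the count degenerates to pure loss volume."""
    marked = sum(b for b, m in zip(acked_bytes, ecn_marked) if m)
    return marked + lost_bytes

# With ECN: two of four 1500-byte segments marked, none lost.
assert congestion_volume([1500] * 4, [False, True, True, False], 0) == 3000
# Taildrop only: no marks, one 1500-byte segment lost.
assert congestion_volume([1500] * 4, [False] * 4, 1500) == 1500
```

Either way the network gets the same kind of volume-of-congestion signal, which is the point: ECN sharpens it but is not required.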

Regarding your comment about "simple cheap devices with small buffers" it
would be useful if you could give some good references to such devices. I am
assuming these are mainly aimed at core networks? We already know that most
congestion comes from the access and backhaul so it is those that are really
the key for ConEx...

{speaking as an individual...} ConEx certainly can be used to provide a user
with more information and that information is expected to influence a user's
behaviour. If you read section 6.2 of
<http://bobbriscoe.net/projects/refb/polfree_rearch08.pdf> it mentions how a
user might be able to make some use of the information. Realistically I
would expect the application writers to provide the interface, for instance
you might get an application that intelligently manages the user's
congestion control, allowing the user to prioritise their applications, or
which uses mouse focus to apply the priority, etc... It's not that this
information is purposely hidden from the user; rather, up till now no one has
thought to make it available...

Toby

> -----Original Message-----
> From: conex-bounces@ietf.org [mailto:conex-bounces@ietf.org] On Behalf
> Of Mikael Abrahamsson
> Sent: 12 August 2010 11:27
> To: conex@ietf.org
> Subject: [conex] comments on draft-conex-mechanism-00.txt
> 
> 
> Hi.
> 
> Just subscribed to the list (missed the start of this WG), sorry, can't
> reply to another thread because I don't have the original email.
> 
> So, I've started reading draft-conex-mechanism-00.txt. I object to the
> writing:
> 
> "For an operator facing congestion caused
>     by other operators' networks, building out its own capacity is
>     unlikely to solve the congestion problem."
> 
> If traffic isn't DDoS or caused by malware, it's traffic the customer
> wanted. The traffic isn't caused by "other operators' networks", it's
> caused by customer interaction with customers on other ISPs networks,
> thus
> something the customer wants to do. The above writing perhaps has a place
> in a text sent to the regulator by an entity that wants to justify its
> actions and avoid regulation or repercussions, but I don't think it is
> appropriate in an IETF document.
> 
> Oh well, over to the more technical part:
> 
> 7. Use cases. I don't agree with the three approaches. Points 1 and 3 mean
> a router has to handle flows, which doesn't scale. Point 2 involves the
> "router" being congested. Perhaps what's meant is that a link connected to
> the router is congested on egress (because that's where the queue will be)?
> 
> As I've said before, we're going more and more to simple cheap devices
> with small buffers and forwarding tables that are really quick (TCAM or
> alike). These devices are not flow aware and they don't really do RED
> either (it's pointless considering their small buffers). It's an important
> consideration for this WG how this should be handled. I don't see any
> discussion about it...
> 
> 7.1. Here it's said that flow-aware equipment is expensive; that's good.
> It's important to remember. Again, there is talk about "congested
> routers", which I think is the wrong term.
> 
> 8. Here is a good point about congestion not being a secret. The problem
> is that the IP stack knows about it, but the user does not (apart from
> deducing this from how the application is behaving ("slow")). As some
> have seen from my postings on other IETF lists, I think there should be
> user insight into performance (metrics for how the IP stack is behaving, so
> this can be exposed to the user). Could this be part of the work done in
> this WG, actually exposing the ConEx performance data to the user? I think
> this is an important point.
> 
> I don't really object to the work being done, it's just that since ECN
> hasn't caught on in real life (it's in most end user OSes nowadays, I
> don't know many network devices which are ECN aware) I don't really see
> how the new proposals are actually going to be deployed? Are the hardware
> guys involved in the discussions so we know how hard things are to
> implement?
> 
> --
> Mikael Abrahamsson    email: swmike@swm.pp.se
> 
> _______________________________________________
> conex mailing list
> conex@ietf.org
> https://www.ietf.org/mailman/listinfo/conex


From swmike@swm.pp.se  Thu Aug 12 04:26:30 2010
Return-Path: <swmike@swm.pp.se>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 8B1F43A6866 for <conex@core3.amsl.com>; Thu, 12 Aug 2010 04:26:30 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.588
X-Spam-Level: 
X-Spam-Status: No, score=-2.588 tagged_above=-999 required=5 tests=[AWL=0.011,  BAYES_00=-2.599]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id TAmQif+MNgeb for <conex@core3.amsl.com>; Thu, 12 Aug 2010 04:26:29 -0700 (PDT)
Received: from uplift.swm.pp.se (ipv6.swm.pp.se [IPv6:2a00:801::f]) by core3.amsl.com (Postfix) with ESMTP id 09B083A6857 for <conex@ietf.org>; Thu, 12 Aug 2010 04:26:29 -0700 (PDT)
Received: by uplift.swm.pp.se (Postfix, from userid 501) id 3513EA5; Thu, 12 Aug 2010 13:27:05 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1]) by uplift.swm.pp.se (Postfix) with ESMTP id 3418A9E; Thu, 12 Aug 2010 13:27:05 +0200 (CEST)
Date: Thu, 12 Aug 2010 13:27:05 +0200 (CEST)
From: Mikael Abrahamsson <swmike@swm.pp.se>
To: Toby Moncaster <toby@moncaster.com>
In-Reply-To: <000f01cb3a0e$a447c070$ecd74150$@com>
Message-ID: <alpine.DEB.1.10.1008121314400.8562@uplift.swm.pp.se>
References: <alpine.DEB.1.10.1008121058040.8562@uplift.swm.pp.se> <000f01cb3a0e$a447c070$ecd74150$@com>
User-Agent: Alpine 1.10 (DEB 962 2008-03-14)
Organization: People's Front Against WWW
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
Cc: conex@ietf.org
Subject: Re: [conex] comments on draft-conex-mechanism-00.txt
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 12 Aug 2010 11:26:30 -0000

On Thu, 12 Aug 2010, Toby Moncaster wrote:

> Thanks for the comments. However we have written two major revisions since
> the version you read. It would probably be more useful if you read the
> updated version (which has a different filename...).
>
> <http://moncaster.com/conex/draft-moncaster-conex-concepts-uses-01.txt>
> <http://moncaster.com/conex/draft-moncaster-conex-concepts-uses-01.html>

Thanks, will read and comment separately.

> The aim is to try and make ConEx as independent of ECN as possible - ECN 
> would provide a useful additional signal but the system could work with 
> just taildrop... Bob Briscoe (who is away this week) can explain to you 
> how ConEx actually provides a motivation to deploy ECN - the main reason 
> that ECN has not been deployed (or not turned on) is that the network 
> currently gets little or no benefit from it. ConEx can provide a benefit 
> by giving the network more information...

What I have a hard time understanding is exactly what the network should do 
with this information.

> Regarding your comment about "simple cheap devices with small buffers" 
> it would be useful if you could give some good references to such 
> devices. I am assuming these are mainly aimed at core networks? We 
> already know that most congestion comes from the access and backhaul so 
> it is those that are really the key for ConEx...

Well, I know of a lot of networks that are built in a hierarchy like this, 
connected by dark/grey fiber:

Residential household
(10/100)
Basement switches connected in a ring
(gig)
Larger L3 switch
(10GE)
Core, which might consist of just more, larger L3 switches.
(multiple 10GEs)

In the above, taking Cisco as an example, these would be ME3400 for the 
basement, 6500/7600 for the larger 10GE switch, and the core could be that as 
well. Most of the WS-67xx linecards have < 5 ms of buffers and very little 
queue management. In this kind of network there are very few ways of 
signalling congestion.
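For a sense of scale, "< 5 ms of buffers" is easy to back out from buffer size and line rate (the figures below are illustrative, not vendor specifications):

```python
# Back-of-envelope check (illustrative figures, not vendor specs): how many
# milliseconds of line-rate traffic a small linecard buffer can absorb.

def buffer_ms(buffer_bytes, link_bps):
    """Buffer depth expressed as milliseconds of transmission at line rate."""
    return buffer_bytes * 8 / link_bps * 1000

# About 6 MB of buffer on a 10 Gbit/s port drains in under 5 ms -- far
# below the classic one-RTT rule of thumb (125 MB for a 100 ms RTT).
print(round(buffer_ms(6_000_000, 10_000_000_000), 2))  # 4.8
```

With queues that shallow, a RED-style average-queue mechanism has almost no room to operate, which is why these boxes effectively only taildrop.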

> which uses mouse focus to apply the priority, etc... It's not that this
> information is purposely hidden from the user, rather up till now no one has
> thought to make it available...

My thinking is that exposing this information to the user will make them 
aware of what's going on and can provide metrics for measuring the quality of 
different ISPs, improving users' ability to choose an ISP whose network 
actually has little or no congestion.

Congestion isn't a force of nature; it's when the ISP wants to oversell 
their network to be able to supply a lower cost service and/or increase 
profit margins. This might be fine, but it should be done openly and users 
should know about it and be able to act accordingly.

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

From john@jlc.net  Thu Aug 12 04:38:10 2010
Return-Path: <john@jlc.net>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 07AA93A6890 for <conex@core3.amsl.com>; Thu, 12 Aug 2010 04:38:10 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -105.353
X-Spam-Level: 
X-Spam-Status: No, score=-105.353 tagged_above=-999 required=5 tests=[AWL=1.246, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id V+UPcm1pzyhn for <conex@core3.amsl.com>; Thu, 12 Aug 2010 04:38:09 -0700 (PDT)
Received: from mailhost.jlc.net (mailhost.jlc.net [199.201.159.4]) by core3.amsl.com (Postfix) with ESMTP id C67263A6857 for <conex@ietf.org>; Thu, 12 Aug 2010 04:38:08 -0700 (PDT)
Received: by mailhost.jlc.net (Postfix, from userid 104) id 9806B33C33; Thu, 12 Aug 2010 07:38:45 -0400 (EDT)
Date: Thu, 12 Aug 2010 07:38:45 -0400
From: John Leslie <john@jlc.net>
To: Mikael Abrahamsson <swmike@swm.pp.se>
Message-ID: <20100812113845.GD16820@verdi>
References: <alpine.DEB.1.10.1008121058040.8562@uplift.swm.pp.se>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.1.10.1008121058040.8562@uplift.swm.pp.se>
User-Agent: Mutt/1.4.1i
Cc: conex@ietf.org
Subject: Re: [conex] comments on draft-conex-mechanism-00.txt
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 12 Aug 2010 11:38:10 -0000

Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> 
> So, I've started reading draft-conex-mechanism-00.txt.

   That I-D is of historic interest only, being a mistaken submission,
where both Toby and I missed the incorrect name. Were it for real, it
would be titled "draft-ietf-conex-mechanism".

> I object to the writing:
> 
> "For an operator facing congestion caused
>    by other operators' networks, building out its own capacity is
>    unlikely to solve the congestion problem."
> 
> If traffic isn't DDoS or caused by malware, it's traffic the customer 
> wanted. The traffic isn't caused by "other operators' networks",

   The intent was to talk about congestion _within_ other networks.
The wording was unfortunate, and IIRC is entirely gone from the "current"
I-D. BTW, even the "current" I-D is of largely historic interest, since
it is undergoing really substantial surgery to amputate discussion of
mechanism.

> 7. Use cases. I don't agree with the three approaches. Points 1 and 3 mean
> a router has to handle flows, which doesn't scale.

   There is a not-fully-resolved issue there. The authors have tried to
reword things so we can agree there is no need to handle flows; but in
any case there is no need for "routers" to handle flows: at worst, what
we call "policers" may have to keep some flow state for flows they will
police.

> Point 2 involves the "router" being congested. Perhaps what's meant is
> that a link connected to the router is congested egress (because that's
> where the queue will be)?

   Correct.

> As I've said before, we're going more and more to simple cheap devices 
> with small buffers and forwarding tables that are really quick (TCAM or 
> alike). These devices are not flow aware and they don't really do RED 
> either (it's pointless considering their small buffers). It's an important 
> consideration for this wg how this should be handled. I don't see any 
> discussion about it...

   Were this a WG document and I the document-editor, I'd say "send text."
However, that's probably premature.

   I will note that the intent is that congestion actually experienced
is the only thing to be marked by routers along the path: otherwise
there is no need for awareness, least of all modification of packets;
and routers which "never" experience congestion have no need to do
anything.

   It is not yet clear what to do about packet loss in routers that have
no congestion-marking ability; but the fall-through is simply to let the
transport protocol (at the receiver) recognize the loss.
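That fall-through — the receiving transport recognizing loss where routers cannot mark — is ordinary gap detection; a simplified sketch:

```python
# Simplified sketch: a receiver inferring drops from sequence-number gaps,
# which works across routers that cannot ECN-mark (taildrop only).

def infer_lost_segments(received_seqs, expected_count):
    """Return the sequence numbers that never arrived; each gap can be
    treated as congestion experienced somewhere on the path."""
    return sorted(set(range(expected_count)) - set(received_seqs))

# Segments 2 and 5 never arrived: infer they were dropped upstream.
assert infer_lost_segments([0, 1, 3, 4, 6, 7], 8) == [2, 5]
```

The inferred losses are a coarser signal than marks (you learn that congestion happened, not where), which matches the point above about information past the drop point being incomplete.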

> 7.1. Here it's said that flow-aware equipment is expensive, that's good. 
> It's important to remember. Again, there is talk about "congested 
> routers" which I think is the wrong term.

   Correct.

> 8. Here is a good point about congestion not being a secret. The problem 
> is that the IP stack knows about it, but the user does not (apart from 
> deducing this from how the application is behaving ("slow")). As some 
> have seen from my postings on other IETF lists, I think there should be 
> user insight into performance (metrics how the IP stack is behaving so 
> this can be exposed to the user). Could this be part of the work done in 
> this WG, actually exposing the CONEX performance data to the user? I think 
> this is an important point.

   IMHO that's out of scope for ConEx, though I think it appropriate to
briefly discuss what the users would like to see.

> I don't really object to the work being done, it's just that since ECN 
> hasn't caught on in real life (it's in most end user OSes nowadays, I 
> don't know many network devices which are ECN aware) I don't really see 
> how the new proposals are actually going to be deployed?

   ConEx _will_ be deployable in scenarios where only the end-hosts and
their ISPs are ConEx-aware. To the extent that no intermediate networks
actually drop packets, it will work flawlessly: to the extent packets
are dropped in intermediate networks, congestion will have to be inferred
by the receiver and information past the point where packets are dropped
will be incomplete. In a well-designed network, we believe this will be
rare enough to not be an issue.

> Are the hardware guys involved in the discussions so we know how hard
> things are to implement?

   I'll let "the hardware guys" speak for themselves: we do not intend
to ask them for anything beyond marking for congestion experienced, and
even that will be optional except for the congested links to and from
end-users.

   Deployment at those links will likely take years, true; but we're
chartered for experimental uses, and we believe we can get enough
deployment for experiments quickly enough.

--
John Leslie <john@jlc.net>

From swmike@swm.pp.se  Thu Aug 12 04:44:38 2010
Return-Path: <swmike@swm.pp.se>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 4898B3A6866 for <conex@core3.amsl.com>; Thu, 12 Aug 2010 04:44:38 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.588
X-Spam-Level: 
X-Spam-Status: No, score=-2.588 tagged_above=-999 required=5 tests=[AWL=0.011,  BAYES_00=-2.599]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id nQ9BgA05q5vG for <conex@core3.amsl.com>; Thu, 12 Aug 2010 04:44:37 -0700 (PDT)
Received: from uplift.swm.pp.se (ipv6.swm.pp.se [IPv6:2a00:801::f]) by core3.amsl.com (Postfix) with ESMTP id 3AF473A69FA for <conex@ietf.org>; Thu, 12 Aug 2010 04:44:37 -0700 (PDT)
Received: by uplift.swm.pp.se (Postfix, from userid 501) id 438BFA5; Thu, 12 Aug 2010 13:45:12 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1]) by uplift.swm.pp.se (Postfix) with ESMTP id 4308C9E for <conex@ietf.org>; Thu, 12 Aug 2010 13:45:12 +0200 (CEST)
Date: Thu, 12 Aug 2010 13:45:12 +0200 (CEST)
From: Mikael Abrahamsson <swmike@swm.pp.se>
To: conex@ietf.org
Message-ID: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se>
User-Agent: Alpine 1.10 (DEB 962 2008-03-14)
Organization: People's Front Against WWW
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; format=flowed; charset=US-ASCII
Subject: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 12 Aug 2010 11:44:38 -0000

1. "Users are increasingly seeing congestion at peak times"

I disagree with this. In my market this was a problem 5-10 years ago. 
There are testing tools available so that people can test their connection 
and if they don't get ok speed they have a right to cancel their 
subscription. In Sweden the ISPs nowadays advertise speeds with a maximum and 
a guaranteed lower-bound speed, and people use their right to change 
providers when they don't work well. This basically means very few 
providers have been underprovisioning their networks, thus there is not 
much congestion outside of the customer access port.

Also, calling congestion "unforeseen" is a stretch; it very seldom is. It's 
when the engineering side clashes with the marketing side, which wants to 
sell high-bandwidth service but doesn't want to spend the money on 
upgrading the network to accommodate this increased traffic.

I agree that implementing LEDBAT will make it really hard to get a grip on 
how the network is behaving. Looking at an MRTG graph that is flatlining 5 
hours per day makes it really hard to understand what the user experience 
during those 5 hours is. Metrics like average queue depth and the like might 
be needed, but again we run into the problem of equipment with very small 
buffers and very few L3 features.

"But while flat-rate
    pricing avoids billing uncertainty, it creates performance
    uncertainty: users cannot know whether the performance of their
    connection is being altered or degraded based on how the network
    operator manages congestion."

I'd say that a token-based system, where you have a monthly cap and then 
have to actively log in to a system and purchase additional credits to 
get a new "cap", is very obvious and understandable to the user. When the 
user is out of their cap they might be rate-limited to a very low access 
speed, like 64 kilobits/s. This will remove any uncertainty about 
what's going on; alternatively the user might be moved to a service VLAN where 
a captive portal requires purchase of additional credits to resume the 
service.
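The cap-then-throttle accounting described above can be sketched as follows (the class name and all numbers are illustrative, not any ISP's actual system):

```python
# Sketch of a monthly volume cap with a low fallback rate (all names and
# numbers are illustrative, not any ISP's actual system).

class VolumeCap:
    """Full speed until the monthly cap is spent, then a low fallback
    rate (e.g. 64 kbit/s) until additional credits are purchased."""

    def __init__(self, cap_bytes, full_bps, fallback_bps=64_000):
        self.remaining = cap_bytes
        self.full_bps = full_bps
        self.fallback_bps = fallback_bps

    def account(self, sent_bytes):
        self.remaining = max(0, self.remaining - sent_bytes)

    def current_rate(self):
        return self.full_bps if self.remaining > 0 else self.fallback_bps

    def purchase(self, extra_bytes):
        self.remaining += extra_bytes

cap = VolumeCap(cap_bytes=10 * 2**30, full_bps=100_000_000)  # 10 GiB cap
cap.account(10 * 2**30)                  # cap exhausted: throttled...
assert cap.current_rate() == 64_000
cap.purchase(2**30)                      # ...until more credit is bought
assert cap.current_rate() == 100_000_000
```

The appeal is exactly the transparency argued for above: the user's state is a single number they can see and top up, rather than an opaque congestion-management policy.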

I also feel that the talk about "congestion charges" is dystopian. The 
Internet succeeded because it was basically "bill and keep", and it relies 
on each end of the communication to make their own customer happy. 
Implementing a billing system for congestion would be approximately as 
hard as implementing a QoS system end-to-end and charging extra for high 
priority traffic, so why wouldn't we do that instead? Anyway, I don't 
think this is the way forward; it's much too complex.

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

From john@jlc.net  Thu Aug 12 05:08:21 2010
Return-Path: <john@jlc.net>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 0D6193A68BB for <conex@core3.amsl.com>; Thu, 12 Aug 2010 05:08:21 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -104.145
X-Spam-Level: 
X-Spam-Status: No, score=-104.145 tagged_above=-999 required=5 tests=[AWL=-0.146, BAYES_50=0.001, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id gouS13nNjJjQ for <conex@core3.amsl.com>; Thu, 12 Aug 2010 05:08:20 -0700 (PDT)
Received: from mailhost.jlc.net (mailhost.jlc.net [199.201.159.4]) by core3.amsl.com (Postfix) with ESMTP id EE6343A6359 for <conex@ietf.org>; Thu, 12 Aug 2010 05:08:19 -0700 (PDT)
Received: by mailhost.jlc.net (Postfix, from userid 104) id CB4C033C51; Thu, 12 Aug 2010 08:08:56 -0400 (EDT)
Date: Thu, 12 Aug 2010 08:08:56 -0400
From: John Leslie <john@jlc.net>
To: Mikael Abrahamsson <swmike@swm.pp.se>
Message-ID: <20100812120856.GE16820@verdi>
References: <alpine.DEB.1.10.1008121058040.8562@uplift.swm.pp.se> <000f01cb3a0e$a447c070$ecd74150$@com> <alpine.DEB.1.10.1008121314400.8562@uplift.swm.pp.se>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.1.10.1008121314400.8562@uplift.swm.pp.se>
User-Agent: Mutt/1.4.1i
Cc: conex@ietf.org
Subject: Re: [conex] comments on draft-conex-mechanism-00.txt
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 12 Aug 2010 12:08:21 -0000

Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> 
> Well, I know of a lot of networks that are built in a hierarchy like this, 
> connected by dark/grey fiber:
> 
> Residential household
> (10/100)
> Basement switches connected in a ring
> (gig)
> Larger L3 switch
> (10GE)
> Core, which might consist of just more, larger L3 switches.
> (multiple 10GEs)

   This is (IMHO) a reasonable thing to do, starting from scratch. I'd
love to see this be the rule for developing countries.

   But in the world I know, there's existing copper-pair, existing cable,
and existing cell towers -- all of which have a bottleneck between user
and ISP.

> In the above, taking Cisco as an example, these would be ME3400 for the 
> basement, 6500/7600 for the larger 10GE switch, and the core could be that 
> as well. Most of the WS-67xx linecards have < 5 ms of buffers and very 
> little queue management. In this kind of network there are very few ways of 
> signalling congestion.

   I would disagree -- though if you mean currently installed software,
you're likely right.

   It is premature to say just what the congestion signaling for ConEx
will be, but even ECN could be tuned to the "knee" so as to signal
buffers filling sufficiently before drop to be useful.

   We should perhaps also consider congestion-signaling de novo: what
would be statistically useful measures of pre-congestion (which do
not necessarily call for multiplicative-decrease of sending rate)?
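Tuning marking "to the knee" would amount to a RED-style ramp placed early in the queue, so marks begin well before tail drop. A sketch, with purely illustrative thresholds:

```python
# Sketch: a RED-style marking ramp with thresholds set low ("at the knee"),
# so ECN marks begin well before a small buffer overflows and tail-drops.
# Threshold values are illustrative, not recommended settings.

def mark_probability(queue_frac, min_th=0.1, max_th=0.5, max_p=1.0):
    """Probability of ECN-marking a packet given queue occupancy as a
    fraction of the buffer: zero below min_th, a linear ramp up to max_p
    at max_th, and certain marking above that."""
    if queue_frac <= min_th:
        return 0.0
    if queue_frac >= max_th:
        return max_p
    return max_p * (queue_frac - min_th) / (max_th - min_th)

assert mark_probability(0.05) == 0.0            # nearly empty: no signal
assert abs(mark_probability(0.3) - 0.5) < 1e-9  # halfway up the ramp
assert mark_probability(0.8) == 1.0             # mark everything before drop
```

Marks produced this way signal "buffer filling" rather than "packet lost", which is what a pre-congestion measure would need: receivers could react gradually instead of with a multiplicative decrease.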

> My thinking is that exposing this information to the user will make them 
> aware of what's going on and can provide metrics for measuring the quality 
> of different ISPs, improving users' ability to choose an ISP whose network 
> actually has little or no congestion.

   By all means... But for that purpose, users would want a very simple
review covering perhaps an entire month...

> Congestion isn't a force of nature, it's when the ISP wants to oversell 
> their network

   I take exception to the claim that all ISPs "want to oversell".

   The fact is that ISP marketers discover that consumers _object_ to
too much information. The "overselling" you see comes from giving
customers the information "most customers" want in an environment 
where competition is limited.

   (Cable providers, of course, face a particularly difficult tradeoff
here, since they _must_ have neighborhood aggregation points competing
for truly rare "channel" space. I sincerely hope that the rarity of
"channel" space may be overcome soon!)

> to be able to supply a lower cost service and/or increase profit
> margins.

   Low-cost is a critical customer consideration -- especially when
selling to folks who don't (yet) place a high value on Internet service.
Profit margins are driven by some weird economics, which IMHO would
disappear in an actually competitive market, but are endemic to
large monopoly structures.

> This might be fine, but it should be done openly and users should
> know about it and be able to act accordingly.

   You'll get no argument from me, there!

--
John Leslie <john@jlc.net>

From john@jlc.net  Thu Aug 12 05:37:39 2010
Return-Path: <john@jlc.net>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id F3B6F3A6A27 for <conex@core3.amsl.com>; Thu, 12 Aug 2010 05:37:38 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -105.435
X-Spam-Level: 
X-Spam-Status: No, score=-105.435 tagged_above=-999 required=5 tests=[AWL=1.164, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id mk-VNOCc71mI for <conex@core3.amsl.com>; Thu, 12 Aug 2010 05:37:38 -0700 (PDT)
Received: from mailhost.jlc.net (mailhost.jlc.net [199.201.159.4]) by core3.amsl.com (Postfix) with ESMTP id 87F093A68FA for <conex@ietf.org>; Thu, 12 Aug 2010 05:37:37 -0700 (PDT)
Received: by mailhost.jlc.net (Postfix, from userid 104) id A803D33C80; Thu, 12 Aug 2010 08:38:14 -0400 (EDT)
Date: Thu, 12 Aug 2010 08:38:14 -0400
From: John Leslie <john@jlc.net>
To: Mikael Abrahamsson <swmike@swm.pp.se>
Message-ID: <20100812123814.GF16820@verdi>
References: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se>
User-Agent: Mutt/1.4.1i
Cc: conex@ietf.org
Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 12 Aug 2010 12:37:39 -0000

Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> 
> 1. "Users are increasingly seeing congestion at peak times"
> 
> I disagree with this. In my market this was a problem 5-10 years ago. 

   In US markets, it's a continuing problem.

> There are testing tools available so that people can test their
> connection and if they don't get ok speed they have a right to cancel
> their subscription.

   In monopoly-heavy US, this "right" often brings heavy cancellation
charges.

> In Sweden the ISPs nowadays advertise speeds with a maximum and a
> guaranteed lower-bound speed, and people use their right to change providers when
> they don't work well.

   This is not the forum to argue national policies...

> This basically means very few providers have been underprovisioning
> their networks, thus there is not much congestion outside of the
> customer access port.

   I wish we could avoid arguing to what extent providers "underprovision"
their networks. ConEx could be useful in an all-fibre distribution system
where the bottleneck is the cheap Ethernet switch the customer adds
between the "modem" and their computers.

> Also, calling congestion "unforeseen" is a stretch, it very seldom is.

   Actually, it is often "unforeseen" by network engineers, who lack
information about what customers will want tomorrow.

> It's when the engineering side clashes with the marketing side which
> wants to sell high bandwidth service but doesn't want to spend the
> money on upgrading the network to accommodate this increased traffic.

   That's largely true, but rather unfair, because neither department
makes the upgrade-spending decisions. The department which _does_ make
those decisions rightly wants justification for the costs.

> I agree that implementing LEDBAT will make it really hard to get a
> grip on how the network is behaving.

   Good!

> Looking at an MRTG graph that is flatlining 5 hours per day makes it
> really hard to understand what the user experience during those 5 hours
> is. Metrics like average queue depth and the like might be needed, but
> again we run into the problems of equipment with very small buffers
> and very little L3 features.

   Those metrics are unlikely to help -- you need to know what traffic
the _user_ considers interactive and what the effect of congestion is
on that traffic.

> "But while flat-rate pricing avoids billing uncertainty, it creates
>  performance uncertainty: users cannot know whether the performance of
>  their connection is being altered or degraded based on how the network
>  operator manages congestion."
> 
> I'd say that a token based system where you have a monthly cap and then 
> you have to actively log in to a system and purchase additional credits to 
> get a new "cap" is very obvious and understandable to the user.

   But most users won't "log in to a system and purchase additional
credits". They'll call support and claim not to understand how this
could happen.

> When the user is out of their cap they might be ratelimited to a very
> low access speed, like 64 kilobits/s or alike. This will remove any
> uncertainty about what's going on, otherwise the user might be moved to
> a service vlan where a captive portal requires purchase of additional
> credits to resume the service.

   You're welcome to try this business model in your own business. But
I wouldn't try it in mine. You're proposing to limit overall rate when
there may be no reason to limit it, and when the user is probably
unaware of any action they took to "deserve" this "punishment". My users
would be on the phone by now...

> I also feel that the talk about "congestion charges" is dystopian.

   That is a widespread belief, which we probably need to discuss.

> The Internet succeeded because it was basically "bill and keep",

   True -- simplicity of billing is central to our success.

> and it relies on each end of the communication to make their own
> customer happy. 

   Unfortunately, there's too little under our control to do so. :^(

> Implementing a billing system for congestion would be approximately
> as hard as implementing a QoS system end-to-end and charging extra
> for high priority traffic, so why wouldn't we do that instead?

   Implementing a "billing system" for congestion isn't hard at all.
To the user, an ISP sets a congestion allowance, different for different
service levels: any service call is an opportunity to sell a higher
service level.

   To its peers, each ISP has a system of settlements: if congestion
volume matches "closely enough" no money changes hands; if you want
to send more congestion volume than you receive, it's a business
decision whether to pay the agreed price. This is simple.
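
   In rough Python (my own toy sketch, not anything from a draft -- the
function name, tolerance, and price are all hypothetical), the settlement
rule above amounts to:

```python
# Toy sketch of the peer-settlement rule: if congestion volume sent and
# received match "closely enough", no money changes hands; otherwise the
# excess above the tolerance is paid for at an agreed price.
# All numbers below are made up for illustration.

def settlement(sent_cong_bytes, recv_cong_bytes,
               tolerance_bytes, price_per_byte):
    """Return what this ISP owes its peer (0.0 if within tolerance)."""
    excess = sent_cong_bytes - recv_cong_bytes
    if excess <= tolerance_bytes:
        return 0.0
    return (excess - tolerance_bytes) * price_per_byte

# Example: 10 GB of congestion volume sent vs. 7 GB received, with a
# 1 GB tolerance: only the 2 GB above tolerance is paid for.
print(settlement(10_000_000_000, 7_000_000_000, 1_000_000_000, 1e-9))
```

   Everything interesting, of course, is in choosing the tolerance and
the agreed price -- those remain bilateral business decisions.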

   Implementing a QoS end-to-end system requires agreements at every
interchange point -- to an arbitrarily large number of potential paths.
I wouldn't bother to try. :^( (Of course, QoS end-to-end within a
single provider is simple enough...)

> Anyway, I don't think this is the way forward, it's much too complex.

   Question to clarify: is what I outlined above "too complex" or is
it somehow incomplete?

--
John Leslie <john@jlc.net>

From Dirk.Kutscher@neclab.eu  Thu Aug 12 05:48:25 2010
From: Dirk Kutscher <Dirk.Kutscher@neclab.eu>
To: Mikael Abrahamsson <swmike@swm.pp.se>, "conex@ietf.org" <conex@ietf.org>
Thread-Topic: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
Thread-Index: AQHLOhOrp8HnNYGDXUKQ2icjYdiqTZLdt6eQ
Date: Thu, 12 Aug 2010 12:47:26 +0000
Message-ID: <82AB329A76E2484D934BBCA77E9F524950F054@PALLENE.office.hd>
References: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se>
In-Reply-To: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se>
Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>

Hi,

Interesting discussion.

I would like to add that congestion exposure is a tool that can be used
in different ways, congestion charges not necessarily being the most
desirable one, nor is assessing and comparing ISP performance.

Another way to see it is to use congestion exposure to incentivize
better application- and transport-layer responsiveness to congestion
indications, i.e., giving users a tool for capacity sharing without
giving up end-to-end congestion control. Users would be able to have
certain sessions throttle down, based on their own decision and
configuration, leveraging indications about their current contribution
to path congestion.

Now, in such a setting, the actual incentive is not the congestion
indication (otherwise we could just use ECN), but the capability of the
network to assess path congestion and to police traffic and/or account
for congestion accordingly. Policing and accounting could take different
forms -- in fact, some form of rate-limiting could apply after a certain
congestion budget has been used.
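
To make that last variant concrete, here is a rough sketch (mine, purely
illustrative -- the class name, budget, and rates are all hypothetical)
of rate-limiting once a congestion budget is used up:

```python
# Sketch of a per-user congestion-budget policer: the user keeps the
# contracted rate until the volume of congestion-marked bytes they have
# caused exceeds a budget; after that, a low fallback rate applies.

class CongestionBudgetPolicer:
    def __init__(self, budget_bytes, limited_rate_bps):
        self.budget_bytes = budget_bytes
        self.limited_rate_bps = limited_rate_bps
        self.used_bytes = 0

    def record_marked(self, marked_bytes):
        """Account bytes that carried a congestion mark (e.g. ECN CE)."""
        self.used_bytes += marked_bytes

    def allowed_rate_bps(self, contracted_rate_bps):
        """Full contracted rate until the budget is spent."""
        if self.used_bytes <= self.budget_bytes:
            return contracted_rate_bps
        return self.limited_rate_bps

p = CongestionBudgetPolicer(budget_bytes=100_000_000, limited_rate_bps=64_000)
p.record_marked(99_000_000)
print(p.allowed_rate_bps(10_000_000))  # still the full contracted rate
p.record_marked(2_000_000)
print(p.allowed_rate_bps(10_000_000))  # budget exceeded: fallback rate
```

Note that such a policer only needs the congestion-marked volume, not
per-flow state -- which is exactly what congestion exposure would make
visible at the network edge.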

Mikael is saying that congestion is mainly caused by overselling and
should not occur in well-managed networks. In an ideal world, this might
be true. A specific example where it is not true is best-effort traffic
in networks with shared media, e.g., wireless networks. But even without
that, I don't think we can expect all fixed networks to be designed in
such a way that backhaul capacity always matches aggregated access
network capacity, even if technically feasible.

Best regards,

Dirk


> -----Original Message-----
> From: conex-bounces@ietf.org [mailto:conex-bounces@ietf.org] On Behalf
> Of Mikael Abrahamsson
> Sent: Thursday, August 12, 2010 1:45 PM
> To: conex@ietf.org
> Subject: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
>
> 1. "Users are increasingly seeing congestion at peak times"
>
> I disagree with this. In my market this was a problem 5-10 years ago.
> There are testing tools available so that people can test their
> connection and if they don't get ok speed they have a right to cancel
> their subscription. In Sweden the ISPs nowadays advertise speeds with a
> max and lower bound guarantee speed, and people use their right to
> change providers when they don't work well. This basically means very
> few providers have been underprovisioning their networks, thus there is
> not much congestion outside of the customer access port.
>
> Also, calling congestion "unforeseen" is a stretch, it very seldom is.
> It's when the engineering side clashes with the marketing side which
> wants to sell high bandwidth service but doesn't want to spend the
> money on upgrading the network to accommodate this increased traffic.
>
> I agree that implementing LEDBAT will make it really hard to get a grip
> on how the network is behaving. Looking at an MRTG graph that is
> flatlining 5 hours per day makes it really hard to understand what the
> user experience during those 5 hours is. Metrics like average queue
> depth and alike might be needed, but again we run into the problems of
> equipment with very small buffers and very little L3 features.
>
> "But while flat-rate
>     pricing avoids billing uncertainty, it creates performance
>     uncertainty: users cannot know whether the performance of their
>     connection is being altered or degraded based on how the network
>     operator manages congestion."
>
> I'd say that a token based system where you have a monthly cap and then
> you have to actively log in to a system and purchase additional credits
> to get a new "cap" is very obvious and understandable to the user. When
> the user is out of their cap they might be ratelimited to a very low
> access speed, like 64 kilobits/s or alike. This will remove any
> uncertainty about what's going on, otherwise the user might be moved to
> a service vlan where a captive portal requires purchase of additional
> credits to resume the service.
>
> I also feel that the talk about "congestion charges" is dystopian. The
> Internet succeeded because it was basically "bill and keep", and it
> relies on each end of the communication to make their own customer
> happy. Implementing a billing system for congestion would be
> approximately as hard as implementing a QoS system end-to-end and
> charging extra for high priority traffic, so why wouldn't we do that
> instead? Anyway, I don't think this is the way forward, it's much too
> complex.
>
> --
> Mikael Abrahamsson    email: swmike@swm.pp.se
> _______________________________________________
> conex mailing list
> conex@ietf.org
> https://www.ietf.org/mailman/listinfo/conex

From mirja.kuehlewind@ikr.uni-stuttgart.de  Thu Aug 12 06:48:38 2010
From: Mirja Kuehlewind <mirja.kuehlewind@ikr.uni-stuttgart.de>
Organization: University of Stuttgart (Germany), IKR
To: conex@ietf.org
Date: Thu, 12 Aug 2010 15:49:08 +0200
User-Agent: KMail/1.9.10 (enterprise35 0.20090731.1005176)
References: <alpine.DEB.1.10.1008121058040.8562@uplift.swm.pp.se> <alpine.DEB.1.10.1008121314400.8562@uplift.swm.pp.se> <20100812120856.GE16820@verdi>
In-Reply-To: <20100812120856.GE16820@verdi>
Message-Id: <201008121549.08306.mirja.kuehlewind@ikr.uni-stuttgart.de>
Subject: Re: [conex] comments on draft-conex-mechanism-00.txt

Hi,

regarding the congestion marking/signaling capability of network devices,
there are for sure several open questions. But in fact, although I would
like to see more discussion on the conex list about what is feasible
here, it is not really a conex topic.
If someone -- an ISP -- wants to use conex, they should be able to give
any kind of congestion signal (loss/markings) to the end system. On the
one hand, the mechanism used should signal enough congestion to achieve
some kind of traffic management in the network; on the other hand, it
should not signal more congestion than needed, because other entities
will recognize that this specific network is congested and try to avoid
forwarding their traffic through it (in the ideal case). That could mean
a customer will change their provider, or an ISP will try to reroute its
traffic.
But what the actual mechanism looks like can differ a lot. It might be
RED; it might be some kind of pre-congestion notification. It might be
deployed just in nodes where congestion is likely, or just in border
routers, or everywhere... And note that even if everybody used RED, it
is quite hard to find the right parametrization.
So regarding conex, whoever wants to use conex information has to make
sure to signal the right amount of congestion at the right places...
whatever that means, every ISP has to decide for itself.
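
Just to illustrate the parametrization problem: classic RED computes its
marking probability from three knobs (min threshold, max threshold,
max_p). The sketch below is mine, with made-up values:

```python
# Classic RED marking probability (Floyd/Jacobson style): zero below
# min_th, a linear ramp up to max_p between min_th and max_th, and
# mark/drop everything beyond max_th. Picking min_th/max_th/max_p per
# link is the hard part: too aggressive and the network "looks"
# congested to others; too timid and there is no useful signal at all.

def red_mark_probability(avg_queue, min_th, max_th, max_p):
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    # linear ramp between the two thresholds
    return max_p * (avg_queue - min_th) / (max_th - min_th)

# A queue averaging 10 packets sits halfway up a 5..15 ramp, so with
# max_p = 0.1 the marking probability is 0.5 * max_p.
print(red_mark_probability(10, 5, 15, 0.1))
```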

Mirja


On Thursday 12 August 2010 14:08:56 John Leslie wrote:
> Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> > Well, I know of a lot of networks that are built in a hierarchy like
> > this, connected by dark/grey fiber:
> >
> > Residential household
> > (10/100)
> > Basement switches connected in a ring
> > (gig)
> > Larger L3 switch
> > (10GE)
> > Core, which might consist of just more, larger L3 switches.
> > (multiple 10GEs)
>
>    This is (IMHO) a reasonable thing to do, starting from scratch. I'd
> love to see this be the rule for developing countries.
>
>    But in the world I know, there's existing copper-pair, existing cable,
> and existing cell towers -- all of which have a bottleneck between user
> and ISP.
>
> > In the above, taking Cisco as an example, these would be ME3400 for
> > basement, 6500/7600 for Larger 10GE switch, and cores could be that as
> > well. Most of the WS-67xx linecards have < 5ms of buffers and very little
> > queue management. In this kind of network there are very little ways of
> > congestion signalling.
>
>    I would disagree -- though if you mean current installed software,
> you're likely right.
>
>    It is premature to say just what the congestion signaling for ConEx
> will be, but even ECN could be tuned to the "knee" so as to signal
> buffers filling sufficiently before drop to be useful.
>
>    We should perhaps also consider congestion-signaling de novo: what
> would be statistically useful measures of pre-congestion (which do
> not necessarily call for multiplicative-decrease of sending rate)?

From toby@moncaster.com  Thu Aug 12 07:34:11 2010
From: "Toby Moncaster" <toby@moncaster.com>
To: "'John Leslie'" <john@jlc.net>, "'Mikael Abrahamsson'" <swmike@swm.pp.se>
References: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se> <20100812123814.GF16820@verdi>
In-Reply-To: <20100812123814.GF16820@verdi>
Date: Thu, 12 Aug 2010 15:34:42 +0100
Message-ID: <001b01cb3a2b$80f47420$82dd5c60$@com>
Cc: conex@ietf.org
Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt

> -----Original Message-----
> From: conex-bounces@ietf.org [mailto:conex-bounces@ietf.org] On Behalf
> Of John Leslie
> Sent: 12 August 2010 13:38
> To: Mikael Abrahamsson
> Cc: conex@ietf.org
> Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-
> 01.txt
> 
> Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> >
> > 1. "Users are increasingly seeing congestion at peak times"
> >
> > I disagree with this. In my market this was a problem 5-10 years ago.
> 
>    In US markets, it's a continuing problem.

And in the UK - indeed in some ways it is getting worse as more and more
users turn to VoD as a primary source of visual entertainment. Here the UK
may be leading the trend with the ever-increasing popularity of iPlayer,
Skyplayer, 4oD, Demand Five, etc., all of which offer some form of
streaming VoD TV catch-up service.

> 
> > There are testing tools available so that people can test their
> > connection and if they don't get ok speed they have a right to cancel
> > their subscription.
> 
>    In the monopoly-heavy US, this "right" often brings heavy cancellation
> charges.

This seems to be an example of Sweden being a leading light in championing
consumer rights... Anyway, it is not an argument against ConEx...

> 
> > In Sweden the ISPs nowadays advertise speeds with a max and lower
> > bound guarantee speed, and people use their right to change providers
> > when they don't work well.
> 
>    This is not the forum to argue national policies...

Indeed not

> 
> > This basically means very few providers have been underprovisioning
> > their networks, thus there is not much congestion outside of the
> > customer access port.
> 
>    I wish we could avoid arguing to what extent providers
> "underprovision" their networks. ConEx could be useful in an all-fibre
> distribution system where the bottleneck is the cheap ethernet switch
> the customer adds between "modem" and his computers.

The problem is providers can always get caught out. At the moment there is
often huge spare capacity of unlit dark fibre, so some operators can
re-provision on very short timescales (hours, even), but that may not last...

> 
> > Also, calling congestion "unforeseen" is a stretch, it very seldom is.
> 
>    Actually, it is often "unforeseen" by network engineers, who lack
> information about what customers will want tomorrow.

Indeed, and it is certainly unforeseen by the users that have caused it...

> 
> > It's when the engineering side clashes with the marketing side which
> > wants to sell high bandwidth service but doesn't want to spend the
> > money on upgrading the network to accommodate this increased traffic.
> 
>    That's largely true, but rather unfair, because neither department
> makes the upgrade-spending decisions. The department which _does_ make
> those decisions rightly wants justification for the costs.
> 
> > I agree that implementing LEDBAT will make it really hard to get a
> > grip on how the network is behaving.
> 
>    Good!
> 
> > Looking at an MRTG graph that is flatlining 5 hours per day makes it
> > really hard to understand what the user experience during those 5
> > hours is. Metrics like average queue depth and alike might be needed, but
> > again we run into the problems of equipment with very small buffers
> > and very little L3 features.
> 
>    Those metrics are unlikely to help -- you need to know what traffic
> the _user_ considers interactive and what the effect of congestion is
> on that traffic.

That is the crux of the problem - knowing what the user really wants, rather
than guessing... 

> 
> > "But while flat-rate pricing avoids billing uncertainty, it creates
> >  performance uncertainty: users cannot know whether the performance of
> >  their connection is being altered or degraded based on how the network
> >  operator manages congestion."
> >
> > I'd say that a token based system where you have a monthly cap and then
> > you have to actively log in to a system and purchase additional credits
> > to get a new "cap" is very obvious and understandable to the user.
> 
>    But most users won't "log in to a system and purchase additional
> credits". They'll call support and claim not to understand how this
> could happen.

However, this is not unlike the congestion rationing system that we
discussed in the paper I referred to in my last reply:
http://bobbriscoe.net/projects/refb/polfree_rearch08.pdf 

I (speaking personally) think most users can understand the concept of a
variable bill for mobile phone services - they get a "free" allowance but if
they go over then they will get a bill for it...


> 
> > When the user is out of their cap they might be ratelimited to a very
> > low access speed, like 64 kilobits/s or alike. This will remove any
> > uncertainty about what's going on, otherwise the user might be moved
> > to a service vlan where a captive portal requires purchase of
> > additional credits to resume the service.
> 
>    You're welcome to try this business model in your own business. But
> I wouldn't try it in mine. You're proposing to limit overall rate when
> there may be no reason to limit it, and when the user is probably
> unaware of any action they took to "deserve" this "punishment". My
> users would be on the phone by now...

So the answer there is that the ISP has to provide the user with a means to
control this (ideally a simple application that does it all for them!). But
you are right John, there are going to be lots of different business models
out there. ConEx is simply a metric/tool to enable such things.

> 
> > I also feel that the talk about "congestion charges" is dystopian.
> 
>    That is a widespread belief, which we probably need to discuss.

It almost deserves a discussion forum of its own as it will swamp any normal
WG discussions...

> 
> > The Internet succeeded because it was basically "bill and keep",
> 
>    True -- simplicity of billing is central to our success.

Yes... But we (techy types at the IETF) are not specialists in consumer
behaviour and it really is not our place to discuss these sorts of things -
they are anyway orthogonal to the issue of whether and how to standardise a
ConEx mechanism...

> 
> > and it relies on each end of the communication to make their own
> > customer happy.
> 
>    Unfortunately, there's too little under our control to do so. :^(
> 
> > Implementing a billing system for congestion would be approximately
> > as hard as implementing a QoS system end-to-end and charging extra
> > for high priority traffic, so why wouldn't we do that instead?

Two reasons: firstly, it is far more flexible in how it is used; secondly,
it is more future-proof...

> 
>    Implementing a "billing system" for congestion isn't hard at all.
> To the user, an ISP sets a congestion allowance, different for
> different
> service levels: any service call is an opportunity to sell a higher
> service level.
> 
>    To its peers, each ISP has a system of settlements: if congestion
> volume matches "closely enough" no money changes hands; if you want
> to send more congestion volume than you receive, it's a business
> decision whether to pay the agreed price. This is simple.
> 
>    Implementing a QoS end-to-end system requires agreements at every
> interchange point -- to an arbitrarily large number of potential paths.
> I wouldn't bother to try. :^( (Of course, QoS end-to-end within a
> single provider is simple enough...)
> 
> > Anyway, I don't think this is the way forward, it's much too complex.
> 
>    Question to clarify: is what I outlined above "too complex" or is
> it somehow incomplete?

Just to clarify - that is one way to use ConEx to implement a system-wide
congestion-based billing system. It is also well beyond the charter as it
stands.

> 
> --
> John Leslie <john@jlc.net>
> _______________________________________________
> conex mailing list
> conex@ietf.org
> https://www.ietf.org/mailman/listinfo/conex


From swmike@swm.pp.se  Thu Aug 12 12:34:35 2010
Date: Thu, 12 Aug 2010 21:35:05 +0200 (CEST)
From: Mikael Abrahamsson <swmike@swm.pp.se>
To: John Leslie <john@jlc.net>
In-Reply-To: <20100812123814.GF16820@verdi>
Message-ID: <alpine.DEB.1.10.1008122125080.8562@uplift.swm.pp.se>
References: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se> <20100812123814.GF16820@verdi>
Cc: conex@ietf.org
Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt

On Thu, 12 Aug 2010, John Leslie wrote:

>   I wish we could avoid arguing to what extent providers 
> "underprovision" their networks. ConEx could be useful in an all-fibre 
> distribution system where the bottleneck is the cheap ethernet switch 
> the customer adds between "modem" and his computers.

But is this really a device which will be ECN capable? It's typically L2 
only.

>> Also, calling congestion "unforeseen" is a stretch, it very seldom is.
>
>   Actually, it is often "unforeseen" by network engineers, who lack
> information about what customers will want tomorrow.

I guess we can agree to disagree then.

>> It's when the engineering side clashes with the marketing side which
>> wants to sell high bandwidth service but doesn't want to spend the
>> money on upgrading the network to accommodate this increased traffic.
>
>   That's largely true, but rather unfair, because neither department
> makes the upgrade-spending decisions. The department which _does_ make
> those decisions rightly wants justification for the costs.

So if the marketing department makes decisions in pricing where the income 
can't cover the network buildout but they still want to advertise high bw 
access speeds to customers, you get distribution/core congestion. It's 
still the ISP doing the wrong thing and it's perfectly avoidable.

>> When the user is out of their cap they might be ratelimited to a very
>> low access speed, like 64 kilobits/s or alike. This will remove any
>> uncertainty about what's going on, otherwise the user might be moved to
>> a service vlan where a captive portal requires purchase of additional
>> credits to resume the service.
>
>   You're welcome to try this business model in your own business. But
> I wouldn't try it in mine. You're proposing to limit overall rate when
> there may be no reason to limit it, and when the user is probably
> unaware of any action they took to "deserve" this "punishment". My users
> would be on the phone by now...

We've been doing it for years on our mobile broadband. It's a bit of 
a special case, but I'd say it's more understandable to users (since it's 
actually actively published and advertised, it's not done covertly) than a 
lot of other schemes proposed. It's also net neutral.

>   Implementing a "billing system" for congestion isn't hard at all.
> To the user, an ISP sets a congestion allowance, different for different
> service levels: any service call is an opportunity to sell a higher
> service level.

It's hard when it has to be done inter-provider.

>   To its peers, each ISP has a system of settlements: if congestion
> volume matches "closely enough" no money changes hands; if you want
> to send more congestion volume than you receive, it's a business
> decision whether to pay the agreed price. This is simple.

I don't agree. Most of the time today no money changes hands when doing 
peering, it's done on a mutual benefit basis. Having congestion metrics to 
keep track of means considerable complication compared to what we have 
today.

>   Question to clarify: is what I outlined above "too complex" or is it 
> somehow incomplete?

Well, I don't understand how it would work in real life, so I guess it 
could be both because I'm trying to fill in blanks and there might be 
simpler solutions that I can't think of.

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

From swmike@swm.pp.se  Thu Aug 12 12:34:56 2010
Return-Path: <swmike@swm.pp.se>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id B2C863A69A9 for <conex@core3.amsl.com>; Thu, 12 Aug 2010 12:34:56 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.567
X-Spam-Level: 
X-Spam-Status: No, score=-2.567 tagged_above=-999 required=5 tests=[AWL=0.032,  BAYES_00=-2.599]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id VWrHgDc4ziQD for <conex@core3.amsl.com>; Thu, 12 Aug 2010 12:34:55 -0700 (PDT)
Received: from uplift.swm.pp.se (ipv6.swm.pp.se [IPv6:2a00:801::f]) by core3.amsl.com (Postfix) with ESMTP id 666783A6962 for <conex@ietf.org>; Thu, 12 Aug 2010 12:34:53 -0700 (PDT)
Received: by uplift.swm.pp.se (Postfix, from userid 501) id C87CDA6; Thu, 12 Aug 2010 21:24:38 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1]) by uplift.swm.pp.se (Postfix) with ESMTP id C734D9E; Thu, 12 Aug 2010 21:24:38 +0200 (CEST)
Date: Thu, 12 Aug 2010 21:24:38 +0200 (CEST)
From: Mikael Abrahamsson <swmike@swm.pp.se>
To: John Leslie <john@jlc.net>
In-Reply-To: <20100812120856.GE16820@verdi>
Message-ID: <alpine.DEB.1.10.1008122117290.8562@uplift.swm.pp.se>
References: <alpine.DEB.1.10.1008121058040.8562@uplift.swm.pp.se> <000f01cb3a0e$a447c070$ecd74150$@com> <alpine.DEB.1.10.1008121314400.8562@uplift.swm.pp.se> <20100812120856.GE16820@verdi>
User-Agent: Alpine 1.10 (DEB 962 2008-03-14)
Organization: People's Front Against WWW
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
Cc: conex@ietf.org
Subject: Re: [conex] comments on draft-conex-mechanism-00.txt
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 12 Aug 2010 19:34:56 -0000

On Thu, 12 Aug 2010, John Leslie wrote:

>   I would disagree -- though if you mean current installed software,
> you're likely right.

On the platforms I mentioned, I'd say there isn't much possibility of 
getting this feature out of the current hardware.

>   It is premature to say just what the congestion signaling for ConEx
> will be, but even ECN could be tuned to the "knee" so as to signal
> buffers filling sufficiently before drop to be useful.

With 5ms buffers, I don't see how buffer depth is of any use.

>   We should perhaps also consider congestion-signaling de novo: what
> would be statistically useful measures of pre-congestion (which do
> not necessarily call for multiplicative-decrease of sending rate)?

Yes, using a policer and signalling congestion at 90% usage is absolutely 
a way to do this. It would be interesting to hear from hw manufacturers 
whether they're interested in this, though.

>   By all means... But for that purpose, users would want a very simple 
> review covering perhaps an entire month...

Perhaps, but information about what happened the last 5 minutes is also of 
interest.

>> Congestion isn't a force of nature, it's when the ISP wants to oversell
>> their network
>
>   I take exception to the claim that all ISPs "want to oversell".

Sure, I never made the claim that this was "all" ISPs.

>   The fact is that ISP marketers discover that consumers _object_ to
> too much information. The "overselling" you see comes from giving
> customers the information "most customers" want in an environment
> where competition is limited.
>
>   (Cable providers, of course, face a particularly difficult tradeoff
> here, since they _must_ have neighborhood aggregation points competing
> for truly rare "channel" space. I sincerely hope that the rarity of
> "channel" space may be overcome soon!)

It's always going to be a scarce resource. Shared media like that should 
be considered aggregation and not access, so cable companies need more 
intelligence in their CPEs (or they should stop building cascade networks; 
that's why we abandoned 10Base-[25]).

>   Low-cost is a critical customer consideration -- especially when 
> selling to folks who don't (yet) place a high value on Internet service. 
> Profit margins are driven by some weird economics, which IMHO would 
> disappear in an actually competitive market, but are endemic to large 
> monopoly structure.

... which is exactly my point: congestion in the ISP network (not access) 
is a symptom of poor competition. This cannot be solved by technical means 
in a working group; it's more of a political/economic problem. That's why 
I have problems with the problem description seen in the drafts so far.

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

From swmike@swm.pp.se  Thu Aug 12 12:37:55 2010
Return-Path: <swmike@swm.pp.se>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id AFF673A6A2E for <conex@core3.amsl.com>; Thu, 12 Aug 2010 12:37:55 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.567
X-Spam-Level: 
X-Spam-Status: No, score=-2.567 tagged_above=-999 required=5 tests=[AWL=0.032,  BAYES_00=-2.599]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id a4-psUy0BLun for <conex@core3.amsl.com>; Thu, 12 Aug 2010 12:37:55 -0700 (PDT)
Received: from uplift.swm.pp.se (ipv6.swm.pp.se [IPv6:2a00:801::f]) by core3.amsl.com (Postfix) with ESMTP id 5D74D3A6A3F for <conex@ietf.org>; Thu, 12 Aug 2010 12:37:54 -0700 (PDT)
Received: by uplift.swm.pp.se (Postfix, from userid 501) id 8A1ACA5; Thu, 12 Aug 2010 21:38:30 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1]) by uplift.swm.pp.se (Postfix) with ESMTP id 88D1F9E for <conex@ietf.org>; Thu, 12 Aug 2010 21:38:30 +0200 (CEST)
Date: Thu, 12 Aug 2010 21:38:30 +0200 (CEST)
From: Mikael Abrahamsson <swmike@swm.pp.se>
To: conex@ietf.org
In-Reply-To: <201008121549.08306.mirja.kuehlewind@ikr.uni-stuttgart.de>
Message-ID: <alpine.DEB.1.10.1008122136320.8562@uplift.swm.pp.se>
References: <alpine.DEB.1.10.1008121058040.8562@uplift.swm.pp.se> <alpine.DEB.1.10.1008121314400.8562@uplift.swm.pp.se> <20100812120856.GE16820@verdi> <201008121549.08306.mirja.kuehlewind@ikr.uni-stuttgart.de>
User-Agent: Alpine 1.10 (DEB 962 2008-03-14)
Organization: People's Front Against WWW
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
Subject: Re: [conex] comments on draft-conex-mechanism-00.txt
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 12 Aug 2010 19:37:55 -0000

On Thu, 12 Aug 2010, Mirja Kuehlewind wrote:

> So regarding conex, if someone wants to use conex information he has to 
> make sure to signal the right amount of congestion at the right 
> places... whatever that means, every ISP has to decide on its own.

If we're going to use reserved bits or headers and stuff that routers 
should act on, I really want general consensus that this is actually 
deployable and solves a wide problem in a way the general ISP business 
actually would benefit from.

Considering that one bit went to ECN and, nine years later, it's still 
basically completely unused in networks, I think there is a lesson to be 
learnt here.

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

From toby@moncaster.com  Thu Aug 12 13:10:37 2010
Return-Path: <toby@moncaster.com>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 3C27A3A695E for <conex@core3.amsl.com>; Thu, 12 Aug 2010 13:10:37 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.099
X-Spam-Level: 
X-Spam-Status: No, score=-2.099 tagged_above=-999 required=5 tests=[AWL=0.150,  BAYES_00=-2.599, HELO_EQ_DE=0.35]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id ACJjPwNVurLJ for <conex@core3.amsl.com>; Thu, 12 Aug 2010 13:10:36 -0700 (PDT)
Received: from moutng.kundenserver.de (moutng.kundenserver.de [212.227.126.187]) by core3.amsl.com (Postfix) with ESMTP id 1DAF63A693B for <conex@ietf.org>; Thu, 12 Aug 2010 13:10:36 -0700 (PDT)
Received: from TobysHP (host86-170-52-148.range86-170.btcentralplus.com [86.170.52.148]) by mrelayeu.kundenserver.de (node=mreu1) with ESMTP (Nemesis) id 0Lmyqm-1POT6l2Hgy-00hhFq; Thu, 12 Aug 2010 22:11:07 +0200
From: "Toby Moncaster" <toby@moncaster.com>
To: "'Mikael Abrahamsson'" <swmike@swm.pp.se>, <conex@ietf.org>
References: <alpine.DEB.1.10.1008121058040.8562@uplift.swm.pp.se>	<alpine.DEB.1.10.1008121314400.8562@uplift.swm.pp.se>	<20100812120856.GE16820@verdi>	<201008121549.08306.mirja.kuehlewind@ikr.uni-stuttgart.de> <alpine.DEB.1.10.1008122136320.8562@uplift.swm.pp.se>
In-Reply-To: <alpine.DEB.1.10.1008122136320.8562@uplift.swm.pp.se>
Date: Thu, 12 Aug 2010 21:11:05 +0100
Message-ID: <001f01cb3a5a$7ece2d60$7c6a8820$@com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Office Outlook 12.0
thread-index: Acs6VfSRUKVXyo6USia88WPv7Yq/EgAA/T1Q
Content-Language: en-gb
X-Provags-ID: V02:K0:lxxuRCaI2GJiiA2NWJzHsFUh4B5YTVpcfB370Rqynsb y4uBYqXVK4VLt4lRrSG8IR1qTxLv6Fr4RjFcneXkDybks5v5SX Gx60qOyHEcrl5KIwV5V6HKJkoAuZNPXEW+Jlo98B7XP4YjF7Ws KSaNHBnEACRfJ3EGfHHuQ10UpAEDfs0aXI3Tz5bAHSxDYCiGP0 wbGsmjGZ2WT74NRd62GUWeUv9rkpzahGCQVElrC4dI=
Subject: Re: [conex] comments on draft-conex-mechanism-00.txt
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 12 Aug 2010 20:10:37 -0000

> -----Original Message-----
> From: conex-bounces@ietf.org [mailto:conex-bounces@ietf.org] On Behalf
> Of Mikael Abrahamsson
> Sent: 12 August 2010 20:39
> To: conex@ietf.org
> Subject: Re: [conex] comments on draft-conex-mechanism-00.txt
> 
> On Thu, 12 Aug 2010, Mirja Kuehlewind wrote:
> 
> > So regarding conex, if someone wants to use conex information he has
> > to make sure to signal the right amount of congestion at the right
> > places... whatever that means, every ISP has to decide on its own.
> 
> If we're going to use reserved bits or headers and stuff that routers
> should act on, I really want general consensus that this is actually
> deployable and solves a wide problem in a way the general ISP business
> actually would benefit from.
> 
> Considering that one bit went to ECN and, nine years later, it's still
> basically completely unused in networks, I think there is a lesson to
> be learnt here.

Yes - the lesson is that we should ensure there is a proper (commercially
viable and attractive) path to adoption before we commit something to the
standards track! I think there is a much stronger argument for ConEx than
there was for ECN... (for one thing the network need not do anything if it
chooses not to and, unlike ECN, there may still be a gain from the system)

Toby

> 
> --
> Mikael Abrahamsson    email: swmike@swm.pp.se
> _______________________________________________
> conex mailing list
> conex@ietf.org
> https://www.ietf.org/mailman/listinfo/conex


From swmike@swm.pp.se  Fri Aug 13 00:43:25 2010
Return-Path: <swmike@swm.pp.se>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 0B3FF3A6830 for <conex@core3.amsl.com>; Fri, 13 Aug 2010 00:43:25 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.268
X-Spam-Level: 
X-Spam-Status: No, score=-2.268 tagged_above=-999 required=5 tests=[AWL=-0.269, BAYES_00=-2.599, J_CHICKENPOX_29=0.6]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id wJhvrBtrBKpI for <conex@core3.amsl.com>; Fri, 13 Aug 2010 00:43:23 -0700 (PDT)
Received: from uplift.swm.pp.se (ipv6.swm.pp.se [IPv6:2a00:801::f]) by core3.amsl.com (Postfix) with ESMTP id 544FE3A681B for <conex@ietf.org>; Fri, 13 Aug 2010 00:43:22 -0700 (PDT)
Received: by uplift.swm.pp.se (Postfix, from userid 501) id 8DF30A5; Fri, 13 Aug 2010 09:43:58 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1]) by uplift.swm.pp.se (Postfix) with ESMTP id 8CD6D9E; Fri, 13 Aug 2010 09:43:58 +0200 (CEST)
Date: Fri, 13 Aug 2010 09:43:58 +0200 (CEST)
From: Mikael Abrahamsson <swmike@swm.pp.se>
To: Toby Moncaster <toby@moncaster.com>
In-Reply-To: <001b01cb3a2b$80f47420$82dd5c60$@com>
Message-ID: <alpine.DEB.1.10.1008130933210.8562@uplift.swm.pp.se>
References: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se> <20100812123814.GF16820@verdi> <001b01cb3a2b$80f47420$82dd5c60$@com>
User-Agent: Alpine 1.10 (DEB 962 2008-03-14)
Organization: People's Front Against WWW
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
Cc: conex@ietf.org
Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Aug 2010 07:43:25 -0000

On Thu, 12 Aug 2010, Toby Moncaster wrote:

> This seems to be an example of Sweden being a leading light in 
> championing consumer rights... Anyway, it is not an argument against 
> ConEx...

No, it's not an argument against ConEx, but it's an argument against the 
current problem description in the draft, parts of which I don't agree with.

> The problem is providers can always get caught out. At the moment there 
> is often a huge spare capacity of unlit dark fibre so some operators can 
> re-provision on very short timescales (hours even), but that may not 
> last...

Well, I guess we can agree to disagree. Most of the time when I've seen 
congestion just "happen" (as some see it), it's been a matter of "I told 
you so" but people took a risk and gambled that something wouldn't happen.

It's not like it came out of nowhere.

>>
>>> Also, calling congestion "unforseen" is a stretch, it very seldom is.
>>
>>    Actually, it is often "unforseen" by network engineers, who lack
>> information of what customers will want tomorrow.
>
> Indeed, and it is certainly unforeseen by the users that have caused it...

That I can agree with, customers don't expect congestion and they expect 
their access speed to be the speed at which they can communicate.

> I (speaking personally) think most users can understand the concept of a 
> variable bill for mobile phone services - they get a "free" allowance 
> but if they go over then they will get a bill for it...

It's my opinion that users want predictability. The EU has imposed a 
mandatory requirement on mobile phone operators to implement a 50 EUR/month 
cap on billable traffic; beyond that, the user has to be notified and agree 
to a higher cap. This is in line with the idea of predictability.

A good compromise that preserves this predictability is to start nagging 
the user and degrading performance to an almost unusable state until the 
user makes a decision to pay more.
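As a toy model of that scheme: traffic spend counts against a monthly cap, and past the cap the user is throttled to a trickle until they explicitly agree to a higher cap. The 50 EUR figure comes from the EU rule mentioned above; the full and trickle rates are arbitrary illustrations, not anyone's actual product:

```python
# Toy cap-then-degrade policy: full speed while under the monthly cap
# (or after the user has opted in to a higher cap), otherwise a trickle.
def allowed_rate_kbps(spend_eur: float, user_agreed_higher_cap: bool,
                      cap_eur: float = 50.0, full_kbps: int = 10_000,
                      trickle_kbps: int = 64) -> int:
    if spend_eur <= cap_eur or user_agreed_higher_cap:
        return full_kbps
    return trickle_kbps  # degraded until the user decides to pay more


assert allowed_rate_kbps(20.0, False) == 10_000  # under cap: full speed
assert allowed_rate_kbps(60.0, False) == 64      # over cap, no consent: trickle
assert allowed_rate_kbps(60.0, True) == 10_000   # over cap, user agreed: full speed
```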

>>> I also feel that the talk about "congestion charges" is dystopian.
>>
>>    That is a widespread belief, which we probably need to discuss.
>
> It almost deserves a discussion forum of its own as it will swamp any normal
> WG discussions...

Agreed.

>>> Implementing a billing system for congestion would be approximately
>>> as hard as implementing a QoS system end-to-end and charging extra
>>> for high priority traffic, so why wouldn't we do that instead?
>
> Two reasons, firstly it is far more flexible in how it is used, secondly it
> is more future-proof...

I don't think I agree on this point. QoS is done between ISPs without 
really involving end systems (apart from what the systems mark). What 
we're proposing here needs to end up in the IP stack of the end system, 
and we all know how many people still use Windows 95/98/XP. I'd say there 
definitely is parity in how hard both schemes would be to implement.

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

From dave.mcdysan@verizon.com  Fri Aug 13 05:47:14 2010
Return-Path: <dave.mcdysan@verizon.com>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 4EA393A69FA for <conex@core3.amsl.com>; Fri, 13 Aug 2010 05:47:02 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.243
X-Spam-Level: 
X-Spam-Status: No, score=-3.243 tagged_above=-999 required=5 tests=[AWL=0.356,  BAYES_00=-2.599, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id FPFobV1ebYt0 for <conex@core3.amsl.com>; Fri, 13 Aug 2010 05:46:59 -0700 (PDT)
Received: from sacmail2.verizon.com (sacmail2.verizon.com [192.76.84.41]) by core3.amsl.com (Postfix) with ESMTP id 8EFFD3A6817 for <conex@ietf.org>; Fri, 13 Aug 2010 05:46:58 -0700 (PDT)
Received: from irvintrmemf3.verizon.com (irvintrmemf3.verizon.com [138.83.34.103]) by sacmail2.verizon.com (8.13.7+Sun/8.13.3) with ESMTP id o7DCkWuY014238; Fri, 13 Aug 2010 08:47:21 -0400 (EDT)
X-AuditID: 8a532267-b7bd7ae0000040b1-a1-4c653ed8d343
Received: from smtptpa4.verizon.com ( [138.83.71.177]) by irvintrmemf3.verizon.com (Symantec Mail Security) with SMTP id A6.4C.16561.8DE356C4; Fri, 13 Aug 2010 07:47:20 -0500 (CDT)
Received: from FHDP1LUMXC7HB04.us.one.verizon.com (fhdp1lumxc7hb04.verizon.com [166.68.59.191]) by smtptpa4.verizon.com (8.13.3/8.13.3) with ESMTP id o7DClKGX014984; Fri, 13 Aug 2010 08:47:20 -0400 (EDT)
Received: from fhdp1lumxc7v11.us.one.verizon.com ([fe80::4c3d:3366:54ab:8118]) by FHDP1LUMXC7HB04.us.one.verizon.com ([2002:a644:3bbf::a644:3bbf]) with mapi; Fri, 13 Aug 2010 08:47:20 -0400
From: "Mcdysan, David E" <dave.mcdysan@verizon.com>
To: Bob Briscoe <rbriscoe@jungle.bt.co.uk>, Christopher Morrow <morrowc.lists@gmail.com>
Date: Fri, 13 Aug 2010 08:47:18 -0400
Thread-Topic: [conex] ConEx & DDoS
Thread-Index: Acs1f0HVn6/7UM7MTAKiunwV0zPVfwFYkg6Q
Message-ID: <2464076D83FAED4D985BF2622111AAC40F03CDF886@FHDP1LUMXC7V11.us.one.verizon.com>
References: <201008031942.o73JgGSW018260@bagheera.jungle.bt.co.uk><AANLkTikZFvkOQjNLuasif+vAjeJSac1E-BqR6pptn=7p@mail.gmail.com> <201008061551.o76FpKPZ010840@bagheera.jungle.bt.co.uk>
In-Reply-To: <201008061551.o76FpKPZ010840@bagheera.jungle.bt.co.uk>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-Brightmail-Tracker: AAAAAA==
Cc: "conex@ietf.org" <conex@ietf.org>
Subject: Re: [conex] ConEx & DDoS
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Aug 2010 12:47:14 -0000


> -----Original Message-----
> From: conex-bounces@ietf.org [mailto:conex-bounces@ietf.org]
> On Behalf Of Bob Briscoe
> Sent: Friday, August 06, 2010 11:51 AM
> To: Christopher Morrow
> Cc: conex@ietf.org
> Subject: Re: [conex] ConEx & DDoS
>
> Chris,
>
> At 04:37 04/08/2010, Christopher Morrow wrote:
> >(I think I sub'd to the list with this address...)
> >
> >On Tue, Aug 3, 2010 at 3:42 PM, Bob Briscoe
> <rbriscoe@jungle.bt.co.uk> wrote:
> > > Chris,
> > >
> > > During the ConEx w-g session last Tuesday in Maastricht you
> > > suggested we should not include DDoS mitigation as a use-case for
> > > ConEx. I was willing to
> > > agree as we don't need to court controversy.
> >
> >yup, no use ratholing if it's not central to the discussion.
> >

Several pages snipped.

On 8/6/2010 Bob Briscoe wrote:

>
> I guess what I'm saying is: We don't just change IP for chuckles. The
> job of ConEx is to limit congestion caused by profligacy, malice and
> accidents. If it can only do that against people who don't try too
> hard to push back, we should not spend our precious time on ConEx.

Please translate "hard to push back."  Does this mean respond to an exposed (indication) of congestion?

>
> IOW, we /should/ have some level of argument about whether ConEx can
> achieve what it aims to achieve.

Hopefully, we can reach agreement and not continue arguing about this. I am not sure there is yet a shared vision on what ConEx aims to achieve. :-)

I am looking for the use case document to capture an agreement on what ConEx aims to achieve. Do others in the group have a different view?

> Just not too much at this early
> stage, when we haven't even defined the ConEx protocol (as opposed to
> the re-ECN protocol).
>
>
> Bob
>
>
> >-chris
> >
> > > Bob
> > >

From christopher.morrow@gmail.com  Fri Aug 13 06:48:38 2010
Return-Path: <christopher.morrow@gmail.com>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id F3A2E3A6912 for <conex@core3.amsl.com>; Fri, 13 Aug 2010 06:48:37 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -102.599
X-Spam-Level: 
X-Spam-Status: No, score=-102.599 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id d-oudswtViXd for <conex@core3.amsl.com>; Fri, 13 Aug 2010 06:48:37 -0700 (PDT)
Received: from mail-gw0-f44.google.com (mail-gw0-f44.google.com [74.125.83.44]) by core3.amsl.com (Postfix) with ESMTP id CF3353A68F2 for <conex@ietf.org>; Fri, 13 Aug 2010 06:48:36 -0700 (PDT)
Received: by gwaa18 with SMTP id a18so1126749gwa.31 for <conex@ietf.org>; Fri, 13 Aug 2010 06:49:13 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:sender:received :in-reply-to:references:date:x-google-sender-auth:message-id:subject :from:to:cc:content-type; bh=yBfSwD8M4XivSNqvlSo33KpSb4LJPA1ZfTXWWqvvtQI=; b=HN//nxa07uBwVOPcwkICMw9wyQZMwdT/SwLqD0olj0jYVhqZ5vYDCpASh92IcSfJu8 /DciNqdK48/ZPismOdTVP2IO2TAeT/Ct+1NNdM2ZMl/b4hRlM22Ss0HwfGTs5YjYGYkK vQFJiTHqm8qBCy3ciPifC4OdG7G65UPu75/wg=
DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type; b=uqMLfNp7qznOiCplLvizKNNH8yX7YxeamUveTF5h9W4+uuWgOf/10yKTCPQhwh3bDW wRBGALzMGno4hsgArHrofmLiiPgmTusflHJkGNn418TjqIgl/6eVa8SuICb+7a488RpO m0FTRlkMAKHEc5F5Po7drX3oPdqWeOvajgepQ=
MIME-Version: 1.0
Received: by 10.90.117.18 with SMTP id p18mr1212228agc.114.1281707343169; Fri, 13 Aug 2010 06:49:03 -0700 (PDT)
Sender: christopher.morrow@gmail.com
Received: by 10.231.156.203 with HTTP; Fri, 13 Aug 2010 06:49:03 -0700 (PDT)
In-Reply-To: <2464076D83FAED4D985BF2622111AAC40F03CDF886@FHDP1LUMXC7V11.us.one.verizon.com>
References: <201008031942.o73JgGSW018260@bagheera.jungle.bt.co.uk> <AANLkTikZFvkOQjNLuasif+vAjeJSac1E-BqR6pptn=7p@mail.gmail.com> <201008061551.o76FpKPZ010840@bagheera.jungle.bt.co.uk> <2464076D83FAED4D985BF2622111AAC40F03CDF886@FHDP1LUMXC7V11.us.one.verizon.com>
Date: Fri, 13 Aug 2010 09:49:03 -0400
X-Google-Sender-Auth: m5ZoLdxdlplyzZ75RSZdWKOcWEs
Message-ID: <AANLkTi=Rvb+e4iFS8XyaUGovm2NFeun_b69wbxp3pBkt@mail.gmail.com>
From: Christopher Morrow <morrowc.lists@gmail.com>
To: "Mcdysan, David E" <dave.mcdysan@verizon.com>
Content-Type: text/plain; charset=ISO-8859-1
Cc: "conex@ietf.org" <conex@ietf.org>
Subject: Re: [conex] ConEx & DDoS
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Aug 2010 13:48:38 -0000

(still need to reply to bob...)

On Fri, Aug 13, 2010 at 8:47 AM, Mcdysan, David E
<dave.mcdysan@verizon.com> wrote:
>
>> IOW, we /should/ have some level of argument about whether ConEx can
>> achieve what it aims to achieve.
>
> Hopefully, we can reach agreement and not continue arguing about this. I am not sure there is yet a shared vision on what ConEx aims to achieve. :-)
>
> I am looking for the use case document to capture an agreement on what Conex aims to achieve. Do others in the group have a different view?
>

I would also like to see this... now back to reading bob's message (for me).

-chris

From john@jlc.net  Fri Aug 13 06:55:25 2010
Return-Path: <john@jlc.net>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 9E9FB3A6912 for <conex@core3.amsl.com>; Fri, 13 Aug 2010 06:55:25 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -103.663
X-Spam-Level: 
X-Spam-Status: No, score=-103.663 tagged_above=-999 required=5 tests=[AWL=-0.923, BAYES_20=-0.74, GB_MUTUALBENEFIT=2, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 8ngpp2LQOxf8 for <conex@core3.amsl.com>; Fri, 13 Aug 2010 06:55:24 -0700 (PDT)
Received: from mailhost.jlc.net (mailhost.jlc.net [199.201.159.4]) by core3.amsl.com (Postfix) with ESMTP id 7AF323A6950 for <conex@ietf.org>; Fri, 13 Aug 2010 06:55:24 -0700 (PDT)
Received: by mailhost.jlc.net (Postfix, from userid 104) id 8BE9D33C4B; Fri, 13 Aug 2010 09:56:01 -0400 (EDT)
Date: Fri, 13 Aug 2010 09:56:01 -0400
From: John Leslie <john@jlc.net>
To: Mikael Abrahamsson <swmike@swm.pp.se>
Message-ID: <20100813135601.GJ16820@verdi>
References: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se> <20100812123814.GF16820@verdi> <alpine.DEB.1.10.1008122125080.8562@uplift.swm.pp.se>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.1.10.1008122125080.8562@uplift.swm.pp.se>
User-Agent: Mutt/1.4.1i
Cc: conex@ietf.org
Subject: [conex] Off-Topic: Congestion Settlements
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Aug 2010 13:55:25 -0000

Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Thu, 12 Aug 2010, John Leslie wrote:
> 
>> Implementing a "billing system" for congestion isn't hard at all.
>> To the user, an ISP sets a congestion allowance, different for
>> different service levels: any service call is an opportunity to sell
>> a higher service level.
> 
> It's hard when it has to be done inter-provider.

   That part _isn't_ inter-provider: it's strictly ISP-to-customer.

>> To its peers, each ISP has a system of settlements: if congestion
>> volume matches "closely enough" no money changes hands; if you want
>> to send more congestion volume than you receive, it's a business
>> decision whether to pay the agreed price. This is simple.
> 
> I don't agree. Most of the time today no money changes hands when doing 
> peering,

   "Close enough" is in the eye of the beholder. For a large fraction
of peering, I'd expect 2:1 ratios to be "close enough".

> it's done on a mutual benefit basis. Having congestion metrics to 
> keep track of means considerable complication compared to what we have 
> today.

   Tracking congestion volume is certainly simple enough.

   Any complexity arises from what to do when it gets out of balance.

>> Question to clarify: is what I outlined above "too complex" or is it 
>> somehow incomplete?
> 
> Well, I don't understand how it would work in real life, so I guess it 
> could be both because I'm trying to fill in blanks and there might be 
> simpler solutions that I can't think of.

   There certainly could be simpler solutions: I'm not claiming otherwise.

   As for the "blanks" to fill in, basically it's what to charge, and
perhaps there's an implied blank in how quickly charges may change.

   I think we can safely set an upper bound of, say, $1 per gigabyte of
congestion-volume: I doubt anyone will agree to pay more than that. The
lower bound is clearly zero. (Just keeping track is sufficient for many
peering cases now.)

   As for what constitutes "close enough", I'd guess an upper bound of
2:1 and a lower bound of 1.1:1 -- I don't believe anyone's interested in
exceeding those bounds.

   As for how quickly charges may change, I'd say an upper bound of
10 cents per year (increase, per gigabyte) and a lower bound of 1 cent
per year -- beyond that business planning becomes too hard. (But of
course, forbearance of charging will be common.) ISPs will arrange
their customer service levels to cover the potential increases.
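   Putting the arithmetic together, a minimal sketch of the settlement
rule I outlined: peers compare congestion-volume in each direction; if
the ratio is "close enough" no money changes hands, otherwise the excess
over that ratio is billed at the agreed price per gigabyte. The $1/GB
price and 2:1 ratio are the hypothetical bounds from this discussion,
not anything agreed anywhere:

```python
# Illustrative inter-provider congestion settlement: nothing owed while
# sent/received congestion-volume stays within the "close enough" ratio;
# beyond it, only the excess volume is billed at the agreed price.
def settlement_usd(sent_gb: float, received_gb: float,
                   price_per_gb: float = 1.00,  # upper-bound guess: $1/GB
                   close_enough: float = 2.0) -> float:
    """Amount the sending peer owes, in dollars (0.0 if within the ratio)."""
    if received_gb > 0 and sent_gb / received_gb <= close_enough:
        return 0.0
    excess_gb = sent_gb - close_enough * received_gb
    return excess_gb * price_per_gb


assert settlement_usd(150, 100) == 0.0    # 1.5:1 is within 2:1, no money moves
assert settlement_usd(300, 100) == 100.0  # 100 GB beyond 2:1, billed at $1/GB
```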

   Are there other "blanks" you're concerned with?

--
John Leslie <john@jlc.net>

From john@jlc.net  Fri Aug 13 07:02:23 2010
Return-Path: <john@jlc.net>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id DAA173A693E for <conex@core3.amsl.com>; Fri, 13 Aug 2010 07:02:23 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -105.544
X-Spam-Level: 
X-Spam-Status: No, score=-105.544 tagged_above=-999 required=5 tests=[AWL=1.055, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id y8PxSJSHrq1T for <conex@core3.amsl.com>; Fri, 13 Aug 2010 07:02:22 -0700 (PDT)
Received: from mailhost.jlc.net (mailhost.jlc.net [199.201.159.4]) by core3.amsl.com (Postfix) with ESMTP id 486FD3A69FB for <conex@ietf.org>; Fri, 13 Aug 2010 07:02:22 -0700 (PDT)
Received: by mailhost.jlc.net (Postfix, from userid 104) id 184EC33C7F; Fri, 13 Aug 2010 10:02:59 -0400 (EDT)
Date: Fri, 13 Aug 2010 10:02:59 -0400
From: John Leslie <john@jlc.net>
To: Mikael Abrahamsson <swmike@swm.pp.se>
Message-ID: <20100813140259.GK16820@verdi>
References: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se> <20100812123814.GF16820@verdi> <001b01cb3a2b$80f47420$82dd5c60$@com> <alpine.DEB.1.10.1008130933210.8562@uplift.swm.pp.se>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.1.10.1008130933210.8562@uplift.swm.pp.se>
User-Agent: Mutt/1.4.1i
Cc: conex@ietf.org
Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Aug 2010 14:02:24 -0000

Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Thu, 12 Aug 2010, Toby Moncaster wrote:
>> Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> 
>>>> Implementing a billing system for congestion would be approximately
>>>> as hard as implementing a QoS system end-to-end and charging extra
>>>> for high priority traffic, so why wouldn't we do that instead?
>>
>> Two reasons, firstly it is far more flexible in how it is used, secondly
>> it is more future-proof...
> 
> I don't think I agree on this point. QoS is done between ISPs without 
> really involving end systems (apart from what the systems mark). What 
> we're proposing here needs to end up in the IP stack of the end system, 
> and we all know how many people still use Windows 95/98/XP. I'd say there 
> definitely is parity in how hard both schemes would be to implement.

   "Parity" is a funny name for this...

   QoS has had plenty of time to deploy; and has shown essentially zero
deployment beyond single-provider uses. I see no reason to believe it
will deploy any faster in dual-provider uses, or at all in three-or-more-
provider uses.

   Operating system software turns over with great regularity; and the
types of changes in end-user systems we're talking about don't even need
OS turnover. Recall that non-ConEx traffic is expected to co-exist with
ConEx traffic for the foreseeable future.

--
John Leslie <john@jlc.net>

From christopher.morrow@gmail.com  Fri Aug 13 07:07:45 2010
Return-Path: <christopher.morrow@gmail.com>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id C761D3A6846 for <conex@core3.amsl.com>; Fri, 13 Aug 2010 07:07:45 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -102.339
X-Spam-Level: 
X-Spam-Status: No, score=-102.339 tagged_above=-999 required=5 tests=[AWL=0.260, BAYES_00=-2.599, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id pg1qkoolV46o for <conex@core3.amsl.com>; Fri, 13 Aug 2010 07:07:44 -0700 (PDT)
Received: from mail-gy0-f172.google.com (mail-gy0-f172.google.com [209.85.160.172]) by core3.amsl.com (Postfix) with ESMTP id 9F0443A67AC for <conex@ietf.org>; Fri, 13 Aug 2010 07:07:44 -0700 (PDT)
Received: by gyg8 with SMTP id 8so1113484gyg.31 for <conex@ietf.org>; Fri, 13 Aug 2010 07:08:21 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:sender:received :in-reply-to:references:date:x-google-sender-auth:message-id:subject :from:to:cc:content-type:content-transfer-encoding; bh=nIE84BkmVCuqs1zZ3vAUnAesJyWZfODTiNdJI2D57Rw=; b=ret90Mov9q0o6p2YAltQlC1LvP2PFJ+K6yMMzyO56t5ZPWQwFwPSLn9uc4ynSf8/Xr l9FDjb8ugqnGwAWNtl4u6vWUF33wimu05JJFAq+C/HHkjZZ2uy58529bd5cXyEPfd/Ij mZj5M2I8GjwifYM4afh+erGgmVrmnaUIBjqO8=
DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type :content-transfer-encoding; b=GFx5nKwOlwfmz82J8+dvEIOGbkuDjBpZUaUq6YaZr3k2kiniXyohIxvn9cmBHHUZSw ELbT1C4M4FoavK5pAY5VgObB0vWobgvVdXXth0z6Ws+q+jdFzPEQC+MXHYvP66GtRZsn fvMjP5h0FLb5Ajlk18kRLGPXPGic8dShpOyvU=
MIME-Version: 1.0
Received: by 10.150.170.7 with SMTP id s7mr2123406ybe.93.1281708501332; Fri, 13 Aug 2010 07:08:21 -0700 (PDT)
Sender: christopher.morrow@gmail.com
Received: by 10.231.156.203 with HTTP; Fri, 13 Aug 2010 07:08:21 -0700 (PDT)
In-Reply-To: <20100813135601.GJ16820@verdi>
References: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se> <20100812123814.GF16820@verdi> <alpine.DEB.1.10.1008122125080.8562@uplift.swm.pp.se> <20100813135601.GJ16820@verdi>
Date: Fri, 13 Aug 2010 10:08:21 -0400
X-Google-Sender-Auth: yAyIw3l9GfcspWTsz_tOk769HKo
Message-ID: <AANLkTinZdXqJyA878vAamxjzQ2D2wfj9CZs-WsOQ+TvR@mail.gmail.com>
From: Christopher Morrow <morrowc.lists@gmail.com>
To: John Leslie <john@jlc.net>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable
Cc: conex@ietf.org
Subject: Re: [conex] Off-Topic: Congestion Settlements
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Aug 2010 14:07:45 -0000

On Fri, Aug 13, 2010 at 9:56 AM, John Leslie <john@jlc.net> wrote:
> Mikael Abrahamsson <swmike@swm.pp.se> wrote:
>> On Thu, 12 Aug 2010, John Leslie wrote:
>>
>>> Implementing a "billing system" for congestion isn't hard at all.
>>> To the user, an ISP sets a congestion allowance, different for
>>> different service levels: any service call is an opportunity to sell
>>> a higher service level.
>>
>> It's hard when it has to be done inter-provider.
>
> =A0 That part _isn't_ inter-provider: it's strictly ISP-to-customer.
>
>>> To its peers, each ISP has a system of settlements: if congestion
>>> volume matches "closely enough" no money changes hands; if you want
>>> to send more congestion volume than you receive, it's a business
>>> decision whether to pay the agreed price. This is simple.
>>
>> I don't agree. Most of the time today no money changes hands when doing
>> peering,

it's possible that the difference here is 'peer' vs 'customer', and
that time when someone changes from 'peer' to 'customer' (when their
ratio gets out of whack). There are implications beyond $$ changing
hands though: routing policy changes, and expected traffic destinations
will change.

When conex leans toward 'settlement' I tend to look at it askance...
For ISP -> Customer discussions today the 'settlement' already
happens; the bits above (and what I snipped below) seem to be what my
first paragraph outlines: DePeering.

-chris

From john@jlc.net  Fri Aug 13 07:30:15 2010
Return-Path: <john@jlc.net>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 896523A6A03 for <conex@core3.amsl.com>; Fri, 13 Aug 2010 07:30:15 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -105.597
X-Spam-Level: 
X-Spam-Status: No, score=-105.597 tagged_above=-999 required=5 tests=[AWL=1.002, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id Vj5s9-okfP4K for <conex@core3.amsl.com>; Fri, 13 Aug 2010 07:30:13 -0700 (PDT)
Received: from mailhost.jlc.net (mailhost.jlc.net [199.201.159.4]) by core3.amsl.com (Postfix) with ESMTP id 269733A686C for <conex@ietf.org>; Fri, 13 Aug 2010 07:30:13 -0700 (PDT)
Received: by mailhost.jlc.net (Postfix, from userid 104) id D482633C3E; Fri, 13 Aug 2010 10:30:43 -0400 (EDT)
Date: Fri, 13 Aug 2010 10:30:43 -0400
From: John Leslie <john@jlc.net>
To: Christopher Morrow <morrowc.lists@gmail.com>
Message-ID: <20100813143043.GL16820@verdi>
References: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se> <20100812123814.GF16820@verdi> <alpine.DEB.1.10.1008122125080.8562@uplift.swm.pp.se> <20100813135601.GJ16820@verdi> <AANLkTinZdXqJyA878vAamxjzQ2D2wfj9CZs-WsOQ+TvR@mail.gmail.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <AANLkTinZdXqJyA878vAamxjzQ2D2wfj9CZs-WsOQ+TvR@mail.gmail.com>
User-Agent: Mutt/1.4.1i
Cc: conex@ietf.org
Subject: Re: [conex] Off-Topic: Congestion Settlements
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Aug 2010 14:30:15 -0000

Christopher Morrow <morrowc.lists@gmail.com> wrote:
> On Fri, Aug 13, 2010 at 9:56 AM, John Leslie <john@jlc.net> wrote:
>> Mikael Abrahamsson <swmike@swm.pp.se> wrote:
>>> On Thu, 12 Aug 2010, John Leslie wrote:
>>>
>>>> To its peers, each ISP has a system of settlements: if congestion
>>>> volume matches "closely enough" no money changes hands; if you want
>>>> to send more congestion volume than you receive, it's a business
>>>> decision whether to pay the agreed price. This is simple.
>>>
>>> I don't agree. Most of the time today no money changes hands when doing
>>> peering,
> 
> it's possible that the difference here is 'peer' vs 'customer' and
> that time when someone changes from 'peer' to 'customer' (when their
> ratio gets out of whack)

   I'm not sure we all understand what Christopher is saying...

   In current ISP-ISP relations, "peering" refers to an arrangement
where routes are exchanged, and no traffic is default-routed in either
direction. (There are other limits as well.) Generally, no money changes
hands in a "peering" arrangement (except for "fabric" over which traffic
flows to and from the peering point).

   When "peering" conditions (which may include balance of traffic in
one direction vs. the other) are exceeded, one of the peers refuses to
continue the "peering" arrangement, and offers the other peer a
"customer" arrangement, typically including the right to default-route
customer traffic to the "transit provider".

> There are implications beyond $$ changing hands though, routing policy
> changes and expected traffic destinations will change.

   Exactly! ...if the relationship changes to "customer".

   It is certainly premature to say whether congestion-volume will become
a "condition" of "peering", but I won't claim it necessarily won't.

   If, in fact, an ISP is offered a "peering" arrangement which _limits_
congestion-volume (instead of charging for the imbalance), that ISP
will need to limit the congestion-volume _sent_to_that_peer.

   This suggests that ConEx "congestion-expected" traffic might travel
a different route than non-ConEx traffic, which I agree is possible.
This is probably sub-optimal, but not as bad as it sounds. The ISP that
declines this traffic at one peering point will have to accept it at
another, and any increased latency will disadvantage its customers at
least as much as the sender. I don't expect this case will prevail for
long.

> When conex leans toward 'settlement' I tend to look at it askance...
> For ISP -> Customer discussions today the 'settlement' already
> happens, the bits above (and what I snipped below) seem to be what my
> first paragraph outlines: DePeering.

   We have been through the history of de-peering based on total volume.
It took a while, but saner heads _did_ (mostly) prevail.

   (BTW, I don't blame folks for worrying about the possibility of
changing peering agreements: it tends to be painful!)

--
John Leslie <john@jlc.net>

From stuart.venters@adtran.com  Fri Aug 13 07:31:48 2010
Return-Path: <stuart.venters@adtran.com>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 0FC643A68AC for <conex@core3.amsl.com>; Fri, 13 Aug 2010 07:31:44 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -5.299
X-Spam-Level: 
X-Spam-Status: No, score=-5.299 tagged_above=-999 required=5 tests=[AWL=-1.300, BAYES_50=0.001, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id N6HcJr9WvIMq for <conex@core3.amsl.com>; Fri, 13 Aug 2010 07:31:32 -0700 (PDT)
Received: from p02c12o141.mxlogic.net (p02c12o141.mxlogic.net [208.65.145.74]) by core3.amsl.com (Postfix) with ESMTP id 5D20A3A6847 for <conex@ietf.org>; Fri, 13 Aug 2010 07:31:30 -0700 (PDT)
Received: from unknown [208.61.208.10] by p02c12o141.mxlogic.net(mxl_mta-6.7.0-0) with SMTP id 467556c4.0.146298.00-300.301805.p02c12o141.mxlogic.net (envelope-from <stuart.venters@adtran.com>);  Fri, 13 Aug 2010 08:32:08 -0600 (MDT)
X-MXL-Hash: 4c6557687e2f734a-9bd68e6155a25de1a5f0223920571b0b4b9a72c2
Received: from EXV1.corp.adtran.com ([172.22.48.215]) by corp-exfr2.corp.adtran.com with Microsoft SMTPSVC(6.0.3790.3959);  Fri, 13 Aug 2010 09:31:47 -0500
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Date: Fri, 13 Aug 2010 09:31:47 -0500
Message-ID: <8F242B230AD6474C8E7815DE0B4982D7179FB77C@EXV1.corp.adtran.com>
In-Reply-To: <alpine.DEB.1.10.1008130933210.8562@uplift.swm.pp.se>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
Thread-Index: Acs6u04rWBX2FQfNQAKuMBQ/++wwvAALrJ8Q
From: "STUART VENTERS" <stuart.venters@adtran.com>
To: "Mikael Abrahamsson" <swmike@swm.pp.se>
X-OriginalArrivalTime: 13 Aug 2010 14:31:47.0149 (UTC) FILETIME=[427387D0:01CB3AF4]
X-Spam: [F=0.2000000000; CM=0.500; S=0.200(2010073001)]
X-MAIL-FROM: <stuart.venters@adtran.com>
X-SOURCE-IP: [208.61.208.10]
X-AnalysisOut: [v=1.0 c=1 a=unNN7i_3j6UA:10 a=VphdPIyG4kEA:10 a=8nJEP1OIZ-]
X-AnalysisOut: [IA:10 a=+7HwXwTrdIRToc/YTA6+uA==:17 a=Pt9pIbhoEgc_jLmZiToA]
X-AnalysisOut: [:9 a=eVoRnX-LPkpV5wVXxVnSBwbfzPcA:4 a=wPNLvfGTeEIA:10]
Cc: conex@ietf.org
Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Aug 2010 14:31:48 -0000

>>>
>>>> Also, calling congestion "unforeseen" is a stretch, it very seldom is.
>>>
>>>    Actually, it is often "unforeseen" by network engineers, who lack
>>> information of what customers will want tomorrow.
>>
>> Indeed, and it is certainly unforeseen by the users that have caused it...
>
> That I can agree with, customers don't expect congestion and they expect
> their access speed to be the speed at which they can communicate.

It seems to me that an expectation of congestion is a good thing if it
lets you usually run at a speed much higher than you could without it.
The keys are an acceptable definition of 'usually' and how well the
network shares during congestion.


{Begin thought experiment}
If you dreamed of building an Internet without congestion...

In order to meet an absolutely congestionless expectation you would have
to build the Internet backbone without using statistical multiplexing.
That is, the available rate would be limited by the access rate, with
severe topology constraints in the backbone mesh.

Checking the economics on the back of an envelope:
   If a port on a backbone router costs $1/Mbit/sec and on average 10
hops are required, then the backbone switching cost would be $10/Mbit.
A $100 backbone switching cost for 10Meg service seems possible, assuming
the starting numbers are right and you don't put in much path
redundancy.  (Perhaps someone here has an idea how the operating and
fiber costs would fit in here.)

From a practicality standpoint in the US, if 100M customers each get 10M
of service, that's 10**15 bits/sec, or 10,000 100GigE's.  That seems a
lot, but maybe possible if backbone switching technology improves.

History shows that if you did build such a backbone, some folks would
notice that with 10M access links, it's idle most of the time.  A
reasonable response is to switch to 100M access for some new killer app,
which puts us back where we started.  Which keeps the definition of
'usually' and sharing during congestion on the table.
{End thought experiment}
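The back-of-envelope arithmetic above can be checked mechanically; a
small sketch, where the $1/Mbit/s port cost, 10-hop average, and
100M-customer figures are the assumptions stated in the text (not
measured costs):

```python
# Back-of-envelope check of the thought experiment's numbers.
# All inputs are the stated assumptions from the text, not measured data.

port_cost_per_mbit = 1.0    # $/Mbit/s per backbone router port (assumed)
hops = 10                   # average backbone hops end-to-end (assumed)
access_rate_mbit = 10       # 10 Mbit/s service per customer
customers = 100e6           # 100M US customers (assumed)

# Switching cost: one port-cost per hop traversed.
switching_cost_per_mbit = port_cost_per_mbit * hops          # $10/Mbit
cost_per_customer = switching_cost_per_mbit * access_rate_mbit  # $100

# Aggregate capacity if every access link could run flat out.
aggregate_bits_per_sec = customers * access_rate_mbit * 1e6  # 1e15 bit/s
backbone_100gige_ports = aggregate_bits_per_sec / 100e9      # 10,000

print(f"${switching_cost_per_mbit:.0f}/Mbit switching cost")
print(f"${cost_per_customer:.0f} per 10 Mbit/s customer")
print(f"{backbone_100gige_ports:.0f} x 100GigE of backbone capacity")
```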




From christopher.morrow@gmail.com  Fri Aug 13 13:18:54 2010
Return-Path: <christopher.morrow@gmail.com>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 719DA3A6852 for <conex@core3.amsl.com>; Fri, 13 Aug 2010 13:18:54 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -102.382
X-Spam-Level: 
X-Spam-Status: No, score=-102.382 tagged_above=-999 required=5 tests=[AWL=0.217, BAYES_00=-2.599, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id y47J7tDpro6W for <conex@core3.amsl.com>; Fri, 13 Aug 2010 13:18:52 -0700 (PDT)
Received: from mail-iw0-f172.google.com (mail-iw0-f172.google.com [209.85.214.172]) by core3.amsl.com (Postfix) with ESMTP id 2B5F13A6A36 for <conex@ietf.org>; Fri, 13 Aug 2010 13:18:52 -0700 (PDT)
Received: by iwn3 with SMTP id 3so168096iwn.31 for <conex@ietf.org>; Fri, 13 Aug 2010 13:19:28 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:sender:received :in-reply-to:references:date:x-google-sender-auth:message-id:subject :from:to:cc:content-type:content-transfer-encoding; bh=ecXRc05hgnHzWWbSxyeqadobnSMXc2fylfGVJbdJHU0=; b=UUwGqXsZJT0R3ahDR0kV1Eu1Pe2qBKnTRu+mvNNR/mSZVquNzHShyPNTndQVwdIoBA HyxmnJAyq51g5muUopq1T1VrJsDF2Rm2DeubU5jZMvQ7G3TYP+A4YxRDHuurNuykyteP P9OX+9O9vzDu1zRZLcOkpOFFsbd8lItAtS4vc=
DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type :content-transfer-encoding; b=Yj8dzpa4BLJkl2ijStuJpx0OXUY84dcIhffMRE9xUvrFxekLiZd/nc4/sczfHVSuFS +FyOxq4X21Z2Y4gDy3gP+YgfofZsm4ree+QqRrllGQFURj+nDfWZfFgiHwAiuVa6OKcJ yxAEjdHcCEaS4NfUntogxGBbEcD89ll7BRAtc=
MIME-Version: 1.0
Received: by 10.231.59.15 with SMTP id j15mr2098179ibh.172.1281730768815; Fri, 13 Aug 2010 13:19:28 -0700 (PDT)
Sender: christopher.morrow@gmail.com
Received: by 10.231.156.203 with HTTP; Fri, 13 Aug 2010 13:19:28 -0700 (PDT)
In-Reply-To: <20100813143043.GL16820@verdi>
References: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se> <20100812123814.GF16820@verdi> <alpine.DEB.1.10.1008122125080.8562@uplift.swm.pp.se> <20100813135601.GJ16820@verdi> <AANLkTinZdXqJyA878vAamxjzQ2D2wfj9CZs-WsOQ+TvR@mail.gmail.com> <20100813143043.GL16820@verdi>
Date: Fri, 13 Aug 2010 16:19:28 -0400
X-Google-Sender-Auth: W_ftmuUnP83-NKmsRjPuBXFoIos
Message-ID: <AANLkTinQEZ4KdJqLsK4H9jW-e8=69+n7B9q0ZCFi+cgr@mail.gmail.com>
From: Christopher Morrow <morrowc.lists@gmail.com>
To: John Leslie <john@jlc.net>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable
Cc: conex@ietf.org
Subject: Re: [conex] Off-Topic: Congestion Settlements
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Aug 2010 20:18:54 -0000

On Fri, Aug 13, 2010 at 10:30 AM, John Leslie <john@jlc.net> wrote:
> Christopher Morrow <morrowc.lists@gmail.com> wrote:
>> On Fri, Aug 13, 2010 at 9:56 AM, John Leslie <john@jlc.net> wrote:
>>> Mikael Abrahamsson <swmike@swm.pp.se> wrote:
>>>> On Thu, 12 Aug 2010, John Leslie wrote:
>>>>
>>>>> To its peers, each ISP has a system of settlements: if congestion
>>>>> volume matches "closely enough" no money changes hands; if you want
>>>>> to send more congestion volume than you receive, it's a business
>>>>> decision whether to pay the agreed price. This is simple.
>>>>
>>>> I don't agree. Most of the time today no money changes hands when doing
>>>> peering,
>>
>> it's possible that the difference here is 'peer' vs 'customer' and
>> that time when someone changes from 'peer' to 'customer' (when their
>> ratio gets out of whack)
>
>    I'm not sure we all understand what Christopher is saying...

apologies, sometimes I forget not everyone uses the same terms, and
peering is certainly overloaded in the networking world :)

>    In current ISP-ISP relations, "peering" refers to an arrangement
> where routes are exchanged, and no traffic is default-routed in either
> direction. (There are other limits as well.) Generally, no money changes
> hands in a "peering" arrangement (except for "fabric" over which traffic
> flows to and from the peering point).
>
>    When "peering" conditions (which may include balance of traffic in
> one direction vs. the other) are exceeded, one of the peers refuses to
> continue the "peering" arrangement, and offers the other peer a
> "customer" arrangement, typically including the right to default-route
> customer traffic to the "transit provider".
>
>> There are implications beyond $$ changing hands though, routing policy
>> changes and expected traffic destinations will change.
>
>    Exactly! ...if the relationship changes to "customer".

Which I think is a perfectly fair way to look at this discussion...
keep in mind that most 'peering' arrangements (between larger networks
at least) have other factors aside from 'i send you 1bit, you can send
me 2bits' to keep track of, things like:
  o latency improvements
  o cost to ship the bits across transit/pay links
  o other business relationships (telephone call exchange)
  o ceo-golf-timez
(you get the picture)

Adding congestion cause/effect isn't unreasonable.

>
>    It is certainly premature to say whether congestion-volume will become
> a "condition" of "peering", but I won't claim it necessarily won't.
>
>    If, in fact, an ISP is offered a "peering" arrangement which _limits_
> congestion-volume (instead of charging for the imbalance), that ISP
> will need to limit the congestion-volume _sent_to_that_peer.
>
>    This suggests that ConEx "congestion-expected" traffic might travel
> a different route than non-ConEx traffic, which I agree is possible.

this gets hairy: the current routing protocols don't really have a
method to account for this... so you'd also have to affect
bgp/isis/etc, and that strikes me as very dangerous; more state in the
network here is a step in the wrong direction. (plus what does that do
to paths and routes and ... yikes, no.)

> This is probably sub-optimal, but not as bad as it sounds. The ISP that
> declines this traffic at one peering point will have to accept it at
> another, and any increased latency will disadvantage its customers at
> least as much as the sender. I don't expect this case will prevail for
> long.
>
>> When conex leans toward 'settlement' I tend to look at it askance...
>> For ISP -> Customer discussions today the 'settlement' already
>> happens, the bits above (and what I snipped below) seem to be what my
>> first paragraph outlines: DePeering.
>
>    We have been through the history of de-peering based on total volume.
> It took a while, but saner heads _did_ (mostly) prevail.

we still have these events... about every 9-12 months some large
provider depeers another. It's painful; sometimes it gets reverted,
sometimes not. The 'settlement' portion I was getting at wasn't really
about de-peering so much as deciding if/when two networks should decide
to pay each other for 'congestion-causing' traffic.

To me that sounds a whole lot like the settlement system (which takes
months/years to come to resolution, apparently) that exists in the
long-distance telephone world. That sort of thing is a non-starter, I
believe. Using the congestion contribution data in peering agreements
isn't as far-fetched, I suppose.

>    (BTW, I don't blame folks for worrying about the possibility of
> changing peering agreements: it tends to be painful!)

and take a very long time, so... unless a peer is constantly causing
problems, they tend not to get addressed... 'constantly' means 24/7 for
months or a year or more. It's very hard to re-arrange things once you
set up (settlement-free) peering, and the cost to upgrade a 'peer' port
is likely cheaper than upgrading transit, plus what the transit costs
would be for the same traffic flows reverting to transit paths.

-chris

From carlberg@g11.org.uk  Fri Aug 13 13:35:45 2010
Return-Path: <carlberg@g11.org.uk>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 0827C3A694A for <conex@core3.amsl.com>; Fri, 13 Aug 2010 13:35:45 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.112
X-Spam-Level: 
X-Spam-Status: No, score=-2.112 tagged_above=-999 required=5 tests=[AWL=0.488,  BAYES_00=-2.599]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id ZkoV441McDmC for <conex@core3.amsl.com>; Fri, 13 Aug 2010 13:35:36 -0700 (PDT)
Received: from portland.eukhosting.net (portland.eukhosting.net [92.48.97.5]) by core3.amsl.com (Postfix) with ESMTP id 816DA3A6A01 for <conex@ietf.org>; Fri, 13 Aug 2010 13:35:29 -0700 (PDT)
Received: from c-76-111-69-4.hsd1.va.comcast.net ([76.111.69.4]:62586 helo=[192.168.0.20]) by portland.eukhosting.net with esmtpa (Exim 4.69) (envelope-from <carlberg@g11.org.uk>) id 1Ok0ym-0003se-9j; Fri, 13 Aug 2010 20:35:52 +0000
Mime-Version: 1.0 (Apple Message framework v1081)
Content-Type: text/plain; charset=us-ascii
From: ken carlberg <carlberg@g11.org.uk>
In-Reply-To: <AANLkTinQEZ4KdJqLsK4H9jW-e8=69+n7B9q0ZCFi+cgr@mail.gmail.com>
Date: Fri, 13 Aug 2010 16:36:03 -0400
Content-Transfer-Encoding: quoted-printable
Message-Id: <78204118-D976-4E7D-ABAD-F2D87A038848@g11.org.uk>
References: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se> <20100812123814.GF16820@verdi> <alpine.DEB.1.10.1008122125080.8562@uplift.swm.pp.se> <20100813135601.GJ16820@verdi> <AANLkTinZdXqJyA878vAamxjzQ2D2wfj9CZs-WsOQ+TvR@mail.gmail.com> <20100813143043.GL16820@verdi> <AANLkTinQEZ4KdJqLsK4H9jW-e8=69+n7B9q0ZCFi+cgr@mail.gmail.com>
To: Christopher Morrow <morrowc.lists@gmail.com>
X-Mailer: Apple Mail (2.1081)
X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
X-AntiAbuse: Primary Hostname - portland.eukhosting.net
X-AntiAbuse: Original Domain - ietf.org
X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12]
X-AntiAbuse: Sender Address Domain - g11.org.uk
X-Source: 
X-Source-Args: 
X-Source-Dir: 
Cc: conex@ietf.org
Subject: Re: [conex] Off-Topic: Congestion Settlements
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Aug 2010 20:35:45 -0000

On Aug 13, 2010, at 4:19 PM, Christopher Morrow wrote:

>>   If, in fact, an ISP is offered a "peering" arrangement which _limits_
>> congestion-volume (instead of charging for the imbalance), that ISP
>> will need to limit the congestion-volume _sent_to_that_peer.
>>
>>   This suggests that ConEx "congestion-expected" traffic might travel
>> a different route than non-ConEx traffic, which I agree is possible.
>
> this gets hairy, the current routing protocols don't really have a
> method to account for this... so you'd also have to affect
> bgp/isis/etc and that strikes me as very dangerous, more state in the
> network here is a step in the wrong direction. (plus what does that do
> to paths and routes and ... yikes no.)

What about the case of more than one equal-cost path via ISIS/OSPF,
wherein the "congestion-expected" traffic travels a different path than
the one normally selected through a simple hash?  I.e., change the tuple
for another path (or even an LSP, if we're thinking MPLS) at the ingress
router, so that perhaps a minor change in the route may help alleviate
the congestive condition.
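The idea can be sketched as follows: ECMP path selection typically
hashes the flow 5-tuple to pick a next hop, so re-hashing with a
perturbed input can steer congestion-expected traffic onto a different
member of the equal-cost group. The extra "selector" input below is
hypothetical (no ConEx-aware hash input is standardized); real routers
hash the 5-tuple, often with a router-local seed:

```python
# Illustrative sketch of steering "congestion-expected" traffic onto a
# different equal-cost path by perturbing the ECMP hash input.
# The 'selector' parameter is hypothetical, purely for illustration.

import hashlib

def ecmp_path(five_tuple: tuple, n_paths: int, selector: int = 0) -> int:
    """Pick one of n_paths equal-cost next hops by hashing the flow tuple."""
    key = repr((five_tuple, selector)).encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % n_paths

# src addr, dst addr, protocol, src port, dst port
flow = ("192.0.2.1", "198.51.100.7", 6, 52100, 443)

normal = ecmp_path(flow, n_paths=4)
# For congestion-expected traffic, perturb the hash input so the same
# flow (usually) lands on a different member of the equal-cost group.
alternate = ecmp_path(flow, n_paths=4, selector=1)
print(normal, alternate)
```

Note the hash is deterministic per flow, so packets within a flow stay
on one path and don't get reordered; only the path assignment of the
whole flow changes.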

-ken


From john@jlc.net  Fri Aug 13 16:54:22 2010
Return-Path: <john@jlc.net>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id E954D3A67EC for <conex@core3.amsl.com>; Fri, 13 Aug 2010 16:54:22 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -105.645
X-Spam-Level: 
X-Spam-Status: No, score=-105.645 tagged_above=-999 required=5 tests=[AWL=0.954, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 2WJ4wSlROR8C for <conex@core3.amsl.com>; Fri, 13 Aug 2010 16:54:21 -0700 (PDT)
Received: from mailhost.jlc.net (mailhost.jlc.net [199.201.159.4]) by core3.amsl.com (Postfix) with ESMTP id DA7D13A67B3 for <conex@ietf.org>; Fri, 13 Aug 2010 16:54:19 -0700 (PDT)
Received: by mailhost.jlc.net (Postfix, from userid 104) id 9A6FD33C46; Fri, 13 Aug 2010 19:54:56 -0400 (EDT)
Date: Fri, 13 Aug 2010 19:54:56 -0400
From: John Leslie <john@jlc.net>
To: Christopher Morrow <morrowc.lists@gmail.com>
Message-ID: <20100813235456.GO16820@verdi>
References: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se> <20100812123814.GF16820@verdi> <alpine.DEB.1.10.1008122125080.8562@uplift.swm.pp.se> <20100813135601.GJ16820@verdi> <AANLkTinZdXqJyA878vAamxjzQ2D2wfj9CZs-WsOQ+TvR@mail.gmail.com> <20100813143043.GL16820@verdi> <AANLkTinQEZ4KdJqLsK4H9jW-e8=69+n7B9q0ZCFi+cgr@mail.gmail.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <AANLkTinQEZ4KdJqLsK4H9jW-e8=69+n7B9q0ZCFi+cgr@mail.gmail.com>
User-Agent: Mutt/1.4.1i
Cc: conex@ietf.org
Subject: Re: [conex] Off-Topic: Congestion Settlements
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 13 Aug 2010 23:54:23 -0000

Christopher Morrow <morrowc.lists@gmail.com> wrote:
> On Fri, Aug 13, 2010 at 10:30 AM, John Leslie <john@jlc.net> wrote:
> 
>> It is certainly premature to say whether congestion-volume will become
>> a "condition" of "peering", but I won't claim it necessarily won't.

   To be clear, I agree this is "possible"; but I didn't say it was
"reasonable", or even "likely"...

>> If, in fact, an ISP is offered a "peering" arrangement which _limits_
>> congestion-volume (instead of charging for the imbalance), that ISP
>> will need to limit the congestion-volume _sent_to_that_peer.
>>
>> This suggests that ConEx "congestion-expected" traffic might travel
>> a different route than non-ConEx traffic, which I agree is possible.
>
> this gets hairy, the current routing protocols don't really have a
> method to account for this...

   All the more reason it shouldn't happen!

   Note also, that under current peering arrangements, congestion-volume
won't be counted at all, so the question doesn't even arise.

> so you'd also have to affect bgp/isis/etc and that strikes me as very
> dangerous, more state in the network here is a step in the wrong
> direction.

   Agreed.

   Indeed, faced with this imbalance, and a threat to de-peer, you'd
probably arrange for more congestion on your side.

   This is not quite as silly as it sounds: I've been causing intentional
congestion on incoming port 25 for years -- from IP ranges that send
too much spam. I'd never ask folks to pay me for that "congestion", but
it does give me some wiggle room. (If folks _did_ pay me for it, I'd most
likely reduce the amount of congestion for sources that pay...)

   But the real point is, this is simply a silly proposition: it's far
more likely your peering arrangements won't actually penalize you until
the peers see it as a profit-center and set a settlement rate.

> (plus what does that do to paths and routes and ... yikes no.)

   It's not _really_ that scary... Aggregating the congestion-volume by
CIDR block would give a limited number of routes to that peer that you
can disincent. It probably would even be stable enough to do almost
manually.

   But again, this eventuality is silly-season and shouldn't happen at
all in a one-tenth reasonable universe.

>>> When conex leans toward 'settlement' I tend to look at it askance...
>>> For ISP -> Customer discussions today the 'settlement' already
>>> happens, the bits above (and what I snipped below) seem to be what my
>>> first paragraph outlines: DePeering.
>>
>> We have been through the history of de-peering based on total volume.
>> It took a while, but saner heads _did_ (mostly) prevail.
> 
> we still have these events... about every 9-12 months some large
> provider depeers another.

   (Notice, I didn't ask for a "half-way reasonable" universe... :^(

> It's painful, sometimes it gets reverted, sometimes not. The 'settlement'
> portion I was getting at wasn't really about de-peering as much as
> deciding if/when 2 networks should decide to pay each other for
> 'congestion causing' traffic.

   This is all new territory. I have posted my best guesses in a separate
email.

   The ISPs with the greatest congestion towards their users will see
the greatest benefit from settlements, but they'll find their peers less
than anxious to pay -- absent actual evidence that payments will lead
to significant improvements. This will slow down any actual payments.

   To the extent such settlement payments _actually_ lead to improvements,
the charges become somewhat self-liquidating...

> To me that sounds a whole lot like the settlement system (which takes
> months/years to come to resolution, apparently) which exists in the
> long-distance telephone world. That sort of thing is a non-starter I
> believe. Using the congestion contribution data in peering agreements
> isn't as far fetched, I suppose.

   In that market, there is a government mandate to interconnect, which
leads to lawyers arguing rather than engineers upgrading.

   I see no reason for government mandates for ConEx settlements, nor
even for lawyers. Folks who don't want to pay simply don't pay; and
they get regular Best Effort service. Folks who do pay get priority
for their congestion-marked packets.

   Rather than threaten to de-peer, a sane policy would be to tax
incoming ConEx packets (of their remaining congestion allowance) to
restore balance from any peer that chooses not to pay a settlement.

   But frankly, we are rather a ways from that issue even arising; and
it is, after all, not within the scope of ConEx to specify what to do
if there's a perceived imbalance.

>> (BTW, I don't blame folks for worrying about the possibility of
>> changing peering agreements: it tends to be painful!)
> 
> and take a very long time, so... unless a peer is constantly causing
> problems they tend to not get addressed... 'constantly' means 24/7 for
> months or +year. it's very hard to re-arrange things once you set up
> peering (settlement-free), and the cost to upgrade a 'peer' port is
> likely cheaper than upgrading transit + transit costs would be for the
> same traffic flows reverting to transit paths.

   Exactly!

--
John Leslie <john@jlc.net>

From christopher.morrow@gmail.com  Fri Aug 13 18:06:44 2010
Return-Path: <christopher.morrow@gmail.com>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 5829B3A6A4F for <conex@core3.amsl.com>; Fri, 13 Aug 2010 18:06:44 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -102.413
X-Spam-Level: 
X-Spam-Status: No, score=-102.413 tagged_above=-999 required=5 tests=[AWL=0.186, BAYES_00=-2.599, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id BjEZ7p1E8Z0l for <conex@core3.amsl.com>; Fri, 13 Aug 2010 18:06:43 -0700 (PDT)
Received: from mail-yw0-f44.google.com (mail-yw0-f44.google.com [209.85.213.44]) by core3.amsl.com (Postfix) with ESMTP id 465593A6A49 for <conex@ietf.org>; Fri, 13 Aug 2010 18:06:43 -0700 (PDT)
Received: by ywa8 with SMTP id 8so1380562ywa.31 for <conex@ietf.org>; Fri, 13 Aug 2010 18:07:19 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:sender:received :in-reply-to:references:date:x-google-sender-auth:message-id:subject :from:to:cc:content-type:content-transfer-encoding; bh=LPexf4lTCzC1NXb03u2sD0/Ln48p9J1NFevu/W1b9WM=; b=g0RWPzKfkbWueeYJzg41Ntv4WkGTCLRK5zlRsfL+pGp2ZJGFlarekovMWv4df8Ikgp Kef8hyQ4borcGt0Pt80yoMVtU4BcWXOiK6ILYTwGVDvbEiEG7Sltm/cW0AjHPC5gmxyx D8m6nNxQIcrH/kYqihHfSD8LayaZK5T6c9bX0=
DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type :content-transfer-encoding; b=BOprS9QXN1+Z4cPDnNMC8loqYJzDxyddDcXwN3xnXMqM6m/tVIUnRqe9tkgdwarldF 1XvfW4ktHXZI1AfmCThTRcT8E1iaFF9wl4Q8BdsGizQVk4kPRDIszo3pnTc1RnN/LFFd nq72cw9QP36lBibURRd/uMrm/4fLn9oviAs5I=
MIME-Version: 1.0
Received: by 10.231.35.10 with SMTP id n10mr2317794ibd.161.1281748039669; Fri, 13 Aug 2010 18:07:19 -0700 (PDT)
Sender: christopher.morrow@gmail.com
Received: by 10.231.156.203 with HTTP; Fri, 13 Aug 2010 18:07:19 -0700 (PDT)
In-Reply-To: <78204118-D976-4E7D-ABAD-F2D87A038848@g11.org.uk>
References: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se> <20100812123814.GF16820@verdi> <alpine.DEB.1.10.1008122125080.8562@uplift.swm.pp.se> <20100813135601.GJ16820@verdi> <AANLkTinZdXqJyA878vAamxjzQ2D2wfj9CZs-WsOQ+TvR@mail.gmail.com> <20100813143043.GL16820@verdi> <AANLkTinQEZ4KdJqLsK4H9jW-e8=69+n7B9q0ZCFi+cgr@mail.gmail.com> <78204118-D976-4E7D-ABAD-F2D87A038848@g11.org.uk>
Date: Fri, 13 Aug 2010 21:07:19 -0400
X-Google-Sender-Auth: zdXffH6A6ERF1RK8LJ8cyNM4_7o
Message-ID: <AANLkTin_iUhZDKuJ+HsyMnc=5LzFw-bPHWo9Ne_CRXoK@mail.gmail.com>
From: Christopher Morrow <morrowc.lists@gmail.com>
To: ken carlberg <carlberg@g11.org.uk>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable
Cc: conex@ietf.org
Subject: Re: [conex] Off-Topic: Congestion Settlements
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 14 Aug 2010 01:06:44 -0000

On Fri, Aug 13, 2010 at 4:36 PM, ken carlberg <carlberg@g11.org.uk> wrote:
> On Aug 13, 2010, at 4:19 PM, Christopher Morrow wrote:
>
>>>   If, in fact, an ISP is offered a "peering" arrangement which _limits_
>>> congestion-volume (instead of charging for the imbalance), that ISP
>>> will need to limit the congestion-volume _sent_to_that_peer.
>>>
>>>   This suggests that ConEx "congestion-expected" traffic might travel
>>> a different route than non-ConEx traffic, which I agree is possible.
>>
>> this gets hairy, the current routing protocols don't really have a
>> method to account for this... so you'd also have to affect
>> bgp/isis/etc and that strikes me as very dangerous, more state in the
>> network here is a step in the wrong direction. (plus what does that do
>> to paths and routes and ... yikes no.)
>
> what about the case of more than one equal cost path via ISIS/OSPF,
> wherein the "congestion-expected" traffic travels a different path than
> the one normally selected through a simple hash?  ie, change the tuple
> for another path (or even LSP if we're thinking mpls) at the ingress
> router so that perhaps a minor change in the route may help alleviate
> the congestive condition.

care to explain what part of the 5tuple I use today for hashing would
be used in this decision process? in today's hardware this isn't
feasible, today's routing protocols essentially select a single
destination (which may be ecmp'd, ec here == 'equal cost').
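For illustration, the mechanism being debated here can be sketched roughly as follows (a hypothetical toy, not any vendor's actual hash): ECMP path selection typically reduces to hashing the flow's 5-tuple onto the set of equal-cost next hops, which is deterministic per flow, so steering a subset of traffic onto a different path would mean changing a tuple field or adding per-packet state the hash deliberately avoids.

```python
import hashlib

def ecmp_next_hop(src, dst, proto, sport, dport, paths):
    """Pick one of several equal-cost paths by hashing the 5-tuple.

    A deterministic hash keeps all packets of a flow on one path;
    changing any tuple field (e.g. the source port) may select another
    path, but the router itself keeps no per-flow state.
    """
    key = f"{src}|{dst}|{proto}|{sport}|{dport}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(paths)
    return paths[index]

paths = ["path-A", "path-B", "path-C", "path-D"]
# The same flow always maps to the same path:
chosen = ecmp_next_hop("10.0.0.1", "10.0.0.2", 6, 12345, 80, paths)
```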

If you had the magic of mpls you could decide to put traffic on
different colored LSP's based on some packet marking (tos/dscp bits).
I bet you can't do that fast enough to matter in a network large
enough to matter. You'd still have to maintain this new/large/fast
state in your network/routing protocols, that cost is not
insubstantial.

-chris

From christopher.morrow@gmail.com  Fri Aug 13 18:21:19 2010
Return-Path: <christopher.morrow@gmail.com>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id BC56B3A6A56 for <conex@core3.amsl.com>; Fri, 13 Aug 2010 18:21:17 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -102.437
X-Spam-Level: 
X-Spam-Status: No, score=-102.437 tagged_above=-999 required=5 tests=[AWL=0.163, BAYES_00=-2.599, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id sMw0MLAx738m for <conex@core3.amsl.com>; Fri, 13 Aug 2010 18:21:13 -0700 (PDT)
Received: from mail-yx0-f172.google.com (mail-yx0-f172.google.com [209.85.213.172]) by core3.amsl.com (Postfix) with ESMTP id 77A9E3A6A6E for <conex@ietf.org>; Fri, 13 Aug 2010 18:21:09 -0700 (PDT)
Received: by yxp4 with SMTP id 4so887185yxp.31 for <conex@ietf.org>; Fri, 13 Aug 2010 18:21:46 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:sender:received :in-reply-to:references:date:x-google-sender-auth:message-id:subject :from:to:cc:content-type:content-transfer-encoding; bh=o7QVSOZMCF5OriJgVvfr1q0/LcqH6g3C22C0ndb8u94=; b=HpPxSQKkAOY1fPVRZOhooZ5++1p9a0l3P5XbDuEtdvzZOca8nnuj6OlPDQcqLvhtm4 GKTHcpz7Aldu5/a2JZu2b1Oed9150LRx7ClfnK1zQ/W67d0dDc+Mk9foMW2SGheFJFnt BjTlVdKAR+bKGumB7Cma9QP3amZCJlsJBBK/c=
DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type :content-transfer-encoding; b=kUNE406BiHgaVEYcOIQyYP8OPcXbxtYIhNqFoTcj8x7gJBen2zZK/168W3b0EDZLce EDt3o+dU0XNROkxwiK8neGIrlRW6gg/hjeod8gD/LLDihQkcS5khFFXQ8b+/EgwkVr98 Ap8GNzPS1x+wPO4xg3AScFiGyOTlQbIWb9Aws=
MIME-Version: 1.0
Received: by 10.231.35.10 with SMTP id n10mr2324888ibd.161.1281748905912; Fri, 13 Aug 2010 18:21:45 -0700 (PDT)
Sender: christopher.morrow@gmail.com
Received: by 10.231.156.203 with HTTP; Fri, 13 Aug 2010 18:21:45 -0700 (PDT)
In-Reply-To: <20100813235456.GO16820@verdi>
References: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se> <20100812123814.GF16820@verdi> <alpine.DEB.1.10.1008122125080.8562@uplift.swm.pp.se> <20100813135601.GJ16820@verdi> <AANLkTinZdXqJyA878vAamxjzQ2D2wfj9CZs-WsOQ+TvR@mail.gmail.com> <20100813143043.GL16820@verdi> <AANLkTinQEZ4KdJqLsK4H9jW-e8=69+n7B9q0ZCFi+cgr@mail.gmail.com> <20100813235456.GO16820@verdi>
Date: Fri, 13 Aug 2010 21:21:45 -0400
X-Google-Sender-Auth: P6l6j9DSclCwtYp_E6YjIjrk0EY
Message-ID: <AANLkTin9N0if6N7LgaFwLbsZ3PggpqD-irOpiOwQJDF1@mail.gmail.com>
From: Christopher Morrow <morrowc.lists@gmail.com>
To: John Leslie <john@jlc.net>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable
Cc: conex@ietf.org
Subject: Re: [conex] Off-Topic: Congestion Settlements
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 14 Aug 2010 01:21:20 -0000

On Fri, Aug 13, 2010 at 7:54 PM, John Leslie <john@jlc.net> wrote:
> Christopher Morrow <morrowc.lists@gmail.com> wrote:
>> On Fri, Aug 13, 2010 at 10:30 AM, John Leslie <john@jlc.net> wrote:
>
>    This is not quite as silly as it sounds: I've been causing intentional
> congestion on incoming port 25 for years -- from IP ranges that send

at the end system level folks are free to do as they please; if they
want to enable 'nice mode' they surely can. Routers in the core of the
network are a completely different story, as are BRAS type devices
(your first L3 hop after your dsl-modem).

>    But the real point is, this is simply a silly proposition: it's far
> more likely your peering arrangements won't actually penalize you until
> the peers see it as a profit-center and set a settlement rate.

agreed, until they feel they can turn you from a 'peer' to a 'customer'.

>
>> (plus what does that do to paths and routes and ... yikes no.)
>
>    It's not _really_ that scary... Aggregating the congestion-volume by
> CIDR block would give a limited number of routes to that peer that you
> can disincent. It probably would even be stable enough to do almost
> manually.

on a core router at a large ISP today (a 'tier-1' isp) there are
upwards of 4 million paths; maintaining that state isn't trivial.
Adding to that a potentially quickly changing (as congestion
arrives/dissipates) new set of state for even the advertised netblocks
which hold 'conex sources' (or destinations, the core router doesn't
actually know which side) is really not going to be good.

I'd actually say that you don't want to use the covering prefix here
either, you really want to use the actual contributors, my home dsl
traffic isn't causing the congestion, the akamai server in the same
/20 as me is....
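To make the aggregation point above concrete, here is a toy sketch (hypothetical data and prefix length) of rolling per-host congestion-volume up to a covering prefix; it shows exactly the innocent-neighbor problem: a quiet host and a busy server in the same /20 become indistinguishable once aggregated.

```python
import ipaddress
from collections import defaultdict

def aggregate_by_prefix(congestion_bytes, prefix_len=20):
    """Sum per-host congestion-volume up to a covering prefix.

    congestion_bytes: {ip_string: bytes of congestion-marked traffic}
    Returns {covering_network: total_volume}.
    """
    totals = defaultdict(int)
    for ip, volume in congestion_bytes.items():
        # strict=False masks host bits to get the covering network
        net = ipaddress.ip_network(f"{ip}/{prefix_len}", strict=False)
        totals[net] += volume
    return dict(totals)

per_host = {
    "192.0.2.10": 100,        # home DSL user: tiny contribution
    "192.0.2.200": 900_000,   # busy server in the same /20
}
totals = aggregate_by_prefix(per_host)
# Both hosts collapse into one /20 bucket, so any disincentive applied
# to the prefix hits the DSL user along with the server.
```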

>    But again, this eventuality is silly-season and shouldn't happen at
> all in a one-tenth reasonable universe.

agreed.

>> It's painful, sometimes it gets reverted, sometimes not. The 'settlement'
>> portion I was getting at wasn't really about de-peering as much as
>> deciding if/when 2 networks should decide to pay each other for
>> 'congestion causing' traffic.
>
>    This is all new territory. I have posted my best guesses in a separate
> email.
>
>    The ISPs with the greatest congestion towards their users will see
> the greatest benefit from settlements, but they'll find their peers less

at some point 'toward the user' means 'last mile', and whatever physics
(and customer money) constraints there are on that last-mile will
rule. Customers may choose to enable 'nice mode' (pay attention to
conex/re-ecn data) if they feel there is benefit to THEM to do that.
I'd say that a default 'on' for this at the CPE/customer-OS sounds
great; people will get a better internet experience with the pipe they
have.

> than anxious to pay -- absent actual evidence that payments will lead
> to significant improvements. This will slow down any actual payments.

in the last-mile case, the ISP really can't do anything here, there
are constraints beyond what money will solve, and their customers are
seemingly happy to pay for the pipe-size they have. no incentive
exists here.

>
>> To me that sounds a whole lot like the settlement system (which takes
>> months/years to come to resolution, apparently) which exists in the
>> long-distance telephone world. That sort of thing is a non-starter I
>> believe. Using the congestion contribution data in peering agreements
>> isn't as far fetched, I suppose.
>
>    In that market, there is a government mandate to interconnect, which
> leads to lawyers arguing rather than engineers upgrading.
>
>    I see no reason for government mandates for ConEx settlements, nor
> even for lawyers. Folks who don't want to pay simply don't pay; and
> they get regular Best Effort service. Folks who do pay get priority
> for their congestion-marked packets.

money == contracts == lawyers, there won't be payments made across the
local bar, there will be 'bills' sent. Or in the case of 'peer' ->
'customer' shifts... lots of fun times, then payments along with
contracts and the usual customer things.

>    But frankly, we are rather a ways from that issue even arising; and
> it is, after all, not within the scope of ConEx to specify what to do
> if there's a perceived imbalance.

agreed.

again, thanks.
-chris

From prvs=484505B9D5=Kevin.Mason@telecom.co.nz  Mon Aug 16 22:34:10 2010
Return-Path: <prvs=484505B9D5=Kevin.Mason@telecom.co.nz>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 576FA3A6912 for <conex@core3.amsl.com>; Mon, 16 Aug 2010 22:34:10 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.392
X-Spam-Level: 
X-Spam-Status: No, score=-2.392 tagged_above=-999 required=5 tests=[AWL=-1.207, BAYES_40=-0.185, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id nEKH8sbRhGPq for <conex@core3.amsl.com>; Mon, 16 Aug 2010 22:34:08 -0700 (PDT)
Received: from mgate2.telecom.co.nz (envoy-out.telecom.co.nz [146.171.15.100]) by core3.amsl.com (Postfix) with ESMTP id 5BC883A6924 for <conex@ietf.org>; Mon, 16 Aug 2010 22:34:07 -0700 (PDT)
Received: from mgate4.telecom.co.nz (unknown [146.171.1.21]) by mgate2.telecom.co.nz (Tumbleweed MailGate 3.7.1) with ESMTP id 2199D102B59F for <conex@ietf.org>; Tue, 17 Aug 2010 17:34:38 +1200 (NZST)
X-WSS-ID: 0L7A7HO-04-18A-02
X-M-MSG: 
Received: from hp2847.telecom.tcnz.net (hp2847.telecom.tcnz.net [146.171.228.249]) (using TLSv1 with cipher RC4-MD5 (128/128 bits)) (No client certificate requested) by mgate4.telecom.co.nz (Postfix) with ESMTP id 1F1D633CA122 for <conex@ietf.org>; Tue, 17 Aug 2010 17:34:35 +1200 (NZST)
Received: from hp3120.telecom.tcnz.net (146.171.212.205) by hp2847.telecom.tcnz.net (146.171.228.249) with Microsoft SMTP Server (TLS) id 8.2.234.1; Tue, 17 Aug 2010 17:34:36 +1200
Received: from WNEXMBX01.telecom.tcnz.net ([146.171.212.201]) by hp3120.telecom.tcnz.net ([146.171.212.205]) with mapi; Tue, 17 Aug 2010 17:34:36 +1200
From: Kevin Mason <Kevin.Mason@telecom.co.nz>
To: "conex@ietf.org" <conex@ietf.org>
Date: Tue, 17 Aug 2010 17:34:35 +1200
Thread-Topic: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
Thread-Index: Acs6u1E/9V3Zkbv3SxC1bLGI/IQHzQDBJ+Sw
Message-ID: <563C162F43D1B14E9FD2BC0A776C1E9127EF2857D3@WNEXMBX01.telecom.tcnz.net>
References: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se> <20100812123814.GF16820@verdi>	<001b01cb3a2b$80f47420$82dd5c60$@com> <alpine.DEB.1.10.1008130933210.8562@uplift.swm.pp.se>
In-Reply-To: <alpine.DEB.1.10.1008130933210.8562@uplift.swm.pp.se>
Accept-Language: en-US, en-NZ
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US, en-NZ
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Subject: [conex]  comments on draft-moncaster-conex-concepts-uses-01.txt
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 17 Aug 2010 05:34:10 -0000

A comment on the draft.

On accounting approaches to using congestion information:

I would challenge the view that one service provider "causes" congestion
on another ISP's network. Data is only passed between sender and receiver,
so it is the ISP's customer that is requesting the data from customers of
the other provider, not the provider "sending" it.

So if the "sending" provider is not causing the congestion (because it is
the receiving provider's customer that requested it), then arguing the
"sending" provider should pay for any congestion that resulted might be
difficult. I can see endless legal arguments as to why one or the other
party is culpable and therefore who should endure any commercial
consequences.

On network uses of the information, I think there is a general concept
that is not being captured well.

In ISP networks there are, very simply, two parts. First there is the
connectivity between each account holder's demarcation (UNI) and an IP
edge device (BRAS/BNG in Broadband Forum speak). The IP edge device
typically facilitates AAA functions as well as user-based policy
enforcement, and by necessity is fully aware of which flows belong to
which UNIs (e.g. because they are all on a single authenticated VLAN or
PPP tunnel).

Beyond that point, between the IP edge and any peering point, the network
does not maintain any specific awareness of individual end points. Routers
could theoretically maintain information about each source/destination
pair AND consult some database to relate that to an end user profile, but
this is not very scalable.

So in the core ISP network, if a forwarding next hop is approaching
overload, then the egress config of the router must deal with the
aggregate flow and act on information that is in the packet header alone.
Maintaining knowledge of which "users" have "caused" the most congestion
in recent times is too hard.

So it would appear that a two-stage queue management regime might be
desirable: at a lower queue depth, packets begin to be congestion marked
if ECN capable, and maybe discarded if not; at a slightly higher queue
depth, packets that are ECN capable but carry a high positive congestion
value get discarded, on the basis that these have a lower chance of
reaching their destination than packets not declaring an expectation of
congestion on the rest of the path. I don't see a "border monitor" as
described in the draft being very practical or useful in this part of the
network.
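A minimal sketch of the two-stage regime described above (thresholds, field names, and the non-ECN drop probability are all hypothetical, chosen only to illustrate the idea):

```python
import random

# Hypothetical thresholds in packets; real values would be tuned per link.
MARK_THRESHOLD = 50   # stage 1: start marking/dropping here
DROP_THRESHOLD = 80   # stage 2: drop high-congestion-value ECN packets here
HIGH_CONGESTION = 3   # assumed "high positive congestion value"

def enqueue_decision(queue_depth, ecn_capable, congestion_value):
    """Return 'forward', 'mark', or 'drop' for an arriving packet.

    Stage 1: past MARK_THRESHOLD, ECN-capable packets are marked and
    non-ECN packets may be dropped.
    Stage 2: past DROP_THRESHOLD, even ECN-capable packets are dropped
    if they already declare a high expected-congestion value.
    """
    if queue_depth < MARK_THRESHOLD:
        return "forward"
    if (queue_depth >= DROP_THRESHOLD and ecn_capable
            and congestion_value > HIGH_CONGESTION):
        return "drop"   # stage 2: penalize high declared congestion
    if ecn_capable:
        return "mark"   # stage 1: signal congestion via ECN
    # non-ECN traffic: "maybe discarded" -- here a coin flip for illustration
    return "drop" if random.random() < 0.5 else "forward"
```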

When packets arrive at the BRAS destined for the user, user-based policy
could be applied. One such policy might be to discard all packets with a
congestion deficit of more than x; this is the safety net against
dishonesty by the sender. An additional policy might be to discard packets
that have so far experienced congestion above a threshold (which may
differ between user profiles) AND that are destined to a user with a
recent history of highly congestion-marked packets. If previous congestion
marks have resulted in the user backing off, then this policy would not be
invoked, so it would only apply to users that are persistently
contributing to congestion somewhere on the path traversed (on the same
provider's or any preceding provider's network).
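The two edge policies just described could be sketched like this (all names, thresholds, and the shape of the per-user accounting state are assumptions for illustration, not anything specified in the draft):

```python
# Hypothetical per-user BRAS policy; thresholds are illustrative only.
MAX_DEFICIT = 5           # the "x" above: tolerated congestion deficit
CONGESTION_THRESHOLD = 3  # per-packet "high" experienced-congestion value
HISTORY_LIMIT = 100       # recent marked volume before a user counts as
                          # a persistent contributor

def bras_policy(experienced, declared, user_recent_marked):
    """Decide 'deliver' or 'drop' for a packet arriving at the IP edge.

    Policy 1: drop if the sender's declared congestion falls short of
    the congestion actually experienced by more than MAX_DEFICIT
    (safety net against dishonest senders).
    Policy 2: drop high-congestion packets headed to a user with a
    recent history of heavy congestion marking.
    """
    if experienced - declared > MAX_DEFICIT:
        return "drop"
    if experienced > CONGESTION_THRESHOLD and user_recent_marked > HISTORY_LIMIT:
        return "drop"
    return "deliver"
```

If previous marks caused the user to back off, `user_recent_marked` would fall below the limit and policy 2 would stop firing, matching the behavior described above.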

These policy actions could constitute the "edge monitor" functions
referred to in the draft, but would actually be part of the policy
functions of the edge device itself, not any independent function.

Others may have different views of how the revealed congestion information
might be used, but I believe it is useful to at least consider the two
parts of an ISP network when discussing possible uses for the information.

Cheers
Kevin Mason

From stuart.venters@adtran.com  Tue Aug 17 07:15:12 2010
Return-Path: <stuart.venters@adtran.com>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 422733A6979 for <conex@core3.amsl.com>; Tue, 17 Aug 2010 07:15:12 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -5.949
X-Spam-Level: 
X-Spam-Status: No, score=-5.949 tagged_above=-999 required=5 tests=[AWL=0.650,  BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id MbJcHsIlEzon for <conex@core3.amsl.com>; Tue, 17 Aug 2010 07:15:10 -0700 (PDT)
Received: from p02c12o144.mxlogic.net (p02c12o144.mxlogic.net [208.65.145.77]) by core3.amsl.com (Postfix) with ESMTP id E2DF03A67E6 for <conex@ietf.org>; Tue, 17 Aug 2010 07:15:09 -0700 (PDT)
Received: from unknown [208.61.208.9] by p02c12o144.mxlogic.net(mxl_mta-6.7.0-0) with SMTP id 1999a6c4.0.207722.00-266.414742.p02c12o144.mxlogic.net (envelope-from <stuart.venters@adtran.com>);  Tue, 17 Aug 2010 08:15:45 -0600 (MDT)
X-MXL-Hash: 4c6a99913a49e97f-22db266f8c6a5fcc92575bec7caaa114884606a9
Received: from EXV1.corp.adtran.com ([172.22.48.215]) by corp-exfr1.corp.adtran.com with Microsoft SMTPSVC(6.0.3790.3959);  Tue, 17 Aug 2010 09:15:41 -0500
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Date: Tue, 17 Aug 2010 09:15:40 -0500
Message-ID: <8F242B230AD6474C8E7815DE0B4982D7179FB781@EXV1.corp.adtran.com>
In-Reply-To: <563C162F43D1B14E9FD2BC0A776C1E9127EF2857D3@WNEXMBX01.telecom.tcnz.net>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
Thread-Index: Acs6u1E/9V3Zkbv3SxC1bLGI/IQHzQDBJ+SwABRz8+A=
From: "STUART VENTERS" <stuart.venters@adtran.com>
To: <conex@ietf.org>
X-OriginalArrivalTime: 17 Aug 2010 14:15:41.0806 (UTC) FILETIME=[ACB6E0E0:01CB3E16]
X-Spam: [F=0.2000000000; CM=0.500; S=0.200(2010073001)]
X-MAIL-FROM: <stuart.venters@adtran.com>
X-SOURCE-IP: [208.61.208.9]
X-AnalysisOut: [v=1.0 c=1 a=S2j2TF0e7AUA:10 a=VphdPIyG4kEA:10 a=8nJEP1OIZ-]
X-AnalysisOut: [IA:10 a=jiDxcTmC9w82J9X6Erp5uA==:17 a=ma-OzvheBIBrETSchNEA]
X-AnalysisOut: [:9 a=vAwiB0_HvHl3AXR0AgoA:7 a=arX4vuwTbWf_bYux2URlCdVcg8IA]
X-AnalysisOut: [:4 a=wPNLvfGTeEIA:10]
Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 17 Aug 2010 14:15:12 -0000

Kevin Said:

>On accounting approaches to using congestion information
>
>I would challenge the view that one service provider "causes" congestion
>on another ISP's network. Data is only passed between sender and receiver,
>so it is the ISP's customer that is requesting the data from customers of
>the other provider, not the provider "sending" it.

>So if the "sending" provider is not causing the congestion (because it is
>the receiving provider's customer that requested it), then arguing the
>"sending" provider should pay for any congestion that resulted might be
>difficult. I can see endless legal arguments as to why one or the other
>party is culpable and therefore who should endure any commercial
>consequences.


Two cents worth:

On a packet by packet basis, it seems like congestion is directly caused
by the customer sending the packets.  On a longer term, perhaps network
engineering and marketing have something to do with it as well.
Exposing congestion could provide good, constructive feedback to the
customers causing it.  I wonder if the sign of the feedback for the
service providers could be in the opposite direction.

With regard to a more indirect cause of congestion, the idea of the
requester versus the sender being responsible for congestion is interesting.
If a customer requests a web page, it doesn't seem to me that he also
desires all the ads that come with it.  That choice seems to be coming
from the web server.


From christopher.morrow@gmail.com  Tue Aug 17 11:17:57 2010
In-Reply-To: <201008061551.o76FpKPZ010840@bagheera.jungle.bt.co.uk>
References: <201008031942.o73JgGSW018260@bagheera.jungle.bt.co.uk> <AANLkTikZFvkOQjNLuasif+vAjeJSac1E-BqR6pptn=7p@mail.gmail.com> <201008061551.o76FpKPZ010840@bagheera.jungle.bt.co.uk>
Date: Tue, 17 Aug 2010 14:18:08 -0400
Message-ID: <AANLkTi=ALJa0BpqnnBVH9XSUd61mYFOiQSGo_AzQXEE9@mail.gmail.com>
From: Christopher Morrow <morrowc.lists@gmail.com>
To: Bob Briscoe <rbriscoe@jungle.bt.co.uk>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable
Cc: conex@ietf.org
Subject: Re: [conex] ConEx & DDoS

it would help (me at least) in this discussion if we think of a
notional network like:

           user
             |
           cpe
            |
         bras
        /     \
    core1    core2 (inside a single ASN)
      \        /
       peer/transit edge (exit/entrance to another ASN)
         |
       PE
         |
        CE
         |
       server

(apologies for bad ascii art)

On Fri, Aug 6, 2010 at 11:51 AM, Bob Briscoe <rbriscoe@jungle.bt.co.uk> wrote:
> Chris,
>
> At 04:37 04/08/2010, Christopher Morrow wrote:
>>
>> (I think I sub'd to the list with this address...)
>>
>> On Tue, Aug 3, 2010 at 3:42 PM, Bob Briscoe <rbriscoe@jungle.bt.co.uk>
>> wrote:
>> > Chris,
>> >
>> > During the ConEx w-g session last Tuesday in Maastricht you suggested we
>> > should not include DDoS mitigation as a use-case for ConEx. I was
>> > willing to
>> > agree as we don't need to court controversy.
>>
>> yup, no use rat-holing if it's not central to the discussion.

oops, we rat-holed... this sort of proves my point about ddos
discussions wrt conex though.
I would stick with (for conex): "Expose information about congestion
on the network path, permit better/more-intelligent discard profiles
to be used along the network paths."

Where 'better/more-intelligent' is really: "Drop traffic that's less
important to the end-users."

Lee/Rich I think explained this (to me) as: "make sure my Internet
gaming works perfectly, it is OK to slow down a large video file
transfer." I believe plus-net in the UK does some of this today? (I
may have the wrong provider... but it's a competitor of BT's I
believe)

>> > However, the co-authors of draft-moncaster-conex-concepts-uses have
>> > asked me to float the idea that, although we won't include DDoS as a
>> > use-case in its own right, we should mention it as an extreme case of
>> > two other use-cases. Let me explain...
>> >
>> > The use-cases we plan to include are:
>> > /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
>> > 5. Use cases (Highlighting that this is neither an exhaustive list nor a
>> > prescriptive list...)
>> >  5.1  ConEx for better traffic control
>> >   a. Targeting the right traffic
>> >   b. Encouraging (and eventually enforcing) better CC
>> >  5.2  ConEx for better traffic monitoring
>> >   a. For compliance with SLAs
>> >   b. For assessing performance of your provider
>> >   c. Monitoring congestion hotspots for targeted upgrades
>> >   d. Monitoring congestion anomalies (equipment problems or helping
>> >      identify DDoS)
>> > /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
>> >
>> > We could include mentions of DDoS along the following lines (no need to
>> > word-smith - I'm just trying to outline the concepts).
>> >
>> > 5.1b Encouraging (and eventually enforcing) better CC
>> > "  ConEx information can be used as a control metric for making
>> >   traffic control decisions, such as deciding which traffic to
>> >   prioritise or to identify and block sources of persistent and
>> >   damaging congestion.
>>
>> this, I think, falls into the category of things that Lee from TWC is
>> interested in (and maybe his buddies at comcast/cox as well),
>> understanding what traffic can suffer more loss without causing more
>> end-user pain, and shifting that traffic to said profile.
>
> It's no coincidence that Rich Woundy is a co-author.
>
> Sure, Comcast have a current solution, but Rich is the first to say that it
> gives no encouragement (and actually punishes) approaches like LEDBAT,
> because it attributes blame for high utilisation by volume rather than
> congestion-volume. There's a list of other things Rich presented in the
> ConEx BoF that makes ConEx worth doing beyond what Comcast currently do.
> <http://www.ietf.org/proceedings/76/slides/conex-3.pdf>
>
> Whatever, I only included this text as a lead-up to the DDoS text later...

better understanding of what contributes to congestion in 'your
network', and what traffic your customers don't mind missing a few
packets of (to the benefit of their real-time needs), is good.

>
>> >   Simple ingress policer mechanisms, such as those described in
>> >   [Policing-freedom] and [re-ecn-motive], could control the
>> >   overall volume of congestion entering a network from each user.
>> >   Such a policer could lead to a number of beneficial outcomes:
>>
>> these seem to be of the flavor of things Comcast 'powerboost' does, at
>> the CPE actually (if I understand their brand of magic correctly).
>
> No, powerboost is very different.
>
> [BTW, I and others did a start-up called Qariba back in 2001 which built a
> powerboost-like feature for cable networks, with a network API so it could
> be initiated from a CDN - we called it Broadband-800 by analogy to 800 phone
> calls, because the server end was effectively temporarily buying more access
> capacity on the end-user's behalf. We had a lot of interest in the US cable
> industry at the time, but we got hit by the bubble bursting.]
>
> Powerboost gives you access to extra capacity irrespective of whether you
> will contribute more to congestion. ConEx is much simpler and more generic.

so... 'PowerBoost' provides a higher QoS/traffic-rate for a period of
time; it doesn't account for what damage that change does to other
traffic on my cable link, the link to the head-end, or the other links
in the network... it can be triggered at the CPE or the BRAS (I'd bet
that anywhere else in the Comcast network, to take a specific example,
probably doesn't matter).

I think that's the same thing I said earlier.

>> >   o Heavy users might be encouraged to shift their usage away from
>> >     peak times in order to be able to transfer more data without
>> >     triggering a response from the policer;
>> >   o Users might be encouraged to use software that shifts its usage away
>> >     from congestion peaks (shifting in time whether by hours or seconds
>> >     [LEDBAT] or shifting to less congested routes [MPTCP]), again to
>> >     transfer more data without triggering the policer;
>> >   o Developers of operating systems might be encouraged to supply such
>> >     software as the default;
>> >   o If certain applications did not use a congestion-responsive transport
>> >     and caused high levels of congestion-bit-rate, the policer would
>> >     eventually force the bit-rate to reduce in response to congestion.
>>
>> Today this (the last bullet) is the equivalent of classifying a type
>> of traffic (port/protocol/src/dst as classification hints) into a
>> 'high loss' bucket/queue and just dropping faster/more of it. If the
>> traffic is TCP you should get drops, backoff, and sawtooth behaviour
>> until you reach steady-state (maybe?) slower transfers.
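A rough sketch of that classify-then-drop-harder idea (class names, the port-based classification hint, and the drop probabilities are all invented purely for illustration - real routers would do this with per-queue WRED profiles):

```python
import random

# Drop probability per class: the 'high-loss' bucket is dropped far more
# aggressively than the default class. Numbers are made up for illustration.
DROP_PROB = {"default": 0.001, "high-loss": 0.05}

def classify(pkt):
    # Hypothetical classification hint: treat port 6881 as standing in for
    # a bulk-transfer protocol we want pushed into the high-loss bucket.
    return "high-loss" if pkt.get("dst_port") == 6881 else "default"

def forward(pkt, rng=random):
    """Return True if the packet is forwarded, False if dropped."""
    return rng.random() >= DROP_PROB[classify(pkt)]

random.seed(1)
bulk = [{"dst_port": 6881} for _ in range(10_000)]
kept = sum(forward(p) for p in bulk)
# Roughly 5% of the high-loss bucket is dropped, so congestion-responsive
# (TCP) sources in that bucket see more loss and back off sooner.
```

The point is just the mechanism Chris describes: sources in the high-loss class experience more loss, and if they run a congestion-responsive transport they slow down without the network needing any per-flow state.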
>>
>> >   It is believed that a system of ConEx policers could be built that
>> >   can verify the integrity of ConEx information and remove traffic
>> >   that does not comply with the protocol [re-ecn-motive]. If such
>> >   robustness against cheating is indeed possible, a ConEx policer would
>> >   mitigate DDoS flooding attacks, at least to some extent, merely as a
>> >   function of its ability to enforce a response to persistent and
>> >   excessive congestion.
>>
>> So, there are surely cases of 'ddos' (or DoS) that include loud
>> speakers... There are also many instances of DDoS that are many
>> hundreds of thousands of (or millions) of very quiet voices that in
>> total cause the DoS/DDoS effect.
>>
>> If you look at a system of marking of congestion information,
>> depending upon where that marking happens, and on the traffic in
>> question, there's no guarantee at all that the sources will be
>> squelched.
>
> In the following I'm going to talk about re-ECN, rather than ConEx, because
> we haven't defined ConEx yet...
>
>
>> For example, imagine a DDoS of 1 RST pkt (could also do this with 1
>> icmp-error-type message) per second from 1 million hosts across the
>> whole of the network (a 'botnet' for instance). There will never be
>> any packet sent back to the originators,
>
> (BTW, re-ECN doesn't need any packet sent back to the originators - you
> might have misunderstood the design.)
>
>> the rate will be low enough
>> that unless there is coincident traffic to the victim from these hosts
>> (in the same protocol/port profile probably) no signal will ever be
>> seen that leads to squelching of traffic.
>
> Yes, of course we've thought of this sort of attack. Even if we hadn't
> thought of this, since 2005 the security/DoS community have been thinking up
> attacks against re-ECN (pre-ConEx), so it would have been hard to miss such
> an obvious one.
>
> If each bot was behind a 100Mb/s link, and everyday congestion levels were
> typically 0.2%, a reasonable config of basic re-ECN policer (I can give

where is this congestion level? on the 100mbps link? or somewhere in
the local ASN? or external to that ASN?
Looking at the ascii-network-art at the top:

user -> cpe?
cpe -> bras?
bras -> core?
core -> peering-edge?
other-net -> other-net ? (ran out of space/time on the art, call this
'middle of the Internet')
PE -> CE ?
CE -> server?

where the congestion is will determine what sort of measures can be
applied today. In the future though if the end-stations know there is
congestion they could choose to modify their behavior to better
utilize the network...

> typical numbers if you want) would allow a congestion-bit-rate of 10Mb/s of
> *marked* packets for a minute, then 30kb/s of sustained marked traffic after
> that.
>
> If your attack were a flooding attack with MTU-size data packets (12,000b),
> rather than an RST attack (which can be detected and ignored), each bot

the 'and ignored' part is a little misleading: 1 RST/second at
the source isn't noticeable, but 1M RST/second at the victim is another
story. There will be effects on the network at the victim end, so
'ignore' here is an oversimplification. (don't think it's worthwhile
rat-holing here)

> would have to congestion mark all the packets to have any flooding effect.
> If each bot could draw congestion from its policer allowance at 30kb/s (see
> above), a basic re-ECN policer would allow it to sustain an attack at 1 pkt
> every 0.4sec - a little more than twice as fast as your attack.
>
> Agreeing on the 'largest' botnet ever seen isn't easy, but the Mariposa
> botnet dismantled earlier this year involved ~12M separate IP addresses.
> No-one can know whether that implied 12M separate machines, but I doubt it -
> source address spoofing could have been used. This is relevant because a

the numbers for mariposa I think were from C&C check-ins, so these
required 3-way handshakes and weren't spoofed. I believe there's some
rotation in the address usage (dhcp-like effects) but in the end 1m or
12m isn't really important. (also not important to rat-hole here)

> re-ECN policer would sit at the physical access - nothing in re-ECN depends
> on valid source addresses. If each machine was spoofing 24 other addresses
> (a total guess), that's about 500,000 real machines.
>
> Let's assume it gets much harder to marshall a larger army than the one said
> to be the largest yet. Then, a basic re-ECN deployment would contain an
> attack to about 30kb/s x 500,000 = 15Gb/s (which is similar to your scenario
> of 12kb/s x 1M = 12Gb/s).
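fwiw, the back-of-envelope numbers above are easy to sanity-check; restating only the values quoted in this thread:

```python
# Sanity-check of the back-of-envelope numbers in this thread.
# All rates are in bits per second; packet size is in bits.
SUSTAINED_ALLOWANCE = 30_000       # 30kb/s of marked traffic per bot
MTU_BITS = 12_000                  # "MTU size data packets (12,000b)"
ACCESS_RATE = 100_000_000          # each bot behind a 100Mb/s link

# One MTU-size marked packet every 0.4s per bot:
interval = MTU_BITS / SUSTAINED_ALLOWANCE        # seconds per packet

# 500,000 real machines each sustaining 30kb/s of marked traffic:
aggregate_bob = 500_000 * SUSTAINED_ALLOWANCE    # aggregate attack rate

# Chris's scenario: 1M bots at 12kb/s each:
aggregate_chris = 1_000_000 * 12_000

# "about 3,000 times bigger": ratio of peak access rate to the
# sustained congestion allowance (100Mb/s / 30kb/s ~ 3,333).
ratio = ACCESS_RATE / SUSTAINED_ALLOWANCE
```

interval comes out at 0.4 seconds, the two aggregates at 15Gb/s and 12Gb/s, and the ratio at roughly 3,300, matching the figures quoted above.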

it's getting easier to marshal more hosts, not harder; again though,
not important to rathole.

> In summary, just a basic re-ECN policer that isn't even looking for DoS can

where is the policer deployed?

o if at the source-host, I still say it's hard to tell one 1pps source
from another; if there is no congestion between src and BRAS there's no
reason to mark anything there.

o if at the destination/victim, then policing traffic to the victim
can be done if the traffic is significantly different from normal
traffic. Effects on legitimate traffic will be hard to avoid if the
attack traffic looks enough like real/legitimate traffic.

I don't think this is really even important enough to rathole on...

> raise the bar to require a botnet about 3,000 times bigger than without
> re-ECN [because that's the ratio of the peak traffic each site can send
> (100Mb/s) divided by the averaged rate of losses in normal traffic
> (30kb/s)].
>
> Yes, this isn't particularly impressive. But it's not meant to be. This is
> just what you get without even trying to deal with DDoS specifically.
>
> i) The most obvious thing to add to a basic re-ECN policer would be a simple
> anomaly detector triggered by packets to one destination prefix with >10% of
> them marked for expected congestion. That could trigger a much more
> stringent limit just for that destination prefix.

so, more state on the router? or is this another inline device doing
the work? Factor in the opex/capex cost to implement/maintain this...
At Verizon, studies/engineering work showed ~500k/BRAS would be required
to implement in-line 'dpi' systems on 1G links; today those links are
likely 10G and I don't think the cost has come down. At 500k/BRAS I
really should just turn up another 20G of capacity to the BRAS.
Downstream from the BRAS I am limited by physics/last-mile issues, so
we're back to the top of the thread: "If you can mark/know what
traffic is more relevant to real-time ops at the customer end then you
can make a better decision about what packets to drop"

>
> ii) If the victim network deployed ConEx monitors around its border, attack
> traffic would really stand out obviously because nearly all attack packets
> would be marked for rest-of-path congestion. These monitors could then
> trigger similar tight limits on congestion-bit-rate to that destination.

this is costly, very costly, see above. It's not always obvious which
traffic is contributing to the actual problem; the traffic most likely
won't be marked until I mark it at ingress (at the peering edge),
which means I need to know what to mark - something the victim
host/customer-network will have to work out, or have me help them work
out.

> iii) I've assumed a homogeneous botnet. In practice many would be on slower
> access links, and many would be within larger sites (e.g. campus nets)
> sharing an aggregated congestion allowance at the access from the campus to
> the Internet. I think my rough calculation is closer to 'worst case' than
> 'typical'.

perhaps, not worth rat-holing though.

>
>> The victim still suffers
>> ~1mpps of traffic, and actually conex/CC just makes the problem far
>> worse for the victim as all of his real-user traffic will be marked
>> (over time) and thus squelched out while the DDoS continues to flow
>> unabated.
>
> Nope. Importantly, normal /unsustained/ traffic to that destination would
> take a while (1 minute in my example) to hit the tighter congestion policer
> limits. So regular usage would be reasonably unaffected. Whereas the bots
> have to sustain the attack to be effective.

botnets sustain attacks, it's what they do (along with sending spam and
other things not relevant to this discussion). Having sat through
many multi-gigabit-per-second attacks over the last many years...
sustaining a dos/ddos isn't really a problem for the attacker. User
traffic is always affected in these cases if the traffic from the
attack looks like the normal customer traffic. If there's a method to
differentiate attack from customer traffic (RST packet, ICMP packet)
the upstream can just drop the odd traffic. They don't need anything
except the tools they have today to do this, and maintenance of more
state isn't a help.

Unless the victim can signal the upstream that 'traffic of this type'
or 'packets fitting this profile' should be marked for discard/drop
before other packets ... there's no real win. This, though, does
resemble the start of the thread: "Tell me what traffic is important
to your real-time ops, I'll apply a drop profile to all other traffic
when congestion occurs."

>
>> >
>> >   It would be foolhardy to claim that it will be possible to make ConEx
>> >   invulnerable to all cheats and attacks, even though it has been hard to
>> >   attack it so far. Nonetheless, ConEx would still be useful even if a
>> >   specific deployment of ConEx policers contained vulnerabilities. ConEx
>> >   could still prevent congestion collapse due to careless omission of
>> >   congestion control, or due to release of software containing an
>> >   accidental congestion control bug.
>>
>> I think that I nullified the above paragraph actually... except for the
>> software bug case, though I'd argue that in the case of the UW NTP
>> server incident ConEx/CC wouldn't have helped there either, as the
>> broken software would still have been broken and the real users of the
>> system would have just been CC'd out of existence.
>
> Nope. ConEx congestion policers would be run by the network operator and
> effectively take over congestion control if hosts fail to do it for a
> sustained period.

where are these run? at the BRAS? at the Peering-Edge? on the CPE?

> Yes, you could get bugs in ConEx policers. But it's unlikely (though not
> impossible) that would happen at the same time as a widespread bug in hosts.
> Safety in diversity.
>
> Yes, the ConEx marking process on the hosts might be bugged. But, as with
> DoS protection, if a host or router is being flooded and has to drop stuff,
> it is easy to preferentially drop unmarked ConEx packets (or non-ConEx
> packets) first.

sure.

> Agreed, this takes a leap-of-faith to believe a network might deploy all
> this, but it only has to deploy its own protections for itself, which is a
> good deployment property.

where are the protections deployed? cost and maintenance will be
important to the deployment path...

>> > "
>> >
>> > 5.2d: Monitoring congestion anomalies
>> > "  One of the most useful things ConEx provides is the ability to
>> >   monitor the amount of congestion entering a network. Thus ConEx
>> >   would add congestion to the information used by existing anomaly
>> >   detection systems, greatly improving their ability to discriminate
>> >   between pathological and benign anomalies. Such congestion-carrying
>> >   anomalies might be due to accidental misconfigurations in another
>> >   network, or deliberate malicious attacks.
>
> I must also add that re-ECN is only handling the flooding aspect of an
> attack, not the /information/ in the packets (the RST flag in your example).
> Of course a DDoS protection system needs to take this information into
> account.
>
> What 5.2d says is that all ConEx aims to do is add rest-of-path congestion
> to that information, which is a powerful addition to the network's
> visibility. Particularly because info in the payload need not be visible to
> the network at all.

ok, what 'rest of path'? Is this per-asn "congestion" or per-hop
"congestion" information? how many bits (which bits?) in the ip-header
are going to carry this information?

>
>> I'm on board with letting folks know there is congestion, I'm not
>> convinced making decisions based on this is simple/helpful, yet.
>> Adding this information to packets, provided I don't need new
>> silicon/cpu-cycles/state to do this doesn't strike me as hard,
>
> Good
>
>> I can
>> imagine that most backbone providers won't care and probably won't
>> implement the marking, but... I could be wrong.
>
> No network has to implement anything if they don't want to. It still works
> for others who do. I also don't imagine backbone providers will care about
> this.
>
> It's easier for a backbone to throw capacity at the problem when they're
> running a few large links rather than loads of smaller links. So the smaller
> links around the network edge are always going to bottleneck congestion on
> behalf of cores and backbones.
>
>
>> Marking something purely 'congested path' or not, though, isn't very
>> helpful; there are (almost always) many hops along a path and many
>> paths a packet/flow may take. At a single point in time one part of a
>> path may be congested, but there isn't any guarantee that part will
>> still be affected even by the 1 RTT time required to do something.
>> (last-mile problems aside, nothing fixes a 56k modem except... more
>> bandwidth)
>
> I think you might have misunderstood. ConEx isn't intended to change who
> does congestion control. Hosts still do that on fast timescales.

part of this confusion is that the wording (even in this thread)
points to networks doing the congestion control. That's one reason I
put the horrid ascii art at the top :) "Where does this congestion
control happen?"

> ConEx is merely intended to allow the network to count up how much
> congestion is still contributed to by hosts, so the network can judge how
> effectively the host is doing congestion control. That can ultimately be
> used to take over control (via a network-based congestion policer) if the
> host is persistently being profligate (whether through selfishness, malice
> or accident).

ok, but is the host causing problems just for itself, or for all users
on a shared link? If it's all users on a shared link, discarding packets
only helps if the end-host obeys the drop effects. In the end, if you
operate a large shared-media last mile (a wireless network) ... the end
stations still must compete for last-mile access. If everyone is well
behaved then of course more marking data and more polite
machines/users will permit better utilization of the link(s).

In the case of a wireless network (forgetting the cases where the
"network" can un-enroll/force-disconnect an end-station/user)
loud/impolite end-users can still cause problems for their shared
network brethren; there's nothing that can stop that, sadly... so
maybe we shouldn't rat-hole here either. For most last-mile cases the
goal ought to be to carry more good traffic on a link and reduce
retransmits and other overhead. ConEx should, I think, be able to
signal the end-user/station that it can do better; it should also help
the network (BRAS or peering-edge maybe) decide what traffic to prefer
to discard in times of stress.

>
>> Being able to use information about congestion along a path to more
>> efficiently use the available bandwidth is a fine plan; fewer
>> re-transmits is a good plan.
>>
>> >   ConEx provides the additional benefit that it exposes congestion
>> >   information as packets enter a network, not only at the point of
>> >   congestion. Therefore it could be feasible for anomaly detection
>>
>> ok, this is a point I need clarification on. Where is the congestion
>> it exposes?
>>  'there is congestion'
>>     or
>>  'at hop 12 there was congestion, you are at hop 22, fyi' (honestly
>> it'd be better to tell me 'AS 3 is congested' I think)
>
> Re-ECN deliberately doesn't tell anyone exactly where the congestion is -
> that would reveal too much. Whatever your viewing point, it only tells you
> how much congestion there is upstream of you and how much downstream.
>
> Rather than re-write all that's ever been written on ConEx in one email, can
> I point you at a section of a paper to get this?:
>        Section 4.2 of
>        "Using Self-interest to Prevent Malice;
>        Fixing the Denial of Service Flaw of the Internet"
>        <http://www.bobbriscoe.net/projects/refb/index.html#refb_dplinc>

I'll go have a read.

> This is the only paper where the full re-ECN protocol is described but in
> outline form, to save you having to wade through the detailed protocol spec.
> It's also the only paper about re-ECN and DDoS (other than my later PhD
> thesis).
>
> Plenty of other papers describe the two main codepoints of the protocol:
> - Expected whole-path congestion, W [inserted by the sender and immutable]
> - (ECN) congestion experienced upstream so far, U [marks added by routers]
> These are enough to answer your question above: a monitor at any point on
> the path can meter both W & U, so it can calculate expected downstream
> congestion on the rest-of-the-path as well: D = W - U.
>
> But only the above DDoS paper explains the third and last part of the
> protocol (initial credit) which enables re-ECN to work without any feedback
> at all - you need that for the DDoS case.
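so at any observation point the bookkeeping is tiny; a toy sketch of the D = W - U calculation (the byte-counting interface below is invented for illustration, it is not the real wire format or marking rules):

```python
# Toy model of the rest-of-path calculation at a monitoring point.
# W: fraction of bytes the sender declared as expected whole-path
#    congestion (immutable once set by the sender).
# U: fraction of bytes ECN-marked by routers upstream of the monitor.
# D = W - U: congestion expected downstream on the rest of the path.
def downstream_congestion(w_marked_bytes: int, u_marked_bytes: int,
                          total_bytes: int) -> float:
    w = w_marked_bytes / total_bytes
    u = u_marked_bytes / total_bytes
    return w - u

# Near the sender, little upstream marking has accumulated yet, so most
# of the declared congestion is still "owed" downstream:
d_near_sender = downstream_congestion(2_000, 100, 1_000_000)

# Near the receiver, U has caught up with W, so D approaches zero:
d_near_receiver = downstream_congestion(2_000, 1_900, 1_000_000)

# A persistently negative D (more upstream marking than the sender
# declared) is the understatement signal a policer would act on.
```

With these made-up byte counts, D shrinks from about 0.19% near the sender to about 0.01% near the receiver, which is the "upstream vs. downstream" view Bob describes.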
>
>
>> If the information as traffic enters my network is 'somewhere there is
>> congestion!' I'm not sure I can do much aside from buffer (not going
>> to happen for very long; buffers are expensive) or WRED/drop packets.
>> If the information is that at a router hop 10 hops back there was
>> congestion I may be able to either WRED traffic to that destination or
>> maybe shuttle packets onto slightly longer paths if they are available
>> (though that seems overly complex as a solution, to me).
>
> See above: We're not expecting the network to do fine-grained congestion
> control per flow - that's for the host to decide. We're wanting to look at
> the sum of all the congestion that traffic from a site (household, campus)
> is contributing to; everything on the Internet side of its access boundary.

'access boundary' == BRAS?

> Then at the next network border, the pair of networks at that border can see
> how much congestion is on each side of that border - for all traffic,
> wherever it is destined. And so on.

ok, clearly this is the Peering-Edge, fine.

> That's all you need - necessary and sufficient - to regulate profligate
> users (or bugged, or malicious) and to regulate networks that allow their
> users to be profligate (or bugged or malicious).

'allow their users' is a little pejorative... I may have a network
with 500k users, all with 1G links (full duplex). You may have a 1G
link to me; if your customer turns up a 'popular' site (say the
victoria-secret fashion show live stream, for a canonical example) ...
there will be issues between our networks. The only path to resolution
is to force traffic onto alternate paths, to upgrade links between these
two networks, or to just choose to drop traffic and force 'bad service'
on two parts of the population. VictoriaSecret happened to pick the last
option in 2000/2001 (I forget the right year).

I suppose with better marking/exposure of the problem one/both sides
could have chosen to drop other traffic, that may have improved
things... pathological cases aside though, I still say that better use
of the network by end-stations is good. Better info about what to drop
first is also good.

>> >   systems to use ConEx information to detect and shut down dangerous
>> >   floods of congestion at the point where traffic enters a network.
>> > "
>> >
>> > Do you think circumspect wording about DDoS like the above would still
>> > trigger an allergic reaction from some readers? Would this sort of text
>> > allay your concerns? Or are you adamant that there should be no mention
>> > of DDoS at all?
>>
>> it's too easy to rathole on ddos; there are far too many ways to
>> create problems with it, and today it's not really that much of a
>> problem. There are tools in existence today to deal with the vast
>> majority of ddos problems; it really isn't a huge problem provided you
>> prepare for and understand the threat(s).
>>
>> I suppose in summary: I'm not adamant about it one way or the other,
>> but I can see that it'll end up distracting from your actual
>> point/topic and thus cost you cycles you could have spent explaining
>> why conex/cc isn't just gussied-up 'long-distance toll settlement for
>> IP' (for instance, though I think conex itself just marking packets
>> doesn't really fall into the quoted phrase).
>
> Yes, I understand you're trying to help us avoid interminable arguments.

yes, though this thread wasn't my best effort :(

> I guess what I'm saying is: We don't just change IP for chuckles. The job of
> ConEx is to limit congestion caused by profligacy, malice and accidents. If
> it can only do that against people who don't try too hard to push back, we
> should not spend our precious time on ConEx.

or rephrased a tad: "If we can help make the utilization of the
network more optimal.." (by letting the network and users of the
network decide which packets to discard first)

> IOW, we /should/ have some level of argument about whether ConEx can
> achieve what it aims to achieve. Just not too much at this early stage,
> when we haven't even defined the ConEx protocol (as opposed to the
> re-ECN protocol).

ok.

-chris

> Bob
>
>
>> -chris
>>
>> > Bob
>> >
>> >
>> >
>> > ________________________________________________________________
>> > Bob Briscoe,                                    BT Innovate & Design
>> >
>
> ________________________________________________________________
> Bob Briscoe,                                       BT Innovate & Design
>

From richard_woundy@cable.comcast.com  Tue Aug 17 12:17:37 2010
Return-Path: <richard_woundy@cable.comcast.com>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 21F4C3A6804 for <conex@core3.amsl.com>; Tue, 17 Aug 2010 12:17:37 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -108.22
X-Spam-Level: 
X-Spam-Status: No, score=-108.22 tagged_above=-999 required=5 tests=[AWL=0.243, BAYES_00=-2.599, HELO_EQ_MODEMCABLE=0.768, HOST_EQ_MODEMCABLE=1.368, RCVD_IN_DNSWL_HI=-8, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 6nyuiHakad0v for <conex@core3.amsl.com>; Tue, 17 Aug 2010 12:17:34 -0700 (PDT)
Received: from pacdcimo01.cable.comcast.com (PacdcIMO01.cable.comcast.com [24.40.8.145]) by core3.amsl.com (Postfix) with ESMTP id AFD4B3A67FB for <conex@ietf.org>; Tue, 17 Aug 2010 12:17:33 -0700 (PDT)
Received: from ([24.40.55.40]) by pacdcimo01.cable.comcast.com with ESMTP with TLS id 5503620.89845607; Tue, 17 Aug 2010 15:18:07 -0400
Received: from PACDCEXCSMTP04.cable.comcast.com (24.40.15.118) by pacdcexhub03.cable.comcast.com (24.40.55.40) with Microsoft SMTP Server id 14.0.702.0; Tue, 17 Aug 2010 15:18:07 -0400
Received: from pacdcexcmb05.cable.comcast.com ([24.40.15.116]) by PACDCEXCSMTP04.cable.comcast.com with Microsoft SMTPSVC(6.0.3790.4675); Tue, 17 Aug 2010 15:18:05 -0400
x-mimeole: Produced By Microsoft Exchange V6.5
Content-Class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
Date: Tue, 17 Aug 2010 15:17:19 -0400
Message-ID: <EE00404438E9444D90AEA84210DC4067019325EA@pacdcexcmb05.cable.comcast.com>
In-Reply-To: <563C162F43D1B14E9FD2BC0A776C1E9127EF2857D3@WNEXMBX01.telecom.tcnz.net>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [conex]  comments on draft-moncaster-conex-concepts-uses-01.txt
Thread-Index: Acs6u1E/9V3Zkbv3SxC1bLGI/IQHzQDBJ+SwABw2zZA=
References: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se><20100812123814.GF16820@verdi>	<001b01cb3a2b$80f47420$82dd5c60$@com><alpine.DEB.1.10.1008130933210.8562@uplift.swm.pp.se> <563C162F43D1B14E9FD2BC0A776C1E9127EF2857D3@WNEXMBX01.telecom.tcnz.net>
From: "Woundy, Richard" <Richard_Woundy@cable.comcast.com>
To: Kevin Mason <Kevin.Mason@telecom.co.nz>, <conex@ietf.org>
X-OriginalArrivalTime: 17 Aug 2010 19:18:05.0159 (UTC) FILETIME=[EAFED770:01CB3E40]
Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 17 Aug 2010 19:17:37 -0000

>>I would challenge the view that another service provider "causes"
congestion on another ISP's network. Data is only passed between sender
and receiver so it is the ISP's customer that is requesting the data
from customers of the other provider, not the provider "sending it".

Kevin, you make a good point (customer pulls rather than provider
pushes), but I don't think that is the entire story.

Consider a hypothetical example (an *extreme* case to be sure, but not
totally disconnected from reality) in which a service provider changes
its routing policy in the following manner: from forwarding traffic
somewhat equally over 10 interconnects to a downstream ISP, to
forwarding all traffic over a single interconnect to the downstream ISP
(such as changing policy to "hot potato routing" from a central hosting
center to the downstream ISP). That change in policy is very likely to
cause a lot of congestion over the single interconnect link, even as the
overall consumer behavior doesn't change at all.
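
To put rough numbers on the hypothetical (the figures below are invented
for illustration; they are not from Rich's example):

```python
# Invented figures: ten 10 Gb/s interconnects, each carrying 6 Gb/s
# before the routing-policy change.
link_capacity_gbps = 10
per_link_traffic_gbps = 6
num_links = 10

aggregate_gbps = num_links * per_link_traffic_gbps  # 60 Gb/s in total

# After the change, the whole aggregate is offered to a single
# 10 Gb/s interconnect:
overload_factor = aggregate_gbps / link_capacity_gbps  # 6x one link's capacity
```

Even though no consumer changed behaviour, five-sixths of the offered
traffic now has nowhere to go on that single link.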

There are a lot less extreme, real-world examples of routing policy
changes that would have a similar network impact.

-- Rich

-----Original Message-----
From: conex-bounces@ietf.org [mailto:conex-bounces@ietf.org] On Behalf
Of Kevin Mason
Sent: Tuesday, August 17, 2010 1:35 AM
To: conex@ietf.org
Subject: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt

A comment on the draft.

On accounting approaches to using congestion information

I would challenge the view that another service provider "causes"
congestion on another ISP's network. Data is only passed between sender
and receiver so it is the ISP's customer that is requesting the data
from customers of the other provider, not the provider "sending it".

So if the "sending" provider is not causing the congestion (because it
is the receiving provider's customer that requested it), then arguing
that the "sending" provider should pay for any congestion that resulted
might be difficult. I can see endless legal arguments as to why one or
the other party is culpable and therefore who should endure any
commercial consequences.

On network uses of the information, I think there is a general concept
that is not being captured well.

In ISP networks there are, very simply, two parts. Firstly there is the
connectivity between each account holder's demarcation (UNI) and an IP
edge device (BRAS/BNG in Broadband Forum speak). The IP edge device
typically facilitates AAA functions as well as user-based policy
enforcement, and by necessity is fully aware of which flows belong to
which UNIs (e.g. because they are all on a single authenticated VLAN or
PPP tunnel).

Beyond that point, between the IP edge and any peering point, the
network does not maintain any specific awareness of individual end
points. Routers could theoretically maintain information about each
source/destination pair AND consult some database to relate that to an
end-user profile, but this is not very scalable.

So in the core ISP network, if a forwarding next hop is approaching
overload, then the egress config of the router must deal with the
aggregate flow and act on information that is in the packet header
alone. Maintaining knowledge of which "users" have "caused" the most
congestion in recent times is too hard.

So a possible scheme might be a two-stage queue management regime: at a
lower queue depth, packets begin to be congestion marked if ECN capable
and maybe discarded if not; at a slightly higher queue depth, packets
that are ECN capable but with a high positive congestion value get
discarded, on the basis that these have a lower chance of reaching
their destination than packets not declaring an expectation of
congestion on the rest of the path. I don't see a "border monitor" as
described in the draft being very practical or useful in this part of
the network.
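
For illustration only, the two-stage regime sketched above might look
something like this (the threshold values and field names are my own
assumptions, not anything from the draft):

```python
# Illustrative sketch of a two-stage core-egress queue policy.
# LOW_MARK / HIGH_MARK are made-up thresholds for the example.
LOW_MARK = 50    # queue depth (packets) at which marking/early drop begins
HIGH_MARK = 80   # deeper threshold targeting high-congestion-value packets

def queue_action(queue_depth, ecn_capable, congestion_value):
    """Decide what to do with one arriving packet, using header info only."""
    if queue_depth >= HIGH_MARK and ecn_capable and congestion_value > 0:
        # Stage two: the packet already declares congestion experienced on
        # its path, so it is the least likely to reach its destination;
        # discard it first.
        return "drop"
    if queue_depth >= LOW_MARK:
        # Stage one: mark ECN-capable packets, drop the rest.
        return "mark" if ecn_capable else "drop"
    return "forward"
```

Note the policy needs no per-user state at all, which is what makes it
plausible in the core.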

When packets arrive at the BRAS destined for the user, then user-based
policy could be applied. One such policy might be to discard all packets
with a congestion deficit of more than x. This is the safety net against
dishonesty by the sender. An additional policy might be to discard
packets that have experienced congestion above a threshold (which may
be different for different user profiles) so far AND that are destined
to a user that has a recent history of many congestion-marked packets.
If previous congestion marks have resulted in the user backing off, then
this policy would not be invoked, so it would only apply to users that
are persistently contributing to congestion somewhere on the path
traversed (on the same provider's or any preceding provider's network).
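
A minimal sketch of those two per-user edge policies, assuming
hypothetical thresholds and a hypothetical per-user history counter
(none of these names come from the draft):

```python
# Illustrative BRAS/edge policy sketch; all constants are assumptions.
MAX_DEFICIT = 5            # "x" above: safety net against a dishonest sender
CONGESTION_THRESHOLD = 3   # per-packet experienced-congestion limit
HISTORY_LIMIT = 100        # recent congestion-marked packets seen for a user

def bras_policy(pkt_deficit, pkt_congestion, user_recent_marks):
    """Per-user policy applied where packets leave the BRAS toward the UNI."""
    if pkt_deficit > MAX_DEFICIT:
        # Sender declared less congestion than the path actually reported.
        return "drop"
    if pkt_congestion > CONGESTION_THRESHOLD and user_recent_marks > HISTORY_LIMIT:
        # User is persistently contributing to congestion on the path.
        return "drop"
    return "forward"
```

Unlike the core, the edge can afford the per-user state because the
BRAS already knows which flows belong to which UNI.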

These policy actions could constitute the "edge monitor" functions
referred to in the draft but would actually be part of the policy
functions of the edge device itself, not any independent function.

Others may have different views of how the revealed congestion
information might be used, but I believe it is useful to at least
consider the two parts of an ISP network when discussing possible uses
for the information.

Cheers
Kevin Mason
_______________________________________________
conex mailing list
conex@ietf.org
https://www.ietf.org/mailman/listinfo/conex

From prvs=484505B9D5=Kevin.Mason@telecom.co.nz  Tue Aug 17 16:40:20 2010
Return-Path: <prvs=484505B9D5=Kevin.Mason@telecom.co.nz>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 4F7613A6830 for <conex@core3.amsl.com>; Tue, 17 Aug 2010 16:40:20 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.995
X-Spam-Level: 
X-Spam-Status: No, score=-2.995 tagged_above=-999 required=5 tests=[AWL=0.604,  BAYES_00=-2.599, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id PFOI+L1Lv8WK for <conex@core3.amsl.com>; Tue, 17 Aug 2010 16:40:17 -0700 (PDT)
Received: from mgate2.telecom.co.nz (envoy-out.telecom.co.nz [146.171.15.100]) by core3.amsl.com (Postfix) with ESMTP id 49E4A3A6817 for <conex@ietf.org>; Tue, 17 Aug 2010 16:40:16 -0700 (PDT)
Received: from mgate6.telecom.co.nz (unknown [146.171.1.21]) by mgate2.telecom.co.nz (Tumbleweed MailGate 3.7.1) with ESMTP id 2D29310404F1; Wed, 18 Aug 2010 11:40:46 +1200 (NZST)
X-WSS-ID: 0L7BLRV-09-40I-02
X-M-MSG: 
Received: from hp2848.telecom.tcnz.net (hp2848.telecom.tcnz.net [146.171.228.250]) (using TLSv1 with cipher RC4-MD5 (128/128 bits)) (No client certificate requested) by mgate6.telecom.co.nz (Postfix) with ESMTP id 13EBE5B316C1; Wed, 18 Aug 2010 11:40:43 +1200 (NZST)
Received: from hp3119.telecom.tcnz.net (146.171.212.204) by hp2848.telecom.tcnz.net (146.171.228.250) with Microsoft SMTP Server (TLS) id 8.2.234.1; Wed, 18 Aug 2010 11:40:47 +1200
Received: from WNEXMBX01.telecom.tcnz.net ([146.171.212.201]) by hp3119.telecom.tcnz.net ([146.171.212.204]) with mapi; Wed, 18 Aug 2010 11:40:46 +1200
From: Kevin Mason <Kevin.Mason@telecom.co.nz>
To: "Woundy, Richard" <Richard_Woundy@cable.comcast.com>
Date: Wed, 18 Aug 2010 11:40:45 +1200
Thread-Topic: [conex]  comments on draft-moncaster-conex-concepts-uses-01.txt
Thread-Index: Acs6u1E/9V3Zkbv3SxC1bLGI/IQHzQDBJ+SwABw2zZAADGMdcA==
Message-ID: <563C162F43D1B14E9FD2BC0A776C1E9127EF3A66C7@WNEXMBX01.telecom.tcnz.net>
References: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se><20100812123814.GF16820@verdi> <001b01cb3a2b$80f47420$82dd5c60$@com><alpine.DEB.1.10.1008130933210.8562@uplift.swm.pp.se> <563C162F43D1B14E9FD2BC0A776C1E9127EF2857D3@WNEXMBX01.telecom.tcnz.net> <EE00404438E9444D90AEA84210DC4067019325EA@pacdcexcmb05.cable.comcast.com>
In-Reply-To: <EE00404438E9444D90AEA84210DC4067019325EA@pacdcexcmb05.cable.comcast.com>
Accept-Language: en-US, en-NZ
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US, en-NZ
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Cc: "conex@ietf.org" <conex@ietf.org>
Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 17 Aug 2010 23:40:20 -0000

Cheers
Kevin Mason

> -----Original Message-----
> From: Woundy, Richard [mailto:Richard_Woundy@cable.comcast.com]
> Sent: Wednesday, 18 August 2010 7:17 a.m.
> To: Kevin Mason; conex@ietf.org
> Subject: RE: [conex] comments on draft-moncaster-conex-concepts-uses-
> 01.txt
>
> >>I would challenge the view that another service provider "causes"
> congestion on another ISP's network. Data is only passed between sender
> and receiver so it is the ISP's customer that is requesting the data
> from customers of the other provider, not the provider "sending it".
>
> Kevin, you make a good point (customer pulls rather than provider
> pushes), but I don't think that is the entire story.
>
> Consider a hypothetical example (an *extreme* case to be sure, but not
> totally disconnected from reality) in which a service provider changes
> its routing policy in the following manner: from forwarding traffic
> somewhat equally over 10 interconnects to a downstream ISP, to
> forwarding all traffic over a single interconnect to the downstream ISP
> (such as changing policy to "hot potato routing" from a central hosting
> center to the downstream ISP). That change in policy is very likely to
> cause a lot of congestion over the single interconnect link, even as the
> overall consumer behavior doesn't change at all.

[Kevin Mason] The culpability for congestion on the interconnecting hop
is very dependent on who dimensions it and the commercials that underpin
it. If the "sending" provider dimensions it, then congestion on this hop
is solely the accountability of the sending provider; no need for
congestion exposure here, as they can directly measure it today (queuing
and discards). If the receiving provider dimensions it, then they have
no current visibility of the congestion on the preceding link, but
congestion exposure will potentially only tell them that congestion has
already been experienced, not where. So if there is any SLA around
performance of the interconnection hop, then the receiving party still
has to get info from the sending provider to ascertain that it is the
interconnecting hop that is the problem and not a hop deeper in the
sending provider's network.

Capturing the information for network management purposes at the
interprovider level may well be very useful for overall network planning
purposes if practical, but using it to underpin payment between
providers is very different.

However, I do not think we need to get too hung up on this. The point is
that it is debatable who "causes" the congestion in an interprovider
context, and therefore getting agreement on who might "pay" for it has
the potential only to enrich the legal industry. I am, however, in
favour of using congestion information for accounting purposes at an
individual ISP customer account level, to recognise and reward
cooperative end-user behaviour (e.g. congestion caps).


To my mind the power of ConEx is to provide a forwarding device a richer
set of information to make better decisions on how to manage its queues
for the greater good.

>
> There are a lot less extreme, real-world examples of routing policy
> changes that would have a similar network impact.
>
> -- Rich
>
> -----Original Message-----
> From: conex-bounces@ietf.org [mailto:conex-bounces@ietf.org] On Behalf
> Of Kevin Mason
> Sent: Tuesday, August 17, 2010 1:35 AM
> To: conex@ietf.org
> Subject: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
>
> A comment on the draft.
>
> On accounting approaches to using congestion information
>
> I would challenge the view that another service provider "causes"
> congestion on another ISP's network. Data is only passed between sender
> and receiver so it is the ISP's customer that is requesting the data
> from customers of the other provider, not the provider "sending it".
>
> So if the "sending" provider is not causing the congestion (because it
> is the receiving provider's customer that requested it), then arguing
> that the "sending" provider should pay for any congestion that resulted
> might be difficult. I can see endless legal arguments as to why one or
> the other party is culpable and therefore who should endure any
> commercial consequences.
>
> On network uses of the information, I think there is a general concept
> that is not being captured well.
>
> In ISP networks there are, very simply, two parts. Firstly there is the
> connectivity between each account holder's demarcation (UNI) and an IP
> edge device (BRAS/BNG in Broadband Forum speak). The IP edge device
> typically facilitates AAA functions as well as user-based policy
> enforcement, and by necessity is fully aware of which flows belong to
> which UNIs (e.g. because they are all on a single authenticated VLAN or
> PPP tunnel).
>
> Beyond that point, between the IP edge and any peering point, the
> network does not maintain any specific awareness of individual end
> points. Routers could theoretically maintain information about each
> source/destination pair AND consult some database to relate that to an
> end-user profile, but this is not very scalable.
>
> So in the core ISP network, if a forwarding next hop is approaching
> overload, then the egress config of the router must deal with the
> aggregate flow and act on information that is in the packet header
> alone. Maintaining knowledge of which "users" have "caused" the most
> congestion in recent times is too hard.
>
> So a possible scheme might be a two-stage queue management regime: at a
> lower queue depth, packets begin to be congestion marked if ECN capable
> and maybe discarded if not; at a slightly higher queue depth, packets
> that are ECN capable but with a high positive congestion value get
> discarded, on the basis that these have a lower chance of reaching
> their destination than packets not declaring an expectation of
> congestion on the rest of the path. I don't see a "border monitor" as
> described in the draft being very practical or useful in this part of
> the network.
>
> When packets arrive at the BRAS destined for the user, then user-based
> policy could be applied. One such policy might be to discard all packets
> with a congestion deficit of more than x. This is the safety net against
> dishonesty by the sender. An additional policy might be to discard
> packets that have experienced congestion above a threshold (which may
> be different for different user profiles) so far AND that are destined
> to a user that has a recent history of many congestion-marked packets.
> If previous congestion marks have resulted in the user backing off, then
> this policy would not be invoked, so it would only apply to users that
> are persistently contributing to congestion somewhere on the path
> traversed (on the same provider's or any preceding provider's network).
>
> These policy actions could constitute the "edge monitor" functions
> referred to in the draft but would actually be part of the policy
> functions of the edge device itself, not any independent function.
>
> Others may have different views of how the revealed congestion
> information might be used, but I believe it is useful to at least
> consider the two parts of an ISP network when discussing possible uses
> for the information.
>
> Cheers
> Kevin Mason
> _______________________________________________
> conex mailing list
> conex@ietf.org
> https://www.ietf.org/mailman/listinfo/conex

From prvs=484623993E=Kevin.Mason@telecom.co.nz  Tue Aug 17 17:03:24 2010
Return-Path: <prvs=484623993E=Kevin.Mason@telecom.co.nz>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 2E1963A635F for <conex@core3.amsl.com>; Tue, 17 Aug 2010 17:03:24 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: 0.603
X-Spam-Level: 
X-Spam-Status: No, score=0.603 tagged_above=-999 required=5 tests=[AWL=-3.398,  BAYES_50=0.001, GB_SUMOF=5, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id udkn1fNqEmeU for <conex@core3.amsl.com>; Tue, 17 Aug 2010 17:03:21 -0700 (PDT)
Received: from mgate2.telecom.co.nz (envoy-out.telecom.co.nz [146.171.15.100]) by core3.amsl.com (Postfix) with ESMTP id 477583A680A for <conex@ietf.org>; Tue, 17 Aug 2010 17:03:19 -0700 (PDT)
Received: from mgate3.telecom.co.nz (unknown [146.171.1.21]) by mgate2.telecom.co.nz (Tumbleweed MailGate 3.7.1) with ESMTP id 24C4910407F4; Wed, 18 Aug 2010 12:03:51 +1200 (NZST)
X-WSS-ID: 0L7BMUG-03-17I-02
X-M-MSG: 
Received: from hp2846.telecom.tcnz.net (hp2846.telecom.tcnz.net [146.171.228.248]) (using TLSv1 with cipher RC4-MD5 (128/128 bits)) (No client certificate requested) by mgate3.telecom.co.nz (Postfix) with ESMTP id 1760B49F8FA5; Wed, 18 Aug 2010 12:03:51 +1200 (NZST)
Received: from hp3120.telecom.tcnz.net (146.171.212.205) by hp2846.telecom.tcnz.net (146.171.228.248) with Microsoft SMTP Server (TLS) id 8.2.234.1; Wed, 18 Aug 2010 12:03:51 +1200
Received: from WNEXMBX01.telecom.tcnz.net ([146.171.212.201]) by hp3120.telecom.tcnz.net ([146.171.212.205]) with mapi; Wed, 18 Aug 2010 12:03:51 +1200
From: Kevin Mason <Kevin.Mason@telecom.co.nz>
To: Christopher Morrow <morrowc.lists@gmail.com>
Date: Wed, 18 Aug 2010 12:03:50 +1200
Thread-Topic: [conex] ConEx & DDoS
Thread-Index: Acs+OKn+ESd4LmC2Sk6qwu5pFKeCIgAL/G9A
Message-ID: <563C162F43D1B14E9FD2BC0A776C1E9127EF3A6931@WNEXMBX01.telecom.tcnz.net>
References: <201008031942.o73JgGSW018260@bagheera.jungle.bt.co.uk> <AANLkTikZFvkOQjNLuasif+vAjeJSac1E-BqR6pptn=7p@mail.gmail.com> <201008061551.o76FpKPZ010840@bagheera.jungle.bt.co.uk> <AANLkTi=ALJa0BpqnnBVH9XSUd61mYFOiQSGo_AzQXEE9@mail.gmail.com>
In-Reply-To: <AANLkTi=ALJa0BpqnnBVH9XSUd61mYFOiQSGo_AzQXEE9@mail.gmail.com>
Accept-Language: en-US, en-NZ
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US, en-NZ
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Cc: "conex@ietf.org" <conex@ietf.org>
Subject: Re: [conex] ConEx & DDoS
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 18 Aug 2010 00:03:24 -0000

Cheers
Kevin Mason


> -----Original Message-----
> From: conex-bounces@ietf.org [mailto:conex-bounces@ietf.org] On Behalf Of
> Christopher Morrow
> Sent: Wednesday, 18 August 2010 6:18 a.m.
> To: Bob Briscoe
> Cc: conex@ietf.org
> Subject: Re: [conex] ConEx & DDoS
>
> it would help (me at least) in this discussion if we think of a
> notional network like:
>

[Kevin Mason] This would be helpful, but I suggest we change the labels
and add a core on the other side to make it a little more generic. There
may be some debate about the actual labels.
>
>            receiver
>              |
>            cpe
>             |
        Customer policy point (e.g. BRAS, GGSN, DOCSIS equivalent)
>         /     \
>     coreR1    coreR2 (inside a single ASN)
>       \        /
>        peer/transit edge (exit/entrance to another ASN)
          /    \
      coreS1   coreS2
         \     /
        Customer policy point (e.g. PE, another BRAS for peer to peer)
>          |
>         CE
>          |
>        sender
> (apologies for bad ascii art)
>
> On Fri, Aug 6, 2010 at 11:51 AM, Bob Briscoe <rbriscoe@jungle.bt.co.uk>
> wrote:
> > Chris,
> >
> > At 04:37 04/08/2010, Christopher Morrow wrote:
> >>
> >> (I think I sub'd to the list with this address...)
> >>
> >> On Tue, Aug 3, 2010 at 3:42 PM, Bob Briscoe <rbriscoe@jungle.bt.co.uk>
> >> wrote:
> >> > Chris,
> >> >
> >> > During the ConEx w-g session last Tuesday in Maastricht you
> >> > suggested we
> >> > should not include DDoS mitigation as a use-case for ConEx. I was
> >> > willing to
> >> > agree as we don't need to court controversy.
> >>
> >> yup, no use rat-holing if it's not central to the discussion.
>
> oops, we rat-holed... this sort of proves my point about ddos
> discussions wrt conex though.
> I would stick with (for conex): "Expose information about congestion
> on the network path, permit better/more-intelligent discard profiles
> to be used along the network paths."
>
> Where 'better/more-intelligent' is really: "Drop traffic that's less
> important to the end-users."
>
> Lee/Rich I think explained this (to me) as: "make sure my Internet
> gaming works perfectly, it is OK to slow down a large video file
> transfer." I believe plus-net in the UK does some of this today? (I
> may have the wrong provider... but it's a competitor of BT's I
> believe)
>
> >> > However, the co-authors of draft-moncaster-conex-concepts-uses have
> >> > asked me to float the idea that, altho we won't include DDoS as a
> >> > use-case in its own right, we should mention it as an extreme case
> >> > of two other use-cases. Let me explain...
> >> >
> >> > The use-cases we plan to include are:
> >> >
> /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
> >> > 5. Use cases (Highlighting that this is neither an exhaustive list
> nor a
> >> > prescriptive list...)
> >> >  5.1  ConEx for better traffic Control
> >> >  a. Targeting the right traffic
> >> >  b. Encouraging (and eventually enforcing) better CC
> >> >  5.2 ConEx for better traffic monitoring
> >> >  a. For compliance with SLAs
> >> >  b. For assessing performance of your provider
> >> >  c. Monitoring congestion hotspots for targeted upgrades
> >> >  d. Monitoring congestion anomalies (Equipment problems or helping
> >> > identify
> >> > DDoS)
> >> >
> /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
> >> >
> >> > We could include mentions of DDoS along the following lines (no
> >> > need to word-smith - I'm just trying to outline the concepts).
> >> >
> >> > 5.1b Encouraging (and eventually enforcing) better CC
> >> > "  ConEx information can be used as a control metric for making
> >> >   traffic control decisions, such as deciding which traffic to
> >> >   prioritise or to identify and block sources of persistent and
> >> >   damaging congestion.
> >>
> >> this, I think, falls into the category of things that Lee from TWC is
> >> interested in (and maybe his buddies at comcast/cox as well),
> >> understanding what traffic can suffer more loss without causing more
> >> end-user pain, and shifting that traffic to said profile.
> >
> > It's no coincidence that Rich Woundy is a co-author.
> >
> > Sure, Comcast have a current solution, but Rich is the first to say
> > that it
> > gives no encouragement (and actually punishes) approaches like LEDBAT,
> > because it attributes blame for high utilisation by volume rather than
> > congestion-volume. There's a list of other things Rich presented in the
> > ConEx BoF that makes ConEx worth doing beyond what Comcast currently
> > do.
> > <http://www.ietf.org/proceedings/76/slides/conex-3.pdf>
> >
> > Whatever, I only included this text as a lead-up to the DDoS text
> later...
>
> better understanding of what contributes to congestion in 'your
> network' and what traffic your customers don't mind missing a few
> packets of (at the benefit of their real-time needs) is good.
>
> >
> >> >   Simple ingress policer mechanisms, such as those described in
> >> >   [Policing-freedom] and [re-ecn-motive], could control the
> >> >   overall volume of congestion entering a network from each user.
> >> >   Such a policer could lead to a number of beneficial outcomes:
> >>
> >> these seem to be of the flavor of things Comcast 'powerboost' does, at
> >> the CPE actually (if I understand their brand of magic correctly).
> >
> > No, powerboost is very different.
> >
> > [BTW, I and others did a start-up called Qariba back in 2001 which
> > built a
> > powerboost-like feature for cable networks, with a network API so it
> > could be initiated from a CDN - we called it Broadband-800 by analogy
> > to 800 phone calls, because the server end was effectively temporarily
> > buying more access capacity on the end-user's behalf. We had a lot of
> > interest in the US cable industry at the time, but we got hit by the
> > bubble bursting.]
> >
> > Powerboost gives you access to extra capacity irrespective of whether
> > you will contribute more to congestion. ConEx is much simpler and more
> > generic.
>
> so... 'PowerBoost' provides higher QoS/traffic-rate for a period of
> time; it doesn't account for what damage that change does to other
> traffic on my cable-link, the link to the head-end or the other links
> in the network... it can be triggered at the CPE or the BRAS (I'd bet
> that anywhere else in the Comcast network, to take a specific example,
> probably doesn't matter).
>
> I think that's the same thing I said earlier.
>
> >> >   o Heavy users might be encouraged to shift their usage away from
> >> >     peak times in order to be able to transfer more data without
> >> >     triggering a response from the policer;
> >> >   o Users might be encouraged to use software that shifts its
> >> >     usage away from congestion peaks (shifting in time whether by
> >> >     hours or seconds
> >> >     [LEDBAT] or shifting to less congested routes [MPTCP]), again to
> >> >     transfer more data without triggering the policer;
> >> >   o Developers of operating systems might be encouraged to supply
> >> >     such
> >> >     software as the default;
> >> >   o If certain applications did not use a congestion responsive
> >> >     transport and caused high levels of congestion-bit-rate, the
> >> >     policer would eventually force the bit-rate to reduce in
> >> >     response to congestion.
> >>
> >> Today this is (the last bullet) the equivalent of classifying a type
> >> of traffic (port/protocol/src/dst as classification hints) into a
> >> 'high loss' bucket/queue and just dropping faster/more of it. If the
> >> traffic is TCP you should get drops, backoff, sawtooth behaviour until
> >> you reach steady-state (maybe?) slower transfers.
> >>
> >> >   It is believed that a system of ConEx policers could be built that
> >> >   can verify the integrity of ConEx information and remove traffic
> >> >   that does not comply with the protocol [re-ecn-motive]. If such
> >> >   robustness against cheating is indeed possible, a ConEx policer
> >> >   would mitigate DDoS flooding attacks, at least to some extent,
> >> >   merely as a
> >> >   function of its ability to enforce a response to persistent and
> >> >   excessive congestion.
> >>
> >> So, there are surely cases of 'ddos' (or DoS) that include loud
> >> speakers... There are also many instances of DDoS that are many
> >> hundreds of thousands of (or millions) of very quiet voices that in
> >> total cause the DoS/DDoS effect.
> >>
> >> If you look at a system of marking of congestion information,
> >> depending upon where that marking happens, and on the traffic in
> >> question, there's no guarantee at all that the sources will be
> >> squelched.
> >
> > In the following I'm going to talk about re-ECN, rather than ConEx, because
> > we haven't defined ConEx yet...
> >
> >
> >> For example, imagine a DDoS of 1 RST pkt (could also do this with 1
> >> icmp-error-type message) per second from 1 million hosts across the
> >> whole of the network (a 'botnet' for instance). There will never be
> >> any packet sent back to the originators,
> >
> > (BTW, re-ECN doesn't need any packet sent back to the originators - you
> > might have misunderstood the design.)
> >
> >> the rate will be low enough
> >> that unless there is coincident traffic to the victim from these hosts
> >> (in the same protocol/port profile probably) no signal will ever be
> >> seen that leads to squelching of traffic.
> >
> > Yes, of course we've thought of this sort of attack. Even if we hadn't
> > thought of this, since 2005 the security/DoS community have been thinking up
> > attacks against re-ECN (pre-ConEx), so it would have been hard to miss such
> > an obvious one.
> >
> > If each bot was behind a 100Mb/s link, and everyday congestion levels were
> > typically 0.2%, a reasonable config of basic re-ECN policer (I can give
>
> where is this congestion level? on the 100mbps link? or somewhere in
> the local ASN? or external to that ASN?
> Looking at the ascii-network-art at the top:
>
> user -> cpe?
> cpe -> bras?
> bras -> core?
> core -> peering-edge?
> other-net -> other-net ? (ran out of space/time on the art, call this
> 'middle of the Internet')
> PE -> CE ?
> CE -> server?
>
> where the congestion is will determine what sort of measures can be
> applied today. In the future though if the end-stations know there is
> congestion they could choose to modify their behavior to better
> utilize the network...
>
> > typical numbers if you want) would allow a congestion-bit-rate of 10Mb/s of
> > *marked* packets for a minute, then 30kb/s of sustained marked traffic after
> > that.
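[The policer allowance described above (30kb/s sustained, plus a burst deep enough for a minute of 10Mb/s marked traffic) behaves like a token bucket over congestion-marked bits. The following is an illustrative sketch under those assumptions, not the re-ECN specification; all names and the bucket parameters are hypothetical.]

```python
class CongestionPolicer:
    """Hypothetical token bucket over congestion-marked bits, not total bits.

    Defaults mirror the figures quoted above: a 30 kb/s sustained allowance
    of *marked* traffic, with a burst equal to one minute at 10 Mb/s.
    """

    def __init__(self, fill_rate_bps=30_000, burst_bits=10_000_000 * 60):
        self.fill_rate = fill_rate_bps       # sustained marked-bit allowance
        self.capacity = float(burst_bits)    # burst depth
        self.bucket = float(burst_bits)      # start full

    def tick(self, seconds):
        """Refill the bucket for elapsed wall-clock time."""
        self.bucket = min(self.capacity, self.bucket + self.fill_rate * seconds)

    def admit(self, marked_bits):
        """Charge only congestion-marked bits; unmarked traffic is free.

        Returning False is the sanction point where the policer would drop
        or throttle the sender's traffic.
        """
        if marked_bits <= self.bucket:
            self.bucket -= marked_bits
            return True
        return False
```

This also reproduces the attack-rate arithmetic quoted further down: once the burst is drained, fully-marked 12,000-bit (MTU-size) packets are sustainable at only 30,000 / 12,000 = 2.5 pkt/s, i.e. one packet every 0.4 s.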
> >
> > If your attack were a flooding attack with MTU size data packets (12,000b),
> > rather than an RST attack (which can be detected and ignored), each bot
>
> the 'and ignored' part is a little misleading since 1 rst/second at
> the source isn't noticeable, 1m rst/second at the victim is another
> story. There will be effects on the network at the victim end,
> 'ignore' here is an oversimplification. (don't think it's worthwhile
> rat-holing here)
>
> > would have to congestion mark all the packets to have any flooding effect.
> > If each bot could draw congestion from its policer allowance at 30kb/s (see
> > above), a basic re-ECN policer would allow it to sustain an attack at 1 pkt
> > every 0.4sec - a little more than twice as fast as your attack.
> >
> > Agreeing on the 'largest' botnet ever seen isn't easy, but the Mariposa
> > botnet dismantled earlier this year involved ~12M separate IP addresses.
> > No-one can know whether that implied 12M separate machines, but I doubt it -
> > source address spoofing could have been used. This is relevant because a
>
> the numbers for mariposa I think were from C&C check-ins, so these
> required 3-way handshakes and weren't spoofed. I believe there's some
> rotation in the address usage (dhcp-like effects) but in the end 1m or
> 12m isn't really important. (also not important to rat-hole here)
>
> > re-ECN policer would sit at the physical access - nothing in re-ECN depends
> > on valid source addresses. If each machine was spoofing 24 other addresses
> > (a total guess), that's about 500,000 real machines.
> >
> > Let's assume it gets much harder to marshal a larger army than the one said
> > to be the largest yet. Then, a basic re-ECN deployment would contain an
> > attack to about 30kb/s x 500,000 = 15Gb/s (which is similar to your scenario
> > of 12kb/s x 1M = 12Gb/s).
>
> it's getting easier to marshal more hosts, not harder, again though
> not important to rathole.
>
> > In summary, just a basic re-ECN policer that isn't even looking for DoS can
>
> where is the policer deployed?
>
> o if at the source-host I still say it's hard to tell 1pps from
> another, if there is no congestion between src and BRAS there's no
> reason to mark anything there.
>
> o if at the destination/victim, then policing traffic to the victim
> can be done if the traffic is significantly different from normal
> traffic. Effects on legitimate traffic will be hard to avoid if the
> attack traffic looks enough like real/legitimate traffic.
>
> I don't think this is really even important enough to rathole on...
>
> > raise the bar to require a botnet about 3,000 times bigger than without
> > re-ECN [because that's the ratio of the peak traffic each site can send
> > (100Mb/s) divided by the averaged rate of losses in normal traffic
> > (30kb/s)].
> >
> > Yes, this isn't particularly impressive. But it's not meant to be. This is
> > just what you get without even trying to deal with DDoS specifically.
> >
> > i) The most obvious thing to add to a basic re-ECN policer would be a simple
> > anomaly detector triggered by packets to one destination prefix with >10% of
> > them marked for expected congestion. That could trigger a much more
> > stringent limit just for that destination prefix.
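[The simple trigger described in (i) can be sketched as per-prefix counters over the expected-congestion marking fraction. This is an illustrative model only; the class, field names, and the `min_packets` guard are all hypothetical, and the 10% threshold is taken from the text above.]

```python
from collections import defaultdict

MARKED_FRACTION_TRIGGER = 0.10   # the ">10% marked" threshold from the text


class PrefixAnomalyDetector:
    """Flag destination prefixes whose expected-congestion marking fraction
    exceeds the trigger, suggesting a tighter per-prefix policer limit."""

    def __init__(self, min_packets=100):
        self.totals = defaultdict(int)
        self.marked = defaultdict(int)
        self.min_packets = min_packets   # ignore statistically tiny samples

    def observe(self, dst_prefix, is_marked):
        """Count one packet toward the prefix, noting whether it carried
        an expected-congestion mark."""
        self.totals[dst_prefix] += 1
        if is_marked:
            self.marked[dst_prefix] += 1

    def triggered(self, dst_prefix):
        """True once the marked fraction for this prefix exceeds 10%."""
        n = self.totals[dst_prefix]
        if n < self.min_packets:
            return False
        return self.marked[dst_prefix] / n > MARKED_FRACTION_TRIGGER
```

Whether such state lives on the router or in a separate inline device is exactly the opex/capex question raised in the reply below the quoted text.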
>
> so, more state on the router? or is this another inline device doing
> the work? Factor in the opex/capex cost to implement/maintain this...
> at Verizon, studies/engineering work showed ~500k/BRAS would be required
> to implement in-line 'dpi' systems on 1G links; today those links are
> likely 10G and I don't think the cost has reduced. At 500k/BRAS I
> really should just turn up another 20G of capacity to the BRAS.
> Downstream from the BRAS I am limited by physics/last-mile issues, so
> we're back to the top of the thread: "If you can mark/know what
> traffic is more relevant to real-time ops at the customer-end then you
> can make a better decision about what packets to drop"
>
> >
> > ii) If the victim network deployed ConEx monitors around its border, attack
> > traffic would really stand out obviously because nearly all attack packets
> > would be marked for rest-of-path congestion. These monitors could then
> > trigger similar tight limits on congestion-bit-rate to that destination.
>
> this is costly, very costly; see above. It's not always obvious which
> traffic is contributing to the actual problem. The traffic most likely
> won't be marked until I mark it at ingress (at the peering edge),
> which means I need to know what to mark, which the victim
> host/customer-network will have to work out, or have me help them work
> out.
>
> > iii) I've assumed a homogeneous botnet. In practice many would be on slower
> > access links, and many would be within larger sites (e.g. campus nets)
> > sharing an aggregated congestion allowance at the access from the campus to
> > the Internet. I think my rough calculation is closer to 'worst case' than
> > 'typical'.
>
> perhaps, not worth rat-holing though.
>
> >
> >> The victim still suffers
> >> ~1mpps of traffic, and actually conex/CC just makes the problem far
> >> worse for the victim as all of his real-user traffic will be marked
> >> (over time) and thus squelched out while the DDoS continues to flow
> >> unabated.
> >
> > Nope. Importantly, normal /unsustained/ traffic to that destination would
> > take a while (1 minute in my example) to hit the tighter congestion policer
> > limits. So regular usage would be reasonably unaffected. Whereas the bots
> > have to sustain the attack to be effective.
>
> botnets sustain attacks; it's what they do (besides sending spam and
> other things not relevant to this discussion). Having sat through
> many multi-gigabit-per-second attacks over the last many years...
> sustaining a dos/ddos isn't really a problem for the attacker. User
> traffic is always affected in these cases, if the traffic from the
> attack looks like the normal customer traffic. If there's a method to
> differentiate attack from customer traffic (rst packet, icmp packet)
> the upstream can just drop the odd traffic. They don't need anything
> except the tools they have today to do this, and maintenance of more
> state isn't a help.
>
> Unless the victim can signal the upstream that 'traffic of this type'
> or 'packets fitting this profile' should be marked for discard/drop
> before other packets ... there's no real win. This, though, does
> resemble the start of the thread: "Tell me what traffic is important
> to your real-time ops, I'll apply a drop profile to all other traffic
> when congestion occurs."
>
> >
> >> >
> >> >   It would be foolhardy to claim that it will be possible to make ConEx
> >> >   invulnerable to all cheats and attacks, even though it has been hard to
> >> >   attack it so far. Nonetheless, ConEx would still be useful even if a
> >> >   specific deployment of ConEx policers contained vulnerabilities. ConEx
> >> >   could still prevent congestion collapse due to careless omission of
> >> >   congestion control, or due to release of software containing an
> >> >   accidental congestion control bug.
> >>
> >> I think that I nullified the above paragraph actually... except for a
> >> software bug case, though I'd argue that in the case of the UW NTP
> >> server incident ConEx/CC wouldn't have helped there either as the
> >> broken software would still have been broken and the real-users of the
> >> system would have just been CC"d out of existence.
> >
> > Nope. ConEx congestion policers would be run by the network operator and
> > effectively take over congestion control if hosts fail to do it for a
> > sustained period.
>
> where are these run? at the BRAS? at the Peering-Edge? on the CPE?
>
> > Yes, you could get bugs in ConEx policers. But it's unlikely (though not
> > impossible) that would happen at the same time as a widespread bug in hosts.
> > Safety in diversity.
> >
> > Yes, the ConEx marking process on the hosts might be bugged. But, as with
> > DoS protection, if a host or router is being flooded and has to drop stuff,
> > it is easy to preferentially drop unmarked ConEx packets (or non-ConEx
> > packets) first.
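[The drop preference described above can be sketched as a simple ranking over a flooded queue: non-ConEx traffic goes first, unmarked ConEx traffic next, marked ConEx packets last. The packet fields below are illustrative stand-ins, not the actual ConEx wire format.]

```python
def drop_candidates(queue, need_to_drop):
    """Return `need_to_drop` packets to discard, least-protected first.

    Protection order (hypothetical policy from the text above):
      0 - non-ConEx traffic        (dropped first)
      1 - ConEx but unmarked
      2 - marked ConEx packets     (dropped last)
    """
    def protection(pkt):
        if not pkt.get("conex"):
            return 0
        if not pkt.get("marked"):
            return 1
        return 2
    # sorted() is stable, so ties keep their queue order
    return sorted(queue, key=protection)[:need_to_drop]
```

The point being that the marking gives an overloaded box a cheap, local tiebreaker; it needs no per-flow state beyond reading the packet itself.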
>
> sure.
>
> > Agreed, this takes a leap-of-faith to believe a network might deploy all
> > this, but it only has to deploy its own protections for itself, which is a
> > good deployment property.
>
> where are the protections deployed, cost and maintenance will be
> important to the deployment path...
>
> >> > "
> >> >
> >> > 5.2d: Monitoring congestion anomalies
> >> > "  One of the most useful things ConEx provides is the ability to
> >> >   monitor the amount of congestion entering a network. Thus ConEx
> >> >   would add congestion to the information used by existing anomaly
> >> >   detection systems, thus greatly improving their ability to discriminate
> >> >   between pathological and benign anomalies. Such congestion-carrying
> >> >   anomalies might be due to accidental misconfigurations in another
> >> >   network, or deliberate malicious attacks.
> >
> > I must also add that re-ECN is only handling the flooding aspect of an
> > attack, not the /information/ in the packets (the RST flag in your example).
> > Of course a DDoS protection system needs to take this information into
> > account.
> >
> > What 5.2d says is that all ConEx aims to do is add rest-of-path congestion
> > to that information, which is a powerful addition to the network's
> > visibility. Particularly because info in the payload need not be visible to
> > the network at all.
>
> ok, what 'rest of path'? Is this per-asn "congestion" or per-hop
> "congestion" information? how many bits (which bits?) in the ip-header
> are going to carry this information?
>
> >
> >> I'm on board with letting folks know there is congestion, I'm not
> >> convinced making decisions based on this is simple/helpful, yet.
> >> Adding this information to packets, provided I don't need new
> >> silicon/cpu-cycles/state to do this doesn't strike me as hard,
> >
> > Good
> >
> >> I can
> >> imagine that most backbone providers won't care and probably won't
> >> implement the marking, but... I could be wrong.
> >
> > No network has to implement anything if they don't want to. It still works
> > for others who do. I also don't imagine backbone providers will care about
> > this.
> >
> > It's easier for a backbone to throw capacity at the problem when they're
> > running a few large links rather than loads of smaller links. So the smaller
> > links around the network edge are always going to bottleneck congestion on
> > behalf of cores and backbones.
> >
> >
> >> Marking something purely 'congested path' or not, though, isn't very
> >> helpful; there are (almost always) many hops along a path and many
> >> paths a packet/flow may take. At a single point in time one part of a
> >> path may be congested, but there isn't any guarantee that part will be
> >> affected even by the 1RTT time required to do something. (last-mile
> >> problems aside, nothing fixes a 56k modem except... more bandwidth)
> >
> > I think you might have misunderstood. ConEx isn't intended to change who
> > does congestion control. Hosts still do that on fast timescales.
>
> part of this confusion is that the wording (even in this thread)
> points to networks doing the congestion control. That's one reason I
> put the horrid ascii art at the top :) "Where does this congestion
> control happen?"
>
> > ConEx is merely intended to allow the network to count up how much
> > congestion is still contributed to by hosts, so the network can judge how
> > effectively the host is doing congestion control. That can ultimately be
> > used to take over control (via a network-based congestion policer) if the
> > host is persistently being profligate (whether through selfishness, malice
> > or accident).
>
> ok, but is the host causing problems just for itself, or for all users on
> a shared link? If it's a shared link, discarding packets only helps if the
> end-host obeys the drop effects. In the end, if you operate a large
> shared-media last-mile (wireless network)... the end stations still must
> compete for last-mile access. If everyone is well behaved then of course
> more marking data and more polite machines/users will permit better
> utilization of the link(s).
>
> In the case of a wireless network (forgetting the cases where the
> "network" can un-enroll/force-disconnect an end-station/user),
> loud/impolite end-users can still cause problems for their shared-network
> brethren; there's nothing that can stop that, sadly... so maybe we
> shouldn't rat-hole here either. For most last-mile cases the goal ought
> to be to carry more good traffic on a link, reducing retransmits and
> other overhead. ConEx should, I think, be able to signal the
> end-user/station that it can do better; it should also help the network
> (BRAS or peering-edge maybe) decide what traffic to prefer to discard in
> times of stress.
>
> >
> >> Being able to use information about congestion along a path to more
> >> efficiently use the available bandwidth is a fine plan; fewer
> >> re-transmits is a good plan.
> >>
> >> >   ConEx provides the additional benefit that it exposes congestion
> >> >   information as packets enter a network, not only at the point of
> >> >   congestion. Therefore it could be feasible for anomaly detection
> >>
> >> ok, this is a point I need clarification on. Where is the congestion
> >> it exposes?
> >>  'there is congestion'
> >>     or
> >>  'at hop 12 there was congestion, you are at hop 22, fyi' (honestly
> >> it'd be better to tell me 'as 3 is congested' I think)
> >
> > Re-ECN deliberately doesn't tell anyone exactly where the congestion is -
> > that would reveal too much. Whatever your viewing point, it only tells you
> > how much congestion there is upstream of you and how much downstream.
> >
> > Rather than re-write all that's ever been written on ConEx in one email, can
> > I point you at a section of a paper to get this?:
> >        Section 4.2 of
> >        "Using Self-interest to Prevent Malice;
> >        Fixing the Denial of Service Flaw of the Internet"
> >        <http://www.bobbriscoe.net/projects/refb/index.html#refb_dplinc>
>
> I'll go have a read.
>
> > This is the only paper where the full re-ECN protocol is described but in
> > outline form, to save you having to wade through the detailed protocol spec.
> > It's also the only paper about re-ECN and DDoS (other than my later PhD
> > thesis).
> >
> > Plenty of other papers describe the two main codepoints of the protocol:
> > - Expected whole path congestion, W [inserted by the sender and immutable]
> > - (ECN) congestion experienced upstream so far, U [marks added by routers]
> > These are enough to answer your question above: a monitor at any point on
> > the path can meter both W & U, so it can calculate expected downstream
> > congestion on the rest-of-the-path as well: D = W - U.
> >
> > But only the above DDoS paper explains the third and last part of the
> > protocol (initial credit) which enables re-ECN to work without any feedback
> > at all - you need that for the DDoS case.
> >
> >
> >> If the information as traffic enters my network is 'somewhere there is
> >> congestion!' I'm not sure I can do much aside from buffer (not going
> >> to happen for very long buffers are expensive) or WRED/drop packets.
> >> If the information is that at a router-hop 10 hops back there was
> >> congestion I may be able to either WRED traffic to that destination or
> >> maybe shuttle packets on slightly longer paths if they are available
> >> (though that seems overly complex as a solution, to me).
> >
> > See above: We're not expecting the network to do fine-grained congestion
> > control per flow - that's for the host to decide. We're wanting to look at
> > the sum of all the congestion that traffic from a site (household, campus)
> > is contributing to; everything on the Internet side of its access boundary.
>
> 'access boundary' == BRAS?
>
> > Then at the next network border, the pair of networks at that border can see
> > how much congestion is each side of that border - for all traffic, wherever
> > it is destined. And so on.
>
> ok, clearly this is the Peering-Edge, fine.
>
> > That's all you need - necessary and sufficient - to regulate profligate
> > users (or bugged, or malicious) and to regulate networks that allow their
> > users to be profligate (or bugged or malicious).
>
> 'allow their users' is a little pejorative... I may have a network with
> 500k users, all with 1G links (full duplex). You may have a 1G link to
> me; if your customer turns up a 'popular' site (say the Victoria's
> Secret fashion show live stream, for a canonical example) there will be
> issues between our networks. The only path to resolution is to force
> traffic onto alternate paths, upgrade links between the two networks, or
> just choose to drop traffic and force 'bad service' on two parts of the
> population. Victoria's Secret happened to pick the last option in
> 2000/2001 (I forget the right year).
>
> I suppose with better marking/exposure of the problem one/both sides
> could have chosen to drop other traffic, that may have improved
> things... pathological cases aside though, I still say that better use
> of the network by end-stations is good. Better info about what to drop
> first is also good.
>
> >> >   systems to use ConEx information to detect and shut down dangerous
> >> >   floods of congestion at the point where traffic enters a network.
> >> > "
> >> >
> >> > Do you think circumspect wording about DDoS like the above would
> still
> >> > trigger an allergic reaction from some readers? Would this sort of
> text
> >> > allay your concerns? Or are you adamant that there should be no
> mention
> >> > of
> >> > DDoS at all?
> >>
> >> it's too easy to rathole on ddos, there are far too many ways to
> >> create problems with it, and today it's not really that much of a
> >> problem. There are tools in existence today to deal with the vast
> >> majority of ddos problems, it really isn't a huge problem provided you
> >> prepare and understand the threat(s).
> >>
> >> I suppose in summary: I'm not adamant about it one way or the other,
> >> but I can see that it'll end up distracting from your actual
> >> point/topic and thus cost you cycles you could have spent explaining
> >> why conex/cc isn't just gussied up 'longdistance toll settlement for
> >> IP' (for instance, though I think conex itself just marking packets
> >> doesn't really fall into the quoted phrase).
> >
> > Yes, I understand you're trying to help us avoid interminable arguments=
.
>
> yes, though this thread wasn't my best effort :(
>
> > I guess what I'm saying is: We don't just change IP for chuckles. The job of
> > ConEx is to limit congestion caused by profligacy, malice and accidents. If
> > it can only do that against people who don't try too hard to push back, we
> > should not spend our precious time on ConEx.
>
> or rephrased a tad: "If we can help make the utilization of the
> network more optimal.." (by letting the network and users of the
> network decide which packets to discard first)
>
> > IOW, we /should/ have some level of argument about whether ConEx can achieve
> > what it aims to achieve. Just not too much at this early stage, when we
> > haven't even defined the ConEx protocol (as opposed to the re-ECN protocol).
>
> ok.
>
> -chris
>
> > Bob
> >
> >
> >> -chris
> >>
> >> > Bob
> >> >
> >> >
> >> >
> >> > ________________________________________________________________
> >> > Bob Briscoe,                                BT Innovate & Design
> >> >
> >
> > ________________________________________________________________
> > Bob Briscoe,                                BT Innovate & Design
> >
> _______________________________________________
> conex mailing list
> conex@ietf.org
> https://www.ietf.org/mailman/listinfo/conex

From christopher.morrow@gmail.com  Tue Aug 17 19:07:33 2010
From: Christopher Morrow <morrowc.lists@gmail.com>
To: Kevin Mason <Kevin.Mason@telecom.co.nz>
Cc: "conex@ietf.org" <conex@ietf.org>
Date: Tue, 17 Aug 2010 22:08:05 -0400
Subject: Re: [conex] ConEx & DDoS
Message-ID: <AANLkTindQbgE-pCwwaETVng1sN-VcGYfpga8iaTx8irk@mail.gmail.com>
In-Reply-To: <563C162F43D1B14E9FD2BC0A776C1E9127EF3A6931@WNEXMBX01.telecom.tcnz.net>

On Tue, Aug 17, 2010 at 8:03 PM, Kevin Mason <Kevin.Mason@telecom.co.nz> wrote:
>
>
> Cheers
> Kevin Mason
>
>
>> -----Original Message-----
>> From: conex-bounces@ietf.org [mailto:conex-bounces@ietf.org] On Behalf Of
>> Christopher Morrow
>> Sent: Wednesday, 18 August 2010 6:18 a.m.
>> To: Bob Briscoe
>> Cc: conex@ietf.org
>> Subject: Re: [conex] ConEx & DDoS
>>
>> it would help (me at least) in this discussion if we think of a
>> notional network like:
>>
>
> [Kevin Mason] This would be helpful, but I suggest we change the labels and
> add a core on the other side to make it a little more generic. There may be
> some debate about the actual labels.

seems fine to me... in fact maybe I'll stick a non-ascii-art picture up
tonight; my ascii-art-foo is horrid :(

>>
>>            receiver
>>               |
>>              cpe
>>               |
>         Customer policy point (e.g. BRAS, GGSN, DOCSIS equivalent)
>>             /    \
>>        coreR1    coreR2 (inside a single ASN)
>>             \    /
>>        peer/transit edge (exit/entrance to another ASN)
>             /    \
>        coreS1    coreS2
>             \    /
>         Customer policy point (e.g. PE, another BRAS for peer to peer)
>>               |
>>              CE
>>               |
>>            sender
>> (apologies for bad ascii art)
>>
>> On Fri, Aug 6, 2010 at 11:51 AM, Bob Briscoe <rbriscoe@jungle.bt.co.uk>
>> wrote:
>> > Chris,
>> >
>> > At 04:37 04/08/2010, Christopher Morrow wrote:
>> >>
>> >> (I think I sub'd to the list with this address...)
>> >>
>> >> On Tue, Aug 3, 2010 at 3:42 PM, Bob Briscoe <rbriscoe@jungle.bt.co.uk>
>> >> wrote:
>> >> > Chris,
>> >> >
>> >> > During the ConEx w-g session last Tuesday in Maastricht you suggested we
>> >> > should not include DDoS mitigation as a use-case for ConEx. I was
>> >> > willing to
>> >> > agree as we don't need to court controversy.
>> >>
>> >> yup, no use rat-holing if it's not central to the discussion.
>>
>> oops, we rat-holed... this sort of proves my point about ddos
>> discussions wrt conex though.
>> I would stick with (for conex): "Expose information about congestion
>> on the network path, permit better/more-intelligent discard profiles
>> to be used along the network paths."
>>
>> Where 'better/more-intelligent' is really: "Drop traffic that's less
>> important to the end-users."
>>
>> Lee/Rich I think explained this (to me) as: "make sure my Internet
>> gaming works perfectly, it is OK to slow down a large video file
>> transfer." I believe plus-net in the UK does some of this today? (I
>> may have the wrong provider... but it's a competitor of BT's I
>> believe)
>>
>> >> > However, the co-authors of draft-moncaster-conex-concepts-uses have
>> >> > asked me
>> >> > to float the idea that, although we won't include DDoS as a use-case in its
>> >> > own
>> >> > right, we should mention it as an extreme case of two other use-
>> cases.
>> >> > Let
>> >> > me explain...
>> >> >
>> >> > The use-cases we plan to include are:
>> >> >
>> /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
>> >> > 5. Use cases (Highlighting that this is neither an exhaustive list
>> nor a
>> >> > prescriptive list...)
>> >> >  5.1  ConEx for better traffic Control
>> >> >  a. Targeting the right traffic
>> >> >  b. Encouraging (and eventually enforcing) better CC
>> >> >  5.2 ConEx for better traffic monitoring
>> >> >  a. For compliance with SLAs
>> >> >  b. For assessing performance of your provider
>> >> >  c. Monitoring congestion hotspots for targeted upgrades
>> >> >  d. Monitoring congestion anomalies (Equipment problems or helping
>> >> >     identify DDoS)
>> >> >
>> /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
>> >> >
>> >> > We could include mentions of DDoS along the following lines (no need to
>> >> > word-smith - I'm just trying to outline the concepts).
>> >> >
>> >> > 5.1b Encouraging (and eventually enforcing) better CC
>> >> > "  ConEx information can be used as a control metric for making
>> >> >   traffic control decisions, such as deciding which traffic to
>> >> >   prioritise or to identify and block sources of persistent and
>> >> >   damaging congestion.
>> >>
>> >> this, I think, falls into the category of things that Lee from TWC is
>> >> interested in (and maybe his buddies at comcast/cox as well),
>> >> understanding what traffic can suffer more loss without causing more
>> >> end-user pain, and shifting that traffic to said profile.
>> >
>> > It's no coincidence that Rich Woundy is a co-author.
>> >
>> > Sure, Comcast have a current solution, but Rich is the first to say that it
>> > gives no encouragement (and actually punishes) approaches like LEDBAT,
>> > because it attributes blame for high utilisation by volume rather than
>> > congestion-volume. There's a list of other things Rich presented in the
>> > ConEx BoF that makes ConEx worth doing beyond what Comcast currently do.
>> > <http://www.ietf.org/proceedings/76/slides/conex-3.pdf>
>> >
>> > Whatever, I only included this text as a lead-up to the DDoS text
>> later...
>>
>> better understanding of what contributes to congestion in 'your
>> network' and what traffic your customers don't mind missing a few
>> packets of (at the benefit of their real-time needs) is good.
>>
>> >
>> >> >   Simple ingress policer mechanisms, such as those described in
>> >> >   [Policing-freedom] and [re-ecn-motive], could control the
>> >> >   overall volume of congestion entering a network from each user.
>> >> >   Such a policer could lead to a number of beneficial outcomes:
>> >>
>> >> these seem to be of the flavor of things Comcast 'powerboost' does, at
>> >> the CPE actually (if I understand their brand of magic correctly).
>> >
>> > No, powerboost is very different.
>> >
>> > [BTW, I and others did a start-up called Qariba back in 2001 which
>> > built a powerboost-like feature for cable networks, with a network API
>> > so it could be initiated from a CDN - we called it Broadband-800 by
>> > analogy to 800 phone calls, because the server end was effectively
>> > temporarily buying more access capacity on the end-user's behalf. We
>> > had a lot of interest in the US cable industry at the time, but we got
>> > hit by the bubble bursting.]
>> >
>> > Powerboost gives you access to extra capacity irrespective of whether
>> > you will contribute more to congestion. ConEx is much simpler and more
>> > generic.
>>
>> so... 'PowerBoost' provides higher QOS/traffic-rate for a period of
>> time; it doesn't account for what damage that change does to other
>> traffic on my cable-link, the link to the head-end or the other links
>> in the network... it can be triggered at the CPE or the BRAS (I'd bet
>> that anywhere else in the Comcast network, to take a specific example,
>> probably doesn't matter).
>>
>> I think that's the same thing I said earlier.
>>
>> >> >   o Heavy users might be encouraged to shift their usage away from
>> >> >     peak times in order to be able to transfer more data without
>> >> >     triggering a response from the policer;
>> >> >   o Users might be encouraged to use software that shifts its usage
>> >> >     away from congestion peaks (shifting in time whether by hours
>> >> >     or seconds [LEDBAT] or shifting to less congested routes
>> >> >     [MPTCP]), again to transfer more data without triggering the
>> >> >     policer;
>> >> >   o Developers of operating systems might be encouraged to supply
>> >> >     such software as the default;
>> >> >   o If certain applications did not use a congestion responsive
>> >> >     transport and caused high levels of congestion-bit-rate, the
>> >> >     policer would eventually force the bit-rate to reduce in
>> >> >     response to congestion.
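The ingress policer sketched in these bullets is essentially a token bucket that fills at a per-user congestion-bit-rate allowance and is drained by the bits of each congestion-marked packet. A minimal sketch under that assumption (the class name, parameters and numbers are illustrative, not from any ConEx/re-ECN spec):

```python
class CongestionPolicer:
    """Token-bucket sketch of a per-user congestion-volume policer."""

    def __init__(self, fill_rate_bps: float, burst_bits: float):
        self.fill_rate = fill_rate_bps  # sustained allowance (e.g. 30kb/s)
        self.burst = burst_bits         # bucket depth permits short bursts
        self.tokens = burst_bits
        self.last = 0.0

    def allow(self, now: float, marked_bits: int) -> bool:
        """Charge a congestion-marked packet; False means 'police it'."""
        # Refill at the allowance rate, capped at the bucket depth.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.fill_rate)
        self.last = now
        if self.tokens >= marked_bits:
            self.tokens -= marked_bits
            return True
        return False
```

With a 30kb/s fill rate and 12,000-bit packets, a source that has to mark every packet can sustain at most 2.5 marked packets per second before `allow()` starts returning False.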
>> >>
>> >> Today this (the last bullet) is the equivalent of classifying a type
>> >> of traffic (port/protocol/src/dst as classification hints) into a
>> >> 'high loss' bucket/queue and just dropping faster/more of it. If the
>> >> traffic is TCP you should get drops, backoff, sawtooth behaviour
>> >> until you reach steady-state (maybe?) slower transfers.
>> >>
>> >> >   It is believed that a system of ConEx policers could be built
>> >> >   that can verify the integrity of ConEx information and remove
>> >> >   traffic that does not comply with the protocol [re-ecn-motive].
>> >> >   If such robustness against cheating is indeed possible, a ConEx
>> >> >   policer would mitigate DDoS flooding attacks, at least to some
>> >> >   extent, merely as a function of its ability to enforce a response
>> >> >   to persistent and excessive congestion.
>> >>
>> >> So, there are surely cases of 'ddos' (or DoS) that include loud
>> >> speakers... There are also many instances of DDoS that are many
>> >> hundreds of thousands of (or millions) of very quiet voices that in
>> >> total cause the DoS/DDoS effect.
>> >>
>> >> If you look at a system of marking of congestion information,
>> >> depending upon where that marking happens, and on the traffic in
>> >> question, there's no guarantee at all that the sources will be
>> >> squelched.
>> >
>> > In the following I'm going to talk about re-ECN, rather than ConEx,
>> > because we haven't defined ConEx yet...
>> >
>> >
>> >> For example, imagine a DDoS of 1 RST pkt (could also do this with 1
>> >> icmp-error-type message) per second from 1 million hosts across the
>> >> whole of the network (a 'botnet' for instance). There will never be
>> >> any packet sent back to the originators,
>> >
>> > (BTW, re-ECN doesn't need any packet sent back to the originators -
>> > you might have misunderstood the design.)
>> >
>> >> the rate will be low enough that unless there is coincident traffic
>> >> to the victim from these hosts (in the same protocol/port profile
>> >> probably) no signal will ever be seen that leads to squelching of
>> >> traffic.
>> >
>> > Yes, of course we've thought of this sort of attack. Even if we
>> > hadn't thought of this, since 2005 the security/DoS community have
>> > been thinking up attacks against re-ECN (pre-ConEx), so it would have
>> > been hard to miss such an obvious one.
>> >
>> > If each bot was behind a 100Mb/s link, and everyday congestion levels
>> > were typically 0.2%, a reasonable config of basic re-ECN policer (I
>> > can give
>>
>> where is this congestion level? on the 100mbps link? or somewhere in
>> the local ASN? or external to that ASN?
>> Looking at the ascii-network-art at the top:
>>
>> user -> cpe?
>> cpe -> bras?
>> bras -> core?
>> core -> peering-edge?
>> other-net -> other-net ? (ran out of space/time on the art, call this
>> 'middle of the Internet')
>> PE -> CE ?
>> CE -> server?
>>
>> where the congestion is will determine what sort of measures can be
>> applied today. In the future though if the end-stations know there is
>> congestion they could choose to modify their behavior to better
>> utilize the network...
>>
>> > typical numbers if you want) would allow a congestion-bit-rate of
>> > 10Mb/s of *marked* packets for a minute, then 30kb/s of sustained
>> > marked traffic after that.
>> >
>> > If your attack were a flooding attack with MTU size data packets
>> > (12,000b), rather than an RST attack (which can be detected and
>> > ignored), each bot
>>
>> the 'and ignored' part is a little misleading since 1 rst/second at
>> the source isn't noticeable, 1m rst/second at the victim is another
>> story. There will be effects on the network at the victim end,
>> 'ignore' here is an oversimplification. (don't think it's worthwhile
>> rat-holing here)
>>
>> > would have to congestion mark all the packets to have any flooding
>> > effect. If each bot could draw congestion from its policer allowance
>> > at 30kb/s (see above), a basic re-ECN policer would allow it to
>> > sustain an attack at 1 pkt every 0.4sec - a little more than twice as
>> > fast as your attack.
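The per-bot rate Bob quotes checks out as a couple of lines of arithmetic (a sketch; the 30kb/s allowance and 12,000-bit MTU packets are the thread's illustrative figures, not protocol constants):

```python
# Per-bot arithmetic from the paragraph above.
MTU_BITS = 12_000        # "MTU size data packets (12,000b)"
ALLOWANCE_BPS = 30_000   # sustained marked-traffic allowance per site

# A bot that must congestion-mark every packet can sustain:
pkts_per_sec = ALLOWANCE_BPS / MTU_BITS  # 2.5 pkt/s
interval = 1 / pkts_per_sec              # 1 packet every 0.4 s
print(pkts_per_sec, interval)            # 2.5 0.4
```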
>> >
>> > Agreeing on the 'largest' botnet ever seen isn't easy, but the
>> > Mariposa botnet dismantled earlier this year involved ~12M separate
>> > IP addresses. No-one can know whether that implied 12M separate
>> > machines, but I doubt it - source address spoofing could have been
>> > used. This is relevant because a
>>
>> the numbers for mariposa I think were from C&C check-ins, so these
>> required 3-way handshakes and weren't spoofed. I believe there's some
>> rotation in the address usage (dhcp-like effects) but in the end 1m or
>> 12m isn't really important. (also not important to rat-hole here)
>>
>> > re-ECN policer would sit at the physical access - nothing in re-ECN
>> > depends on valid source addresses. If each machine was spoofing 24
>> > other addresses (a total guess), that's about 500,000 real machines.
>> >
>> > Let's assume it gets much harder to marshall a larger army than the
>> > one said to be the largest yet. Then, a basic re-ECN deployment would
>> > contain an attack to about 30kb/s x 500,000 = 15Gb/s (which is
>> > similar to your scenario of 12kb/s x 1M = 12Gb/s).
>>
>> it's getting easier to marshal more hosts, not harder, again though
>> not important to rathole.
>>
>> > In summary, just a basic re-ECN policer that isn't even looking for
>> > DoS can
>>
>> where is the policer deployed?
>>
>> o if at the source-host I still say it's hard to tell 1pps from
>> another, if there is no congestion between src and BRAS there's no
>> reason to mark anything there.
>>
>> o if at the destination/victim, then policing traffic to the victim
>> can be done if the traffic is significantly different from normal
>> traffic. Effects on legitimate traffic will be hard to avoid if the
>> attack traffic looks enough like real/legitimate traffic.
>>
>> I don't think this is really even important enough to rathole on...
>>
>> > raise the bar to require a botnet about 3,000 times bigger than
>> > without re-ECN [because that's the ratio of the peak traffic each
>> > site can send (100Mb/s) divided by the averaged rate of losses in
>> > normal traffic (30kb/s)].
>> >
>> > Yes, this isn't particularly impressive. But it's not meant to be.
>> > This is just what you get without even trying to deal with DDoS
>> > specifically.
>> >
>> > i) The most obvious thing to add to a basic re-ECN policer would be a
>> > simple anomaly detector triggered by packets to one destination
>> > prefix with >10% of them marked for expected congestion. That could
>> > trigger a much more stringent limit just for that destination prefix.
>>
>> so, more state on the router? or is this another inline device doing
>> the work? Factor in cost in opex/capex to implement/maintain this...
>> at verizon studies/eng work showed ~500k/BRAS would be required to
>> implement in-line 'dpi' systems on 1g links, today those links are
>> likely 10g and I don't think the cost has reduced. at 500k/BRAS I
>> really should just turn up another 20g of capacity to the BRAS.
>> Downstream from the BRAS I am limited by physics/last-mile issues so
>> we're back to the top of the thread: "If you can mark/know what
>> traffic is more relevant to real-time ops at the customer-end then you
>> can make a better decision about what packets to drop"
>>
>> >
>> > ii) If the victim network deployed ConEx monitors around its border,
>> > attack traffic would really stand out obviously because nearly all
>> > attack packets would be marked for rest-of-path congestion. These
>> > monitors could then trigger similar tight limits on
>> > congestion-bit-rate to that destination.
>>
>> this is costly, very costly, see above. It's not always obvious which
>> traffic is contributing to the actual problem, the traffic most likely
>> won't be marked until I mark it at ingress (at the peering edge),
>> which means I need to know what to mark, which the victim
>> host/customer-network will have to work out, or have me help them work
>> out.
>>
>> > iii) I've assumed a homogeneous botnet. In practice many would be on
>> > slower access links, and many would be within larger sites (e.g.
>> > campus nets) sharing an aggregated congestion allowance at the access
>> > from the campus to the Internet. I think my rough calculation is
>> > closer to 'worst case' than 'typical'.
>>
>> perhaps, not worth rat-holing though.
>>
>> >
>> >> The victim still suffers
>> >> ~1mpps of traffic, and actually conex/CC just makes the problem far
>> >> worse for the victim as all of his real-user traffic will be marked
>> >> (over time) and thus squelched out while the DDoS continues to flow
>> >> unabated.
>> >
>> > Nope. Importantly, normal /unsustained/ traffic to that destination
>> > would take a while (1 minute in my example) to hit the tighter
>> > congestion policer limits. So regular usage would be reasonably
>> > unaffected. Whereas the bots have to sustain the attack to be
>> > effective.
>>
>> botnets sustain attacks, it's what they do (among sending spam or
>> other things, not relevant to this discussion). Having sat through
>> many multi-gigabit per second attacks over the last many years...
>> sustaining a dos/ddos isn't really a problem for the attacker. User
>> traffic is always affected in these cases, if the traffic from the
>> attack looks like the normal customer traffic. If there's a method to
>> differentiate attack from customer traffic (rst packet, icmp packet)
>> the upstream can just drop the odd-traffic. They don't need anything
>> except the tools they have today to do this, and maintenance of more
>> state isn't a help.
>>
>> Unless the victim can signal the upstream that 'traffic of this type'
>> or 'packets fitting this profile' should be marked for discard/drop
>> before other packets ... there's no real win. This, though, does
>> resemble the start of the thread: "Tell me what traffic is important
>> to your real-time ops, I'll apply a drop profile to all other traffic
>> when congestion occurs."
>>
>> >
>> >> >
>> >> >   It would be foolhardy to claim that it will be possible to make
>> >> >   ConEx invulnerable to all cheats and attacks, even though it has
>> >> >   been hard to attack it so far. Nonetheless, ConEx would still be
>> >> >   useful even if a specific deployment of ConEx policers contained
>> >> >   vulnerabilities. ConEx could still prevent congestion collapse
>> >> >   due to careless omission of congestion control, or due to
>> >> >   release of software containing an accidental congestion control
>> >> >   bug.
>> >>
>> >> I think that I nullified the above paragraph actually... except for
>> >> a software bug case, though I'd argue that in the case of the UW NTP
>> >> server incident ConEx/CC wouldn't have helped there either, as the
>> >> broken software would still have been broken and the real users of
>> >> the system would have just been CC'd out of existence.
>> >
>> > Nope. ConEx congestion policers would be run by the network operator
>> > and effectively take over congestion control if hosts fail to do it
>> > for a sustained period.
>>
>> where are these run? at the BRAS? at the Peering-Edge? on the CPE?
>>
>> > Yes, you could get bugs in ConEx policers. But it's unlikely (though
>> > not impossible) that would happen at the same time as a widespread
>> > bug in hosts. Safety in diversity.
>> >
>> > Yes, the ConEx marking process on the hosts might be bugged. But, as
>> > with DoS protection, if a host or router is being flooded and has to
>> > drop stuff, it is easy to preferentially drop unmarked ConEx packets
>> > (or non-ConEx packets) first.
>>
>> sure.
>>
>> > Agreed, this takes a leap-of-faith to believe a network might deploy
>> > all this, but it only has to deploy its own protections for itself,
>> > which is a good deployment property.
>>
>> where are the protections deployed, cost and maintenance will be
>> important to the deployment path...
>>
>> >> > "
>> >> >
>> >> > 5.2d: Monitoring congestion anomalies
>> >> > "  One of the most useful things ConEx provides is the ability to
>> >> >   monitor the amount of congestion entering a network. Thus ConEx
>> >> >   would add congestion to the information used by existing anomaly
>> >> >   detection systems, greatly improving their ability to
>> >> >   discriminate between pathological and benign anomalies. Such
>> >> >   congestion-carrying anomalies might be due to accidental
>> >> >   misconfigurations in another network, or deliberate malicious
>> >> >   attacks.
>> >
>> > I must also add that re-ECN is only handling the flooding aspect of
>> > an attack, not the /information/ in the packets (the RST flag in your
>> > example). Of course a DDoS protection system needs to take this
>> > information into account.
>> >
>> > What 5.2d says is that all ConEx aims to do is add rest-of-path
>> > congestion to that information, which is a powerful addition to the
>> > network's visibility. Particularly because info in the payload need
>> > not be visible to the network at all.
>>
>> ok, what 'rest of path'? Is this per-asn "congestion" or per-hop
>> "congestion" information? how many bits (which bits?) in the ip-header
>> are going to carry this information?
>>
>> >
>> >> I'm on board with letting folks know there is congestion, I'm not
>> >> convinced making decisions based on this is simple/helpful, yet.
>> >> Adding this information to packets, provided I don't need new
>> >> silicon/cpu-cycles/state to do this doesn't strike me as hard,
>> >
>> > Good
>> >
>> >> I can imagine that most backbone providers won't care and probably
>> >> won't implement the marking, but... I could be wrong.
>> >
>> > No network has to implement anything if they don't want to. It still
>> > works for others who do. I also don't imagine backbone providers will
>> > care about this.
>> >
>> > It's easier for a backbone to throw capacity at the problem when
>> > they're running a few large links rather than loads of smaller links.
>> > So the smaller links around the network edge are always going to
>> > bottleneck congestion on behalf of cores and backbones.
>> >
>> >
>> >> Marking something purely 'congested path' or not, though, isn't very
>> >> helpful: there are (almost always) many hops along a path and many
>> >> paths a packet/flow may take. At a single point in time one part of
>> >> a path may be congested, but there isn't any guarantee that part
>> >> will be affected even by the 1RTT time required to do something.
>> >> (last-mile problems aside, nothing fixes a 56k modem except... more
>> >> bandwidth)
>> >
>> > I think you might have misunderstood. ConEx isn't intended to change
>> > who does congestion control. Hosts still do that on fast timescales.
>>
>> part of this confusion is that the wording (even in this thread)
>> points to networks doing the congestion control. That's one reason I
>> put the horrid ascii art at the top :) "Where does this congestion
>> control happen?"
>>
>> > ConEx is merely intended to allow the network to count up how much
>> > congestion is still contributed to by hosts, so the network can judge
>> > how effectively the host is doing congestion control. That can
>> > ultimately be used to take over control (via a network-based
>> > congestion policer) if the host is persistently being profligate
>> > (whether through selfishness, malice or accident).
>>
>> ok, but is the host causing problems just for itself, or for all users
>> on a shared link? If it's all users on a shared link, discarding
>> packets only helps if the end-host obeys the drop effects. In the end,
>> if you operate a large shared-media last mile (wireless network) the
>> end stations still must compete for last-mile access. If everyone is
>> well behaved then of course more marking data and more polite
>> machines/users will permit better utilization of the link(s).
>>
>> In the case of a wireless network (forgetting the cases where the
>> "network" can un-enroll/force-disconnect an end-station/user)
>> loud/impolite end-users can still cause problems for their shared
>> network brethren; there's nothing that can stop that, sadly... so
>> maybe we shouldn't rat-hole here either. For most last-mile cases the
>> goal ought to be to carry more good traffic on a link, reducing
>> retransmits and other overhead. ConEx should, I think, be able to
>> signal the end-user/station that it can do better; it should also help
>> the network (BRAS or peering-edge maybe) decide what traffic to prefer
>> to discard in times of stress.
>>
>> >
>> >> Being able to use information about congestion along a path to more
>> >> efficiently use the available bandwidth is a fine plan; fewer
>> >> retransmits is a good plan.
>> >>
>> >> >   ConEx provides the additional benefit that it exposes congestion
>> >> >   information as packets enter a network, not only at the point of
>> >> >   congestion. Therefore it could be feasible for anomaly detection
>> >>
>> >> ok, this is a point I need clarification on. Where is the congestion
>> >> it exposes?
>> >>   'there is congestion'
>> >>      or
>> >>   'at hop 12 there was congestion, you are at hop 22, fyi' (honestly
>> >> it'd be better to tell me 'AS 3 is congested' I think)
>> >
>> > Re-ECN deliberately doesn't tell anyone exactly where the congestion
>> > is - that would reveal too much. Whatever your viewing point, it only
>> > tells you how much congestion there is upstream of you and how much
>> > downstream.
>> >
>> > Rather than re-write all that's ever been written on ConEx in one
>> > email, can I point you at a section of a paper to get this?:
>> >         Section 4.2 of
>> >         "Using Self-interest to Prevent Malice;
>> >         Fixing the Denial of Service Flaw of the Internet"
>> >         <http://www.bobbriscoe.net/projects/refb/index.html#refb_dplinc>
>>
>> I'll go have a read.
>>
>> > This is the only paper where the full re-ECN protocol is described,
>> > but in outline form, to save you having to wade through the detailed
>> > protocol spec. It's also the only paper about re-ECN and DDoS (other
>> > than my later PhD thesis).
>> >
>> > Plenty of other papers describe the two main codepoints of the
>> > protocol:
>> > - Expected whole path congestion, W [inserted by the sender and
>> >   immutable]
>> > - (ECN) congestion experienced upstream so far, U [marks added by
>> >   routers]
>> > These are enough to answer your question above: a monitor at any
>> > point on the path can meter both W & U, so it can calculate expected
>> > downstream congestion on the rest of the path as well: D = W - U.
>> >
>> > But only the above DDoS paper explains the third and last part of the
>> > protocol (initial credit) which enables re-ECN to work without any
>> > feedback at all - you need that for the DDoS case.
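The W/U/D relationship above can be sketched as a few lines of monitor logic (the function name and the choice to express W and U as mark fractions are illustrative; the real codepoint encoding is in the cited paper):

```python
def downstream_congestion(w_marked: int, u_marked: int, total: int) -> float:
    """Expected rest-of-path congestion D = W - U, as mark fractions.

    w_marked: packets the sender marked for expected whole-path congestion
    u_marked: packets ECN-marked by routers upstream of this monitor
    total:    all packets this monitor observed
    """
    w = w_marked / total  # whole-path congestion declared by the sender
    u = u_marked / total  # congestion already experienced upstream
    return w - u          # what the monitor expects downstream of itself

# e.g. sender declares 2% whole-path congestion, 0.5% seen upstream so far:
print(round(downstream_congestion(20, 5, 1000), 6))  # 0.015
```

This is why a mid-path monitor needs no feedback channel: both W and U are readable in the forward direction, so D falls out by subtraction.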
>> >
>> >
>> >> If the information as traffic enters my network is 'somewhere there
>> >> is congestion!' I'm not sure I can do much aside from buffer (not
>> >> going to happen for very long, buffers are expensive) or WRED/drop
>> >> packets. If the information is that at a router-hop 10 hops back
>> >> there was congestion I may be able to either WRED traffic to that
>> >> destination or maybe shuttle packets on slightly longer paths if
>> >> they are available (though that seems overly complex as a solution,
>> >> to me).
>> >
>> > See above: We're not expecting the network to do fine-grained
>> > congestion control per flow - that's for the host to decide. We're
>> > wanting to look at the sum of all the congestion that traffic from a
>> > site (household, campus) is contributing to; everything on the
>> > Internet side of its access boundary.
>>
>> 'access boundary' == BRAS?
>>
>> > Then at the next network border, the pair of networks at that border
>> > can see how much congestion is each side of that border - for all
>> > traffic, wherever it is destined. And so on.
>>
>> ok, clearly this is the Peering-Edge, fine.
>>
>> > That's all you need - necessary and sufficient - to regulate
>> > profligate users (or bugged, or malicious) and to regulate networks
>> > that allow their users to be profligate (or bugged or malicious).
>>
>> 'allow their users' is a little pejorative... I may have a network
>> with 500k users all with 1g links (full duplex). You may have a 1G
>> link to me; if your customer turns up a 'popular' site (say the
>> victoria-secret fashion show live stream, for a canonical example)
>> there will be issues between our networks. The only path to resolution
>> is to force traffic onto alternate paths or upgrade links between
>> these 2 networks, or to just choose to drop traffic and force 'bad
>> service' on 2 parts of the population. VictoriaSecret happened to pick
>> the last option in 2000/2001 (I forget the right year).
>>
>> I suppose with better marking/exposure of the problem one/both sides
>> could have chosen to drop other traffic, that may have improved
>> things... pathological cases aside though, I still say that better use
>> of the network by end-stations is good. Better info about what to drop
>> first is also good.
>>
>> >> >   systems to use ConEx information to detect and shut down
>> >> >   dangerous floods of congestion at the point where traffic enters
>> >> >   a network.
>> >> > "
>> >> >
>> >> > Do you think circumspect wording about DDoS like the above would
>> >> > still trigger an allergic reaction from some readers? Would this
>> >> > sort of text allay your concerns? Or are you adamant that there
>> >> > should be no mention of DDoS at all?
>> >>
>> >> it's too easy to rathole on ddos, there are far too many ways to
>> >> create problems with it, and today it's not really that much of a
>> >> problem. There are tools in existence today to deal with the vast
>> >> majority of ddos problems; it really isn't a huge problem provided
>> >> you prepare and understand the threat(s).
>> >>
>> >> I suppose in summary: I'm not adamant about it one way or the other,
>> >> but I can see that it'll end up distracting from your actual
>> >> point/topic and thus cost you cycles you could have spent explaining
>> >> why conex/cc isn't just gussied up 'longdistance toll settlement for
>> >> IP' (for instance, though I think conex itself just marking packets
>> >> doesn't really fall into the quoted phrase).
>> >
>> > Yes, I understand you're trying to help us avoid interminable
>> > arguments.
>>
>> yes, though this thread wasn't my best effort :(
>>
>> > I guess what I'm saying is: We don't just change IP for chuckles. The
>> > job of ConEx is to limit congestion caused by profligacy, malice and
>> > accidents. If it can only do that against people who don't try too
>> > hard to push back, we should not spend our precious time on ConEx.
>>
>> or rephrased a tad: "If we can help make the utilization of the
>> network more optimal.." (by letting the network and users of the
>> network decide which packets to discard first)
>>
>> > IOW, we /should/ have some level of argument about whether ConEx can
>> > achieve what it aims to achieve. Just not too much at this early
>> > stage, when we haven't even defined the ConEx protocol (as opposed to
>> > the re-ECN protocol).
>>
>> ok.
>>
>> -chris
>>
>> > Bob
>> >
>> >
>> >> -chris
>> >>
>> >> > Bob
>> >> >
>> >> >
>> >> >
>> >> > ________________________________________________________________
>> >> > Bob Briscoe,                              BT Innovate & Design
>> >> >
>> >
>> > ________________________________________________________________
>> > Bob Briscoe,                              BT Innovate & Design
>> >
>> _______________________________________________
>> conex mailing list
>> conex@ietf.org
>> https://www.ietf.org/mailman/listinfo/conex
>

From christopher.morrow@gmail.com  Tue Aug 17 22:10:57 2010
Date: Wed, 18 Aug 2010 01:11:26 -0400
From: Christopher Morrow <morrowc.lists@gmail.com>
To: Kevin Mason <Kevin.Mason@telecom.co.nz>
Cc: "conex@ietf.org" <conex@ietf.org>
Subject: Re: [conex] ConEx & DDoS
Message-ID: <AANLkTinRDU7FTkuMFW_--NqJ3WOQgF1sUfBOFFX4PpJU@mail.gmail.com>
In-Reply-To: <AANLkTindQbgE-pCwwaETVng1sN-VcGYfpga8iaTx8irk@mail.gmail.com>

On Tue, Aug 17, 2010 at 10:08 PM, Christopher Morrow
<morrowc.lists@gmail.com> wrote:
> On Tue, Aug 17, 2010 at 8:03 PM, Kevin Mason <Kevin.Mason@telecom.co.nz> wrote:
>>
>>
>> Cheers
>> Kevin Mason
>>
>>
>>> -----Original Message-----
>>> From: conex-bounces@ietf.org [mailto:conex-bounces@ietf.org] On Behalf Of
>>> Christopher Morrow
>>> Sent: Wednesday, 18 August 2010 6:18 a.m.
>>> To: Bob Briscoe
>>> Cc: conex@ietf.org
>>> Subject: Re: [conex] ConEx & DDoS
>>>
>>> it would help (me at least) in this discussion if we think of a
>>> notional network like:
>>>
>>
>> [Kevin Mason] This would be helpful, but I suggest we change the labels
>> and add a core on the other side to make it a little more generic.
>> There may be some debate about the actual labels.
>
> seems fine to me... in fact I'll stick a not-ascii-art picture up
> tonight, my ascii-art-foo is horrid :(

<http://docs.as701.net/conex/conex-net-example.png>

with the dot, in case you wanted to make it better/clearer/move-it-elsewhere:

<http://docs.as701.net/conex/conex-net.dot>

-Chris

>
>>>
>>>             receiver
>>>                |
>>>               cpe
>>>                |
>>          Customer policy point (e.g. BRAS, GGSN, DOCSIS equivalent)
>>>             /     \
>>>        coreR1    coreR2   (inside a single ASN)
>>>             \     /
>>>         peer/transit edge (exit/entrance to another ASN)
>>              /     \
>>         coreS1    coreS2
>>              \     /
>>          Customer policy point (e.g. PE, another BRAS for peer to peer)
>>>                |
>>>               CE
>>>                |
>>>             sender
>>> (apologies for bad ascii art)
>>>
>>> On Fri, Aug 6, 2010 at 11:51 AM, Bob Briscoe <rbriscoe@jungle.bt.co.uk>
>>> wrote:
>>> > Chris,
>>> >
>>> > At 04:37 04/08/2010, Christopher Morrow wrote:
>>> >>
>>> >> (I think I sub'd to the list with this address...)
>>> >>
>>> >> On Tue, Aug 3, 2010 at 3:42 PM, Bob Briscoe <rbriscoe@jungle.bt.co.uk>
>>> >> wrote:
>>> >> > Chris,
>>> >> >
>>> >> > During the ConEx w-g session last Tuesday in Maastricht you suggested we
>>> >> > should not include DDoS mitigation as a use-case for ConEx. I was
>>> >> > willing to
>>> >> > agree as we don't need to court controversy.
>>> >>
>>> >> yup, no use rat-holing if it's not central to the discussion.
>>>
>>> oops, we rat-holed... this sort of proves my point about ddos
>>> discussions wrt conex though.
>>> I would stick with (for conex): "Expose information about congestion
>>> on the network path, permit better/more-intelligent discard profiles
>>> to be used along the network paths."
>>>
>>> Where 'better/more-intelligent' is really: "Drop traffic that's less
>>> important to the end-users."
>>>
>>> Lee/Rich I think explained this (to me) as: "make sure my Internet
>>> gaming works perfectly, it is OK to slow down a large video file
>>> transfer." I believe plus-net in the UK does some of this today? (I
>>> may have the wrong provider... but it's a competitor of BT's I
>>> believe)
>>>
>>> >> > However, the co-authors of draft-moncaster-conex-concepts-uses have
>>> >> > asked me
>>> >> > to float the idea that, although we won't include DDoS as a use-case in
>>> >> > its own right, we should mention it as an extreme case of two other
>>> >> > use-cases. Let me explain...
>>> >> >
>>> >> > The use-cases we plan to include are:
>>> >> >
>>> /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
>>> >> > 5. Use cases (Highlighting that this is neither an exhaustive list nor
>>> >> > a prescriptive list...)
>>> >> >  5.1  ConEx for better traffic Control
>>> >> >  a. Targeting the right traffic
>>> >> >  b. Encouraging (and eventually enforcing) better CC
>>> >> >  5.2 ConEx for better traffic monitoring
>>> >> >  a. For compliance with SLAs
>>> >> >  b. For assessing performance of your provider
>>> >> >  c. Monitoring congestion hotspots for targeted upgrades
>>> >> >  d. Monitoring congestion anomalies (Equipment problems or helping
>>> >> >     identify DDoS)
>>> >> >
>>> /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
>>> >> >
>>> >> > We could include mentions of DDoS along the following lines (no need to
>>> >> > word-smith - I'm just trying to outline the concepts).
>>> >> >
>>> >> > 5.1b Encouraging (and eventually enforcing) better CC
>>> >> > "  ConEx information can be used as a control metric for making
>>> >> >   traffic control decisions, such as deciding which traffic to
>>> >> >   prioritise or to identify and block sources of persistent and
>>> >> >   damaging congestion.
>>> >>
>>> >> this, I think, falls into the category of things that Lee from TWC is
>>> >> interested in (and maybe his buddies at comcast/cox as well),
>>> >> understanding what traffic can suffer more loss without causing more
>>> >> end-user pain, and shifting that traffic to said profile.
>>> >
>>> > It's no coincidence that Rich Woundy is a co-author.
>>> >
>>> > Sure, Comcast have a current solution, but Rich is the first to say that
>>> > it gives no encouragement to (and actually punishes) approaches like
>>> > LEDBAT, because it attributes blame for high utilisation by volume rather
>>> > than congestion-volume. There's a list of other things Rich presented in
>>> > the ConEx BoF that makes ConEx worth doing beyond what Comcast currently do.
>>> > <http://www.ietf.org/proceedings/76/slides/conex-3.pdf>
>>> >
>>> > Whatever, I only included this text as a lead-up to the DDoS text
>>> later...
>>>
>>> better understanding of what contributes to congestion in 'your
>>> network' and what traffic your customers don't mind missing a few
>>> packets of (at the benefit of their real-time needs) is good.
>>>
>>> >
>>> >> >   Simple ingress policer mechanisms, such as those described in
>>> >> >   [Policing-freedom] and [re-ecn-motive], could control the
>>> >> >   overall volume of congestion entering a network from each user.
>>> >> >   Such a policer could lead to a number of beneficial outcomes:
>>> >>
>>> >> these seem to be of the flavor of things Comcast 'powerboost' does, at
>>> >> the CPE actually (if I understand their brand of magic correctly).
>>> >
>>> > No, powerboost is very different.
>>> >
>>> > [BTW, I and others did a start-up called Qariba back in 2001 which built
>>> > a powerboost-like feature for cable networks, with a network API so it
>>> > could be initiated from a CDN - we called it Broadband-800 by analogy to
>>> > 800 phone calls, because the server end was effectively temporarily buying
>>> > more access capacity on the end-user's behalf. We had a lot of interest in
>>> > the US cable industry at the time, but we got hit by the bubble bursting.]
>>> >
>>> > Powerboost gives you access to extra capacity irrespective of whether you
>>> > will contribute more to congestion. ConEx is much simpler and more generic.
>>>
>>> so... 'PowerBoost' provides higher QOS/traffic-rate for a period of
>>> time, it doesn't account for what damage that change has to other
>>> traffic on my cable-link, the link to the head-end or the other links
>>> in the network... it can be triggered at the CPE or the BRAS (I'd bet
>>> that anywhere else in the Comcast network, to take a specific example,
>>> probably doesn't matter).
>>>
>>> I think that's the same thing I said earlier.
>>>
>>> >> >   o Heavy users might be encouraged to shift their usage away from
>>> >> >     peak times in order to be able to transfer more data without
>>> >> >     triggering a response from the policer;
>>> >> >   o Users might be encouraged to use software that shifts its usage
>>> >> >     away from congestion peaks (shifting in time whether by hours or
>>> >> >     seconds [LEDBAT] or shifting to less congested routes [MPTCP]),
>>> >> >     again to transfer more data without triggering the policer;
>>> >> >   o Developers of operating systems might be encouraged to supply such
>>> >> >     software as the default;
>>> >> >   o If certain applications did not use a congestion responsive
>>> >> >     transport and caused high levels of congestion-bit-rate, the
>>> >> >     policer would eventually force the bit-rate to reduce in response
>>> >> >     to congestion.
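The last bullet describes, in effect, a token bucket measured over congestion-volume rather than plain volume. A minimal sketch of such a policer, assuming illustrative parameters (the 30kb/s sustained rate and roughly-a-minute burst at 10Mb/s come from figures later in this thread; the class and parameter names are invented for illustration, not from any draft):

```python
# Hypothetical congestion-volume policer: a token bucket that refills at a
# sustained congestion-bit-rate allowance and is drained by the size of each
# congestion-marked packet a user sends. Parameters are illustrative.

class CongestionPolicer:
    def __init__(self, fill_rate_bps=30_000, bucket_depth_bits=600_000_000):
        self.fill_rate = fill_rate_bps    # sustained allowance, bits/s
        self.depth = bucket_depth_bits    # burst allowance (~1 min at 10Mb/s)
        self.tokens = bucket_depth_bits   # bucket starts full
        self.last = 0.0                   # time of last marked packet seen

    def allow(self, now, marked_bits):
        """Charge one congestion-marked packet; False means 'police this user'."""
        elapsed = now - self.last
        self.tokens = min(self.depth, self.tokens + elapsed * self.fill_rate)
        self.last = now
        if marked_bits <= self.tokens:
            self.tokens -= marked_bits
            return True
        return False
```

A source whose marked traffic persistently exceeds the sustained allowance empties the bucket and starts being policed, which is the "eventually force the bit-rate to reduce" behaviour the bullet describes.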
>>> >>
>>> >> Today this is (the last bullet) the equivalent of classifying a type
>>> >> of traffic (port/protocol/src/dst as classification hints) into a
>>> >> 'high loss' bucket/queue and just dropping faster/more of it. If the
>>> >> traffic is TCP you should get drops, backoff, sawtooth behaviour until
>>> >> you reach steady-state (maybe?) slower transfers.
>>> >>
>>> >> >   It is believed that a system of ConEx policers could be built that
>>> >> >   can verify the integrity of ConEx information and remove traffic
>>> >> >   that does not comply with the protocol [re-ecn-motive]. If such
>>> >> >   robustness against cheating is indeed possible, a ConEx policer
>>> >> >   would mitigate DDoS flooding attacks, at least to some extent,
>>> >> >   merely as a function of its ability to enforce a response to
>>> >> >   persistent and excessive congestion.
>>> >>
>>> >> So, there are surely cases of 'ddos' (or DoS) that include loud
>>> >> speakers... There are also many instances of DDoS that are many
>>> >> hundreds of thousands of (or millions) of very quiet voices that in
>>> >> total cause the DoS/DDoS effect.
>>> >>
>>> >> If you look at a system of marking of congestion information,
>>> >> depending upon where that marking happens, and on the traffic in
>>> >> question, there's no guarantee at all that the sources will be
>>> >> squelched.
>>> >
>>> > In the following I'm going to talk about re-ECN, rather than ConEx,
>>> because
>>> > we haven't defined ConEx yet...
>>> >
>>> >
>>> >> For example, imagine a DDoS of 1 RST pkt (could also do this with 1
>>> >> icmp-error-type message) per second from 1 million hosts across the
>>> >> whole of the network (a 'botnet' for instance). There will never be
>>> >> any packet sent back to the originators,
>>> >
>>> > (BTW, re-ECN doesn't need any packet sent back to the originators - you
>>> > might have misunderstood the design.)
>>> >
>>> >> the rate will be low enough
>>> >> that unless there is coincident traffic to the victim from these hosts
>>> >> (in the same protocol/port profile probably) no signal will ever be
>>> >> seen that leads to squelching of traffic.
>>> >
>>> > Yes, of course we've thought of this sort of attack. Even if we hadn't
>>> > thought of this, since 2005 the security/DoS community have been thinking
>>> > up attacks against re-ECN (pre-ConEx), so it would have been hard to miss
>>> > such an obvious one.
>>> >
>>> > If each bot was behind a 100Mb/s link, and everyday congestion levels
>>> > were typically 0.2%, a reasonable config of basic re-ECN policer (I can give
>>>
>>> where is this congestion level? on the 100mbps link? or somewhere in
>>> the local ASN? or external to that ASN?
>>> Looking at the ascii-network-art at the top:
>>>
>>> user -> cpe?
>>> cpe -> bras?
>>> bras -> core?
>>> core -> peering-edge?
>>> other-net -> other-net ? (ran out of space/time on the art, call this
>>> 'middle of the Internet')
>>> PE -> CE ?
>>> CE -> server?
>>>
>>> where the congestion is will determine what sort of measures can be
>>> applied today. In the future though if the end-stations know there is
>>> congestion they could choose to modify their behavior to better
>>> utilize the network...
>>>
>>> > typical numbers if you want) would allow a congestion-bit-rate of 10Mb/s
>>> > of *marked* packets for a minute, then 30kb/s of sustained marked traffic
>>> > after that.
>>> >
>>> > If your attack were a flooding attack with MTU size data packets
>>> > (12,000b), rather than an RST attack (which can be detected and ignored),
>>> > each bot
>>>
>>> the 'and ignored' part is a little misleading since 1 rst/second at
>>> the source isn't noticeable, 1m rst/second at the victim is another
>>> story. There will be effects on the network at the victim end,
>>> 'ignore' here is an oversimplification. (don't think it's worthwhile
>>> rat-holing here)
>>>
>>> > would have to congestion mark all the packets to have any flooding
>>> > effect. If each bot could draw congestion from its policer allowance at
>>> > 30kb/s (see above), a basic re-ECN policer would allow it to sustain an
>>> > attack at 1 pkt every 0.4sec - a little more than twice as fast as your
>>> > attack.
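For what it's worth, the 0.4 s figure follows directly from the two numbers quoted above; a quick worked check (figures taken from the thread, nothing else assumed):

```python
# Figures from the thread: 30kb/s sustained marked-traffic allowance per bot,
# and MTU-sized data packets of 12,000 bits.
allowance_bps = 30_000
pkt_bits = 12_000

pkts_per_sec = allowance_bps / pkt_bits   # 2.5 marked packets per second
interval_s = 1 / pkts_per_sec             # one marked packet every 0.4 s
```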
>>> >
>>> > Agreeing on the 'largest' botnet ever seen isn't easy, but the Mariposa
>>> > botnet dismantled earlier this year involved ~12M separate IP addresses.
>>> > No-one can know whether that implied 12M separate machines, but I doubt
>>> > it - source address spoofing could have been used. This is relevant
>>> > because a
>>>
>>> the numbers for mariposa I think were from C&C check-ins, so these
>>> required 3-way handshakes and weren't spoofed. I believe there's some
>>> rotation in the address usage (dhcp-like effects) but in the end 1m or
>>> 12m isn't really important. (also not important to rat-hole here)
>>>
>>> > re-ECN policer would sit at the physical access - nothing in re-ECN
>>> > depends on valid source addresses. If each machine was spoofing 24 other
>>> > addresses (a total guess), that's about 500,000 real machines.
>>> >
>>> > Let's assume it gets much harder to marshal a larger army than the one
>>> > said to be the largest yet. Then, a basic re-ECN deployment would contain
>>> > an attack to about 30kb/s x 500,000 = 15Gb/s (which is similar to your
>>> > scenario of 12kb/s x 1M = 12Gb/s).
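The aggregate figures are just products of the numbers already quoted; checking both scenarios from the message:

```python
# Bob's scenario: 500,000 real machines, each allowed 30kb/s of marked traffic.
bob_total_bps = 500_000 * 30_000        # 15 Gb/s
# Chris's scenario restated by Bob: 1M bots at 12kb/s (one 12,000b pkt/s each).
chris_total_bps = 1_000_000 * 12_000    # 12 Gb/s
```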
>>>
>>> it's getting easier to marshal more hosts, not harder, again though
>>> not important to rathole.
>>>
>>> > In summary, just a basic re-ECN policer that isn't even looking for DoS
>>> > can
>>>
>>> where is the policer deployed?
>>>
>>> o if at the source-host I still say it's hard to tell 1pps from
>>> another, if there is no congestion between src and BRAS there's no
>>> reason to mark anything there.
>>>
>>> o if at the destination/victim, then policing traffic to the victim
>>> can be done if the traffic is significantly different from normal
>>> traffic. Effects on legitimate traffic will be hard to avoid if the
>>> attack traffic looks enough like real/legitimate traffic.
>>>
>>> I don't think this is really even important enough to rathole on...
>>>
>>> > raise the bar to require a botnet about 3,000 times bigger than without
>>> > re-ECN [because that's the ratio of the peak traffic each site can send
>>> > (100Mb/s) divided by the averaged rate of losses in normal traffic
>>> > (30kb/s)].
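And the "about 3,000 times" factor is exactly the ratio given in the brackets above:

```python
peak_bps = 100_000_000    # peak traffic each site can send (100Mb/s)
marked_bps = 30_000       # averaged rate of losses in normal traffic (30kb/s)
ratio = peak_bps / marked_bps   # ~3,333x, rounded in the text to 'about 3,000'
```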
>>> >
>>> > Yes, this isn't particularly impressive. But it's not meant to be. This
>>> > is just what you get without even trying to deal with DDoS specifically.
>>> >
>>> > i) The most obvious thing to add to a basic re-ECN policer would be a
>>> > simple anomaly detector triggered by packets to one destination prefix
>>> > with >10% of them marked for expected congestion. That could trigger a
>>> > much more stringent limit just for that destination prefix.
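To make the state question concrete: the detector in (i) only needs two counters per destination prefix. A rough sketch, assuming byte counters and the 10% threshold stated above (class and method names are invented for illustration):

```python
# Hypothetical per-prefix anomaly trigger: flag a destination prefix when more
# than 10% of the bytes sent towards it carry the expected-congestion marking.
from collections import defaultdict

THRESHOLD = 0.10  # fraction of marked bytes, from the text above

class PrefixAnomalyDetector:
    def __init__(self):
        self.total_bits = defaultdict(int)
        self.marked_bits = defaultdict(int)

    def observe(self, prefix, size_bits, expected_congestion_marked):
        """Account one packet towards its destination prefix."""
        self.total_bits[prefix] += size_bits
        if expected_congestion_marked:
            self.marked_bits[prefix] += size_bits

    def suspicious(self, prefix):
        """True if this prefix should get a more stringent congestion limit."""
        total = self.total_bits[prefix]
        return total > 0 and self.marked_bits[prefix] / total > THRESHOLD
```

This doesn't settle the opex/capex question raised below, but it does show the per-prefix state is two integers, not a DPI engine.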
>>>
>>> so, more state on the router? or is this another inline device doing
>>> the work? Factor in cost in opex/capex to implement/maintain this...
>>> at Verizon, studies/eng work showed ~$500k/BRAS would be required to
>>> implement in-line 'dpi' systems on 1G links; today those links are
>>> likely 10G and I don't think the cost has reduced. At $500k/BRAS I
>>> really should just turn up another 20G of capacity to the BRAS.
>>> Downstream from the BRAS I am limited by physics/last-mile issues so
>>> we're back to the top of the thread: "If you can mark/know what
>>> traffic is more relevant to real-time ops at the customer-end then you
>>> can make a better decision about what packets to drop"
>>>
>>> >
>>> > ii) If the victim network deployed ConEx monitors around its border,
>>> > attack traffic would really stand out obviously because nearly all
>>> > attack packets would be marked for rest-of-path congestion. These
>>> > monitors could then trigger similar tight limits on congestion-bit-rate
>>> > to that destination.
>>>
>>> this is costly, very costly, see above. It's not always obvious which
>>> traffic is contributing to the actual problem, the traffic most likely
>>> won't be marked until I mark it at ingress (at the peering edge),
>>> which means I need to know what to mark, which the victim
>>> host/customer-network will have to work out, or have me help them work
>>> out.
>>>
>>> > iii) I've assumed a homogeneous botnet. In practice many would be on
>>> > slower access links, and many would be within larger sites (e.g. campus
>>> > nets) sharing an aggregated congestion allowance at the access from the
>>> > campus to the Internet. I think my rough calculation is closer to 'worst
>>> > case' than 'typical'.
>>>
>>> perhaps, not worth rat-holing though.
>>>
>>> >
>>> >> The victim still suffers
>>> >> ~1mpps of traffic, and actually conex/CC just makes the problem far
>>> >> worse for the victim as all of his real-user traffic will be marked
>>> >> (over time) and thus squelched out while the DDoS continues to flow
>>> >> unabated.
>>> >
>>> > Nope. Importantly, normal /unsustained/ traffic to that destination
>>> > would take a while (1 minute in my example) to hit the tighter congestion
>>> > policer limits. So regular usage would be reasonably unaffected. Whereas
>>> > the bots have to sustain the attack to be effective.
>>>
>>> botnets sustain attacks, it's what they do (among sending spam or
>>> other things, not relevant to this discussion). Having sat through
>>> many multi-gigabit per second attacks over the last many years...
>>> sustaining a dos/ddos isn't really a problem for the attacker. User
>>> traffic is always affected in these cases, if the traffic from the
>>> attack looks like the normal customer traffic. If there's a method to
>>> differentiate attack from customer traffic (rst packet, icmp packet)
>>> the upstream can just drop the odd-traffic. They don't need anything
>>> except the tools they have today to do this, and maintenance of more
>>> state isn't a help.
>>>
>>> Unless the victim can signal the upstream that 'traffic of this type'
>>> or 'packets fitting this profile' should be marked for discard/drop
>>> before other packets ... there's no real win. This, though, does
>>> resemble the start of the thread: "Tell me what traffic is important
>>> to your real-time ops, I'll apply a drop profile to all other traffic
>>> when congestion occurs."
>>>
>>> >
>>> >> >
>>> >> >   It would be foolhardy to claim that it will be possible to make
>>> >> >   ConEx invulnerable to all cheats and attacks, even though it has
>>> >> >   been hard to attack it so far. Nonetheless, ConEx would still be
>>> >> >   useful even if a specific deployment of ConEx policers contained
>>> >> >   vulnerabilities. ConEx could still prevent congestion collapse due
>>> >> >   to careless omission of congestion control, or due to release of
>>> >> >   software containing an accidental congestion control bug.
>>> >>
>>> >> I think that I nullified the above paragraph actually... except for a
>>> >> software bug case, though I'd argue that in the case of the UW NTP
>>> >> server incident ConEx/CC wouldn't have helped there either as the
>>> >> broken software would still have been broken and the real-users of the
>>> >> system would have just been CC'd out of existence.
>>> >
>>> > Nope. ConEx congestion policers would be run by the network operator and
>>> > effectively take over congestion control if hosts fail to do it for a
>>> > sustained period.
>>>
>>> where are these run? at the BRAS? at the Peering-Edge? on the CPE?
>>>
>>> > Yes, you could get bugs in ConEx policers. But it's unlikely (though not
>>> > impossible) that would happen at the same time as a widespread bug in
>>> > hosts.
>>> > Safety in diversity.
>>> >
>>> > Yes, the ConEx marking process on the hosts might be bugged. But, as
>>> > with DoS protection, if a host or router is being flooded and has to
>>> > drop stuff, it is easy to preferentially drop unmarked ConEx packets (or
>>> > non-ConEx packets) first.
>>>
>>> sure.
>>>
>>> > Agreed, this takes a leap-of-faith to believe a network might deploy all
>>> > this, but it only has to deploy its own protections for itself, which is
>>> > a good deployment property.
>>>
>>> where are the protections deployed, cost and maintenance will be
>>> important to the deployment path...
>>>
>>> >> > "
>>> >> >
>>> >> > 5.2d: Monitoring congestion anomalies
>>> >> > "  One of the most useful things ConEx provides is the ability to
>>> >> >   monitor the amount of congestion entering a network. Thus ConEx
>>> >> >   would add congestion to the information used by existing anomaly
>>> >> >   detection systems, thus greatly improving their ability to
>>> >> >   discriminate between pathological and benign anomalies. Such
>>> >> >   congestion-carrying anomalies might be due to accidental
>>> >> >   misconfigurations in another network, or deliberate malicious
>>> >> >   attacks.
>>> >
>>> > I must also add that re-ECN is only handling the flooding aspect of an
>>> > attack, not the /information/ in the packets (the RST flag in your
>>> > example). Of course a DDoS protection system needs to take this
>>> > information into account.
>>> >
>>> > What 5.2d says is that all ConEx aims to do is add rest-of-path
>>> > congestion to that information, which is a powerful addition to the
>>> > network's visibility. Particularly because info in the payload need not
>>> > be visible to the network at all.
>>>
>>> ok, what 'rest of path'? Is this per-asn "congestion" or per-hop
>>> "congestion" information? how many bits (which bits?) in the ip-header
>>> are going to carry this information?
>>>
>>> >
>>> >> I'm on board with letting folks know there is congestion, I'm not
>>> >> convinced making decisions based on this is simple/helpful, yet.
>>> >> Adding this information to packets, provided I don't need new
>>> >> silicon/cpu-cycles/state to do this doesn't strike me as hard,
>>> >
>>> > Good
>>> >
>>> >> I can
>>> >> imagine that most backbone providers won't care and probably won't
>>> >> implement the marking, but... I could be wrong.
>>> >
>>> > No network has to implement anything if they don't want to. It still
>>> > works for others who do. I also don't imagine backbone providers will
>>> > care about this.
>>> >
>>> > It's easier for a backbone to throw capacity at the problem when they're
>>> > running a few large links rather than loads of smaller links. So the
>>> > smaller links around the network edge are always going to bottleneck
>>> > congestion on behalf of cores and backbones.
>>> >
>>> >
>>> >> Marking something purely 'congested path' or no though isn't very
>>> >> helpful (there are almost always) many hops along a path and many
>>> >> paths a packet/flow may take. At a single point in time one part of a
>>> >> path may be congested, but there isn't any guarantee that part will be
>>> >> affected even by the 1RTT time required to do something. (last-mile
>>> >> problems aside, nothing fixes a 56k modem except... more bandwidth)
>>> >
>>> > I think you might have misunderstood. ConEx isn't intended to change who
>>> > does congestion control. Hosts still do that on fast timescales.
>>>
>>> part of this confusion is that the wording (even in this thread)
>>> points to networks doing the congestion control. That's one reason I
>>> put the horrid ascii art at the top :) "Where does this congestion
>>> control happen?"
>>>
>>> > ConEx is merely intended to allow the network to count up how much
>>> > congestion is still contributed to by hosts, so the network can judge
>>> > how effectively the host is doing congestion control. That can
>>> > ultimately be used to take over control (via a network-based congestion
>>> > policer) if the host is persistently being profligate (whether through
>>> > selfishness, malice or accident).
>>>
>>> ok, but is the host causing just itself problems? or all users on a
>>> shared link? If it's all users on a shared link, discarding packets only
>>> helps if the end-host obeys the drop effects. In the end, if you
>>> operate a large shared-media last mile (wireless network) ... the end
>>> stations still must compete for last-mile access. If everyone is well
>>> behaved then of course more marking data and more polite
>>> machines/users will permit better utilization of the link(s).
>>>
>>> In the case of a wireless network (forgetting the cases where the
>>> "network" can un-enroll/force-disconnect an end-station/user)
>>> loud/impolite end-users can still cause problems for their shared
>>> network brethren; there's nothing that can stop that, sadly... so
>>> maybe we shouldn't rat-hole here either. For most last-mile cases the
>>> goal ought to be to carry more good traffic on a link, and reduce
>>> retransmits and other overhead. ConEx should, I think, be able to
>>> signal the end-user/station that it can do better; it should also help
>>> the network (BRAS or peering-edge maybe) decide what traffic to prefer
>>> to discard in times of stress.
>>>
>>> >
>>> >> Being able to use information about congestion along a path to more
>>> >> efficiently use the available bandwidth is a fine plan; fewer
>>> >> retransmits is a good plan.
>>> >>
>>> >> >   ConEx provides the additional benefit that it exposes congestion
>>> >> >   information as packets enter a network, not only at the point of
>>> >> >   congestion. Therefore it could be feasible for anomaly detection
>>> >>
>>> >> ok, this is a point I need clarification on. Where is the congestion
>>> >> it exposes?
>>> >>  'there is congestion'
>>> >>     or
>>> >>  'at hop 12 there was congestion, you are at hop 22, fyi' (honestly
>>> >> it'd be better to tell me 'AS 3 is congested' I think)
>>> >
>>> > Re-ECN deliberately doesn't tell anyone exactly where the congestion is -
>>> > that would reveal too much. Whatever your viewing point, it only tells
>>> > you how much congestion there is upstream of you and how much downstream.
>>> >
>>> > Rather than re-write all that's ever been written on ConEx in one email,
>>> > can I point you at a section of a paper to get this?:
>>> >        Section 4.2 of
>>> >        "Using Self-interest to Prevent Malice;
>>> >        Fixing the Denial of Service Flaw of the Internet"
>>> >        <http://www.bobbriscoe.net/projects/refb/index.html#refb_dplinc>
>>>
>>> I'll go have a read.
>>>
>>> > This is the only paper where the full re-ECN protocol is described, but
>>> > in outline form, to save you having to wade through the detailed
>>> > protocol spec. It's also the only paper about re-ECN and DDoS (other
>>> > than my later PhD thesis).
>>> >
>>> > Plenty of other papers describe the two main codepoints of the protocol:
>>> > - Expected whole path congestion, W [inserted by the sender and immutable]
>>> > - (ECN) congestion experienced upstream so far, U [marks added by routers]
>>> > These are enough to answer your question above: a monitor at any point
>>> > on the path can meter both W & U, so it can calculate expected
>>> > downstream congestion on the rest-of-the-path as well: D = W - U.
>>> >
>>> > But only the above DDoS paper explains the third and last part of the
>>> > protocol (initial credit) which enables re-ECN to work without any
>>> > feedback at all - you need that for the DDoS case.
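Putting the two codepoints together, the monitor calculation described above can be sketched as follows. This simplifies the protocol to marked-byte counters; in the real protocol W and U are carried as codepoints in the IP header, and the function name here is invented for illustration:

```python
def rest_of_path_congestion(w_marked, u_marked, total):
    """Given the bytes carrying the sender's whole-path marking (W) and the
    bytes carrying upstream ECN congestion marks (U), out of 'total' bytes
    metered at this point on the path, return the fractions (W, U, D) where
    D = W - U is the expected congestion downstream of this monitor."""
    w = w_marked / total
    u = u_marked / total
    return w, u, w - u
```

For example, a sender declaring 1% whole-path congestion whose traffic has picked up 0.4% upstream marks by the time it passes the monitor implies roughly 0.6% congestion still expected on the rest of the path.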
>>> >
>>> >
>>> >> If the information as traffic enters my network is 'somewhere there is
>>> >> congestion!' I'm not sure I can do much aside from buffer (not going
>>> >> to happen for very long buffers are expensive) or WRED/drop packets.
>>> >> If the information is that at a router-hop 10 hops back there was
>>> >> congestion I may be able to either WRED traffic to that destination or
>>> >> maybe shuttle packets on slightly longer paths if they are available
>>> >> (though that seems overly complex as a solution, to me).
>>> >
>>> > See above: We're not expecting the network to do fine-grained congestion
>>> > control per flow - that's for the host to decide. We're wanting to look
>>> > at the sum of all the congestion that traffic from a site (household,
>>> > campus) is contributing to; everything on the Internet side of its
>>> > access boundary.
>>>
>>> 'access boundary' == BRAS?
>>>
>>> > Then at the next network border, the pair of networks at that border can
>>> > see how much congestion is each side of that border - for all traffic,
>>> > wherever it is destined. And so on.
>>>
>>> ok, clearly this is the Peering-Edge, fine.
>>>
>>> > That's all you need - necessary and sufficient - to regulate profligate
>>> > users (or bugged, or malicious) and to regulate networks that allow
>>> > their users to be profligate (or bugged or malicious).
>>>
>>> 'allow their users' is a little pejorative... I may have a network
>>> with 500k users all with 1G links (full duplex). You may have a 1G
>>> link to me; if your customer turns up a 'popular' site (say the
>>> Victoria's Secret fashion show live stream, for a canonical example)...
>>> there will be issues between our networks. The only path to resolution
>>> is to force traffic onto alternate paths or upgrade links between these
>>> 2 networks, or to just choose to drop traffic and force 'bad service'
>>> on 2 parts of the population. Victoria's Secret happened to pick the
>>> last option in 2000/2001 (I forget the right year).
>>>
>>> I suppose with better marking/exposure of the problem one/both sides
>>> could have chosen to drop other traffic, that may have improved
>>> things... pathological cases aside though, I still say that better use
>>> of the network by end-stations is good. Better info about what to drop
>>> first is also good.
>>>
>>> >> >   systems to use ConEx information to detect and shut down dangerous
>>> >> >   floods of congestion at the point where traffic enters a network.
>>> >> > "
>>> >> >
>>> >> > Do you think circumspect wording about DDoS like the above would still
>>> >> > trigger an allergic reaction from some readers? Would this sort of text
>>> >> > allay your concerns? Or are you adamant that there should be no mention
>>> >> > of DDoS at all?
>>> >>
>>> >> it's too easy to rathole on ddos, there are far too many ways to
>>> >> create problems with it, and today it's not really that much of a
>>> >> problem. There are tools in existence today to deal with the vast
>>> >> majority of ddos problems, it really isn't a huge problem provided you
>>> >> prepare and understand the threat(s).
>>> >>
>>> >> I suppose in summary: I'm not adamant about it one way or the other,
>>> >> but I can see that it'll end up distracting from your actual
>>> >> point/topic and thus cost you cycles you could have spent explaining
>>> >> why conex/cc isn't just gussied up 'longdistance toll settlement for
>>> >> IP' (for instance, though I think conex itself just marking packets
>>> >> doesn't really fall into the quoted phrase).
>>> >
>>> > Yes, I understand you're trying to help us avoid interminable arguments.
>>>
>>> yes, though this thread wasn't my best effort :(
>>>
>>> > I guess what I'm saying is: We don't just change IP for chuckles. The
>>> > job of ConEx is to limit congestion caused by profligacy, malice and
>>> > accidents. If it can only do that against people who don't try too hard
>>> > to push back, we should not spend our precious time on ConEx.
>>>
>>> or rephrased a tad: "If we can help make the utilization of the
>>> network more optimal.." (by letting the network and users of the
>>> network decide which packets to discard first)
>>>
>>> > IOW, we /should/ have some level of argument about whether ConEx can
>>> > achieve what it aims to achieve. Just not too much at this early stage,
>>> > when we haven't even defined the ConEx protocol (as opposed to the
>>> > re-ECN protocol).
>>>
>>> ok.
>>>
>>> -chris
>>>
>>> > Bob
>>> >
>>> >
>>> >> -chris
>>> >>
>>> >> > Bob
>>> >> >
>>> >> >
>>> >> >
>>> >> > ________________________________________________________________
>>> >> > Bob Briscoe,                                    BT Innovate & Design
>>> >> >
>>> >
>>> > ________________________________________________________________
>>> > Bob Briscoe,                                    BT Innovate & Design
>>> >
>>> _______________________________________________
>>> conex mailing list
>>> conex@ietf.org
>>> https://www.ietf.org/mailman/listinfo/conex
>>
>

From toby@moncaster.com  Wed Aug 18 02:35:25 2010
Return-Path: <toby@moncaster.com>
From: "Toby Moncaster" <toby@moncaster.com>
To: "'Kevin Mason'" <Kevin.Mason@telecom.co.nz>, "'Woundy, Richard'" <Richard_Woundy@cable.comcast.com>
References: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se><20100812123814.GF16820@verdi>	<001b01cb3a2b$80f47420$82dd5c60$@com><alpine.DEB.1.10.1008130933210.8562@uplift.swm.pp.se>	<563C162F43D1B14E9FD2BC0A776C1E9127EF2857D3@WNEXMBX01.telecom.tcnz.net>	<EE00404438E9444D90AEA84210DC4067019325EA@pacdcexcmb05.cable.comcast.com> <563C162F43D1B14E9FD2BC0A776C1E9127EF3A66C7@WNEXMBX01.telecom.tcnz.net>
In-Reply-To: <563C162F43D1B14E9FD2BC0A776C1E9127EF3A66C7@WNEXMBX01.telecom.tcnz.net>
Date: Wed, 18 Aug 2010 10:34:47 +0100
Message-ID: <001201cb3eb8$9992cb30$ccb86190$@com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Office Outlook 12.0
Thread-Index: Acs6u1E/9V3Zkbv3SxC1bLGI/IQHzQDBJ+SwABw2zZAADGMdcAAVBKkQ
Content-Language: en-gb
X-Provags-ID: V02:K0:PDKRkx6s1+kSbk19EJynd25jbCFo8p4TQC0AJud+1Kz cz9XFjv8ni58iuaiLG/wnKmzurMPpv/crnLa4EtztU60pBsJoh qPTerI7Z6FNdGjDQMLP54spYCYEW5CTzYSVvYlnRJ6sLEwucR2 0nnsvmAjnPgr9hJS3XzJWc5WsxH2/QU6eJBbRGYEDCcF/WZvC1 25s8VS1ITZS8bKdS+4CcIN5GA7Qg15cRMnWuEwsAa8=
Cc: conex@ietf.org
Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 18 Aug 2010 09:35:25 -0000

Inline...

Toby

> -----Original Message-----
> From: conex-bounces@ietf.org [mailto:conex-bounces@ietf.org] On Behalf
> Of Kevin Mason
> Sent: 18 August 2010 00:41
> To: Woundy, Richard
> Cc: conex@ietf.org
> Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-
> 01.txt
> 
> 
> 
> Cheers
> Kevin Mason
> 
> > -----Original Message-----
> > From: Woundy, Richard [mailto:Richard_Woundy@cable.comcast.com]
> > Sent: Wednesday, 18 August 2010 7:17 a.m.
> > To: Kevin Mason; conex@ietf.org
> > Subject: RE: [conex] comments on draft-moncaster-conex-concepts-uses-
> > 01.txt
> >
> > >>I would challenge the view that another service provider "causes"
> > congestion on another ISP's network. Data is only passed between
> > sender and receiver so it is the ISP's customer that is requesting
> > the data from customers of the other provider, not the provider
> > "sending it".
> >
> > Kevin, you make a good point (customer pulls rather than provider
> > pushes), but I don't think that is the entire story.
> >
> > Consider a hypothetical example (an *extreme* case to be sure, but
> > not totally disconnected from reality) in which a service provider
> > changes its routing policy in the following manner: from forwarding
> > traffic somewhat equally over 10 interconnects to a downstream ISP,
> > to forwarding all traffic over a single interconnect to the
> > downstream ISP (such as changing policy to "hot potato routing" from
> > a central hosting center to the downstream ISP). That change in
> > policy is very likely to cause a lot of congestion over the single
> > interconnect link, even as the overall consumer behavior doesn't
> > change at all.
> 
> [Kevin Mason] The culpability for congestion on the interconnecting hop
> is very dependent on who dimensions it and the commercials that
> underpin it. If the "sending" provider dimensions it then congestion on
> this hop is solely the accountability of the sending provider; no need
> for congestion exposure here, as they can directly measure it today
> (queuing and discards). If the receiving provider dimensions it then
> they have no current visibility of the congestion on the preceding
> link, but congestion exposure will potentially only tell them that
> congestion has already been experienced, not where. So if there is any
> SLA around performance of the interconnection hop then the receiving
> party still has to get info from the sending provider to ascertain that
> it is the interconnecting hop that is the problem and not a hop deeper
> in the sending provider's network.
> 
> Capturing the information for network management purposes at the
> interprovider level may well be very useful for overall network
> planning purposes if practical, but using it to underpin payment
> between providers is very different.

I obviously bow to the greater knowledge of Kevin and Rich in this, but it
seems to me there are scenarios where ConEx information may be a sensible
basis for settlements (perhaps not with money changing hands, but for
instance with shared backbones such as that used by the UK academic
community, ja.net)...

> 
> However I do not think we need to get too hung up on this; the point is
> that it is debatable who "causes" the congestion in an interprovider
> context, and therefore getting agreement on who might "pay" for it has
> the potential only to enrich the legal industry. I am however in favour
> of using congestion information for accounting purposes at an individual
> ISP customer account level to recognise and reward cooperative end user
> behaviour (e.g. congestion caps).

OK, this is in danger of becoming highly philosophical! Let's take 3 common
scenarios:

1) A user accesses video content via a CDN
2) A user uploads photos to facebook
3) A user does a web search and visits a link from google

In all these scenarios it is debatable who CAUSES any congestion this
traffic encounters. In 1) it could be said to be the user (he wanted to
watch the video) or it could be the CDN (they sent the actual traffic) or it
could be the owner of the content (they presumably gain in some manner from
people watching that content). In case 2) the user is clearly responsible,
but it might be argued that facebook also gains, because it depends on
having users doing this sort of thing as its business model... In 3) it
could be the user, it could be google (they get paid for the click), it
could be the final site from which the data came, etc. But in all cases
there is an argument that the upstream forwarding nodes are also to some
extent responsible - they aren't in any way responsible for the content, nor
are they responsible for it arriving at their inbound interface, but they
are responsible for any routing decisions they make which might have a bad
impact on downstream networks. However I accept Kevin's core point that we
should avoid apportioning blame in our descriptions...

Incidentally, it would be interesting to know to what extent routing
decisions by one network have a knock-on effect on the traffic and
congestion in downstream networks... This seems to be a key bit of
information that ConEx can provide that is currently missing.

Oh, and BTW, I personally don't believe ConEx is going to be able to be used
for traffic engineering on fast timescales - the congestion simply varies
too quickly over packet timescales (or it does with window-based
controllers, does anyone know if this is also true for controllers such as
Cubic?)
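
As a toy illustration of this point (all numbers here are invented, not
measured): a window-based sender tends to produce congestion marks in
bursts, so the instantaneous per-packet signal swings between 0 and 1 even
when the long-run marking rate is stable. Smoothing with an EWMA recovers
the rate, but only over a settling time of many packets, which is why
fast-timescale traffic engineering on this signal looks doubtful:

```python
def ewma(samples, gain=0.01):
    """Exponentially weighted moving average of 0/1 congestion marks."""
    avg, out = 0.0, []
    for s in samples:
        avg += gain * (s - avg)
        out.append(avg)
    return out

# Bursty marking: 5 marked packets in every 50 (a stable 10% rate),
# arriving back-to-back as a window-based controller might produce them.
marks = ([1] * 5 + [0] * 45) * 20

smoothed = ewma(marks)

# The raw per-packet signal swings over its full range...
print(min(marks), max(marks))  # 0 1
# ...while the smoothed signal stays in a narrow band near the true
# rate, but only after settling for roughly 1/gain = 100 packets.
print(round(min(smoothed[500:]), 3), round(max(smoothed[500:]), 3))
```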

> 
> 
> To my mind the power of ConEx is to provide forwarding devices a
> richer set of information to make better decisions on how to manage
> their queues for the greater good.

YES! Or perhaps less evangelically, ConEx provides information so they know
what impact their decisions are having downstream (after all there may be
operators that want to hinder rather than help).

> 
> >
> > There are a lot less extreme, real-world examples of routing policy
> > changes that would have a similar network impact.

To say nothing of fast re-routes, etc

> >
> > -- Rich
> >
> > -----Original Message-----
> > From: conex-bounces@ietf.org [mailto:conex-bounces@ietf.org] On
> Behalf
> > Of Kevin Mason
> > Sent: Tuesday, August 17, 2010 1:35 AM
> > To: conex@ietf.org
> > Subject: [conex] comments on draft-moncaster-conex-concepts-uses-
> 01.txt
> >
> > A comment on the draft.
> >
> > On accounting approaches to using congestion information
> >
> > I would challenge the view that another service provider "causes"
> > congestion on another ISP's network. Data is only passed between
> > sender and receiver so it is the ISP's customer that is requesting
> > the data from customers of the other provider, not the provider
> > "sending it".
> >
> > So if the "sending" provider is not causing the congestion (because
> > it is the receiving provider's customer that requested it) then
> > arguing the "sending" provider should pay for any congestion that
> > resulted might be difficult. I can see endless legal arguments as to
> > why one or the other party is culpable and therefore who should
> > endure any commercial consequences.
> >
> > On network uses of the information I think there is a general concept
> > that is not being captured well.
> >
> > In ISP networks there are, very simply, two parts. Firstly there is
> > the connectivity between each account holder's demarcation (UNI) and
> > an IP edge device (BRAS/BNG in Broadband Forum speak). The IP edge
> > device typically facilitates AAA functions as well as user-based
> > policy enforcement, and by necessity is fully aware of which flows
> > belong to which UNIs (e.g. because they are all on a single
> > authenticated VLAN or PPP tunnel).
> >
> > Beyond that point, between the IP edge and any peering point, the
> > network does not maintain any specific awareness of individual end
> > points. Routers could theoretically maintain information about each
> > source/destination pair AND consult some database to relate that to
> > an end-user profile, but this is not very scalable.
> >
> > So in the core ISP network, if a forwarding next hop is approaching
> > overload, then the egress config of the router must deal with the
> > aggregate flow and act on information that is in the packet header
> > alone. Maintaining knowledge of which "users" have "caused" the most
> > congestion in recent times is too hard.
> >
> > So it would appear that a two-stage queue management regime might be
> > desirable, whereby at a lower queue size packets begin to be
> > congestion-marked if ECN capable and maybe discarded if not, but at a
> > slightly higher queue depth, packets that are ECN capable but carry a
> > high positive congestion value get discarded, on the basis that these
> > have a lower chance of reaching their destination than packets not
> > declaring an expectation of congestion on the rest of the path. I
> > don't see a "border monitor" as described in the draft being very
> > practical or useful in this part of the network.
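
[The two-stage regime described above could be sketched, very roughly, as
the following decision function. This is purely illustrative: the
thresholds are invented, and "conex_credit" is a stand-in for whatever
congestion declaration a ConEx-capable packet would actually carry.]

```python
# Hypothetical two-stage queue management sketch; all names and
# thresholds are invented for illustration only.

MARK_THRESHOLD = 50   # queue depth (packets) at which stage 1 kicks in
DROP_THRESHOLD = 80   # deeper queue depth at which stage 2 kicks in
HIGH_CONEX = 3        # a "high positive congestion value"

def enqueue_action(queue_depth, ecn_capable, conex_credit):
    """Return 'forward', 'mark', or 'drop' for an arriving packet."""
    if queue_depth >= DROP_THRESHOLD and ecn_capable and conex_credit >= HIGH_CONEX:
        # Stage 2: packets already declaring a lot of expected congestion
        # are judged least likely to survive the rest of the path.
        return "drop"
    if queue_depth >= MARK_THRESHOLD:
        # Stage 1: congestion-mark ECN-capable traffic, discard the rest.
        return "mark" if ecn_capable else "drop"
    return "forward"

print(enqueue_action(10, True, 0))   # forward
print(enqueue_action(60, True, 0))   # mark
print(enqueue_action(60, False, 0))  # drop
print(enqueue_action(90, True, 5))   # drop
```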
> >
> > When packets arrive at the BRAS destined for the user, then
> > user-based policy could be applied. One such policy might be to
> > discard all packets with a congestion deficit of more than x. This is
> > the safety net against dishonesty by the sender. An additional policy
> > might be to discard packets that have experienced congestion above a
> > threshold (which may be different for different user profiles) so far
> > AND that are destined to a user that has a recent history of heavily
> > congestion-marked packets. If previous congestion marks have resulted
> > in the user backing off then this policy would not be invoked, so it
> > would only apply to users that are persistently contributing to
> > congestion somewhere on the path traversed (on the same provider's or
> > any preceding provider's network).
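
[The per-user edge policies described above might look roughly like this.
Again purely illustrative: "congestion_deficit", the thresholds, and the
per-user mark history are placeholder names, not anything the draft
defines.]

```python
# Hypothetical sketch of per-user policy at the IP edge (BRAS);
# every name and threshold here is an invented placeholder.

DEFICIT_LIMIT = 10     # max tolerated deficit: safety net against dishonesty
CONGESTION_LIMIT = 5   # per-packet congestion-experienced threshold
HISTORY_LIMIT = 100    # recent marks before a user counts as "persistent"

recent_marks = {}      # user -> count of recently congestion-marked packets

def edge_policy(user, congestion_deficit, congestion_experienced):
    """Decide whether to deliver a packet arriving at the IP edge."""
    if congestion_deficit > DEFICIT_LIMIT:
        # Sender has declared less congestion than was actually experienced.
        return "drop"
    marks = recent_marks.get(user, 0)
    if congestion_experienced > 0:
        recent_marks[user] = marks + 1
    if congestion_experienced > CONGESTION_LIMIT and marks > HISTORY_LIMIT:
        # Heavily marked packet AND a user who has not backed off recently.
        return "drop"
    return "deliver"

print(edge_policy("alice", 20, 0))  # drop (deficit safety net)
print(edge_policy("bob", 0, 2))     # deliver (marked, but no history yet)
```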
> >
> > These policy actions could constitute the "edge monitor" functions
> > referred to in the draft but would actually be part of the policy
> > functions of the edge device itself, not any independent function.
> >
> > Others may have different views of how the revealed congestion
> > information might be used, but I believe it is useful to at least
> > consider the two parts of an ISP network when discussing possible
> > uses for the information.
> >
> > Cheers
> > Kevin Mason
> > _______________________________________________
> > conex mailing list
> > conex@ietf.org
> > https://www.ietf.org/mailman/listinfo/conex
> _______________________________________________
> conex mailing list
> conex@ietf.org
> https://www.ietf.org/mailman/listinfo/conex


From christopher.morrow@gmail.com  Wed Aug 18 06:53:01 2010
Return-Path: <christopher.morrow@gmail.com>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id B22B53A6893 for <conex@core3.amsl.com>; Wed, 18 Aug 2010 06:53:01 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -101.622
X-Spam-Level: 
X-Spam-Status: No, score=-101.622 tagged_above=-999 required=5 tests=[AWL=0.977, BAYES_00=-2.599, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id pAFoB8hpVKSB for <conex@core3.amsl.com>; Wed, 18 Aug 2010 06:52:59 -0700 (PDT)
Received: from mail-yw0-f44.google.com (mail-yw0-f44.google.com [209.85.213.44]) by core3.amsl.com (Postfix) with ESMTP id 72D233A6954 for <conex@ietf.org>; Wed, 18 Aug 2010 06:52:59 -0700 (PDT)
Received: by ywi4 with SMTP id 4so72052ywi.31 for <conex@ietf.org>; Wed, 18 Aug 2010 06:53:34 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:sender:received :in-reply-to:references:date:x-google-sender-auth:message-id:subject :from:to:cc:content-type:content-transfer-encoding; bh=+rkiAd+/zAzVARlrXyAXFoGY3iquo/YYfxfefRFNycQ=; b=HygbwSR26Dk/esi+b/Qqv5I4TzJp7MA7CxU5z8XQFHLJzKRkWvP1zAuMWNf4xgmH4X 5hR89SoiwlnFS8EW2kRpXxl576Gfj4cb8JEdZbcQVZ3XpI2tOU0vOFqVEyoMEjdW0QOC isHfAoeJWTgHxgWAIoFQ00dBonCgGJqJgyH7Y=
DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type :content-transfer-encoding; b=EZ6ugfqAzVh80InCS0f9cbnvAGRbX7YUxAfHRvYQHkARAqeunouj3Y5N0kSCXgXXti YsSvYDjWttEL1+VWW94d2Avec2c6wYpDv38Qd6ziL4Vvs8CeVheTKscRNdUvsWvrE7hu Hw/hetkjIFApmBBxsF8RDV6MXYWuBnaodFGDY=
MIME-Version: 1.0
Received: by 10.101.133.12 with SMTP id k12mr9503106ann.27.1282139614247; Wed, 18 Aug 2010 06:53:34 -0700 (PDT)
Sender: christopher.morrow@gmail.com
Received: by 10.231.156.203 with HTTP; Wed, 18 Aug 2010 06:53:34 -0700 (PDT)
In-Reply-To: <001201cb3eb8$9992cb30$ccb86190$@com>
References: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se> <20100812123814.GF16820@verdi> <001b01cb3a2b$80f47420$82dd5c60$@com> <alpine.DEB.1.10.1008130933210.8562@uplift.swm.pp.se> <563C162F43D1B14E9FD2BC0A776C1E9127EF2857D3@WNEXMBX01.telecom.tcnz.net> <EE00404438E9444D90AEA84210DC4067019325EA@pacdcexcmb05.cable.comcast.com> <563C162F43D1B14E9FD2BC0A776C1E9127EF3A66C7@WNEXMBX01.telecom.tcnz.net> <001201cb3eb8$9992cb30$ccb86190$@com>
Date: Wed, 18 Aug 2010 09:53:34 -0400
X-Google-Sender-Auth: Hzc5FMnvBMznD1xH1Ct4CgtgfZE
Message-ID: <AANLkTi=24o44ACGFgN2N4_xt+Bo1rydC6gvR_et-8XVG@mail.gmail.com>
From: Christopher Morrow <morrowc.lists@gmail.com>
To: Toby Moncaster <toby@moncaster.com>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable
Cc: Kevin Mason <Kevin.Mason@telecom.co.nz>, "Woundy, Richard" <Richard_Woundy@cable.comcast.com>, conex@ietf.org
Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 18 Aug 2010 13:53:01 -0000

On Wed, Aug 18, 2010 at 5:34 AM, Toby Moncaster <toby@moncaster.com> wrote:
> Inline...
>
> Toby
>
>> -----Original Message-----
>> From: conex-bounces@ietf.org [mailto:conex-bounces@ietf.org] On Behalf
>> Of Kevin Mason
>> Sent: 18 August 2010 00:41
>> To: Woundy, Richard
>> Cc: conex@ietf.org
>> Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-
>> 01.txt
>>
>>
>>
>> Cheers
>> Kevin Mason
>>
>> > -----Original Message-----
>> > From: Woundy, Richard [mailto:Richard_Woundy@cable.comcast.com]
>> > Sent: Wednesday, 18 August 2010 7:17 a.m.
>> > To: Kevin Mason; conex@ietf.org
>> > Subject: RE: [conex] comments on draft-moncaster-conex-concepts-uses-
>> > 01.txt
>> >
>> > >>I would challenge the view that another service provider "causes"
>> > congestion on another ISP's network. Data is only passed between
>> > sender and receiver so it is the ISP's customer that is requesting
>> > the data from customers of the other provider, not the provider
>> > "sending it".
>> >
>> > Kevin, you make a good point (customer pulls rather than provider
>> > pushes), but I don't think that is the entire story.
>> >
>> > Consider a hypothetical example (an *extreme* case to be sure, but
>> > not totally disconnected from reality) in which a service provider
>> > changes its routing policy in the following manner: from forwarding
>> > traffic somewhat equally over 10 interconnects to a downstream ISP,
>> > to forwarding all traffic over a single interconnect to the
>> > downstream ISP (such as changing policy to "hot potato routing" from
>> > a central hosting center to the downstream ISP). That change in
>> > policy is very likely to cause a lot of congestion over the single
>> > interconnect link, even as the overall consumer behavior doesn't
>> > change at all.
>>
>> [Kevin Mason] The culpability for congestion on the interconnecting hop
>> is very dependent on who dimensions it and the commercials that
>> underpin it. If the "sending" provider dimensions it then congestion on
>> this hop is solely the accountability of the sending provider; no need
>> for congestion exposure here, as they can directly measure it today
>> (queuing and discards). If the receiving provider dimensions it then
>> they have no current visibility of the congestion on the preceding
>> link, but congestion exposure will potentially only tell them that
>> congestion has already been experienced, not where. So if there is any
>> SLA around performance of the interconnection hop then the receiving
>> party still has to get info from the sending provider to ascertain that
>> it is the interconnecting hop that is the problem and not a hop deeper
>> in the sending provider's network.
>>
>> Capturing the information for network management purposes at the
>> interprovider level may well be very useful for overall network
>> planning purposes if practical, but using it to underpin payment
>> between providers is very different.
>
> I obviously bow to the greater knowledge of Kevin and Rich in this, but it
> seems to me there are scenarios where ConEx information may be a sensible
> basis for settlements (perhaps not with money changing hands, but for
> instance with shared backbones such as that used by the UK academic
> community, ja.net)...

on the discussion of settlements ... the only place I think that makes
sense is in determining if/when a relationship should change, from
'customer' to 'peer' or vice-versa.

In the case of purely 'customer' relationships (which won't change,
for instance 'dsl customer') today most folks are just charged for the
connection. if you propose to convert them to a form of billing based
on congestion you'll have to find a story that doesn't end up just
confusing the customer, I think. confused customers ==
call-center-questions :(

I also bet that the congestion here is going to be mostly on the last
mile link; customers may be interested in changing their behavior to
better utilize their bw, but it's not clear that there's a cost benefit
for the customer since no amount of money is going to change their
last-mile problems.

>
>>
>> However I do not think we need to get too hung up on this; the point is
>> that it is debatable who "causes" the congestion in an interprovider
>> context, and therefore getting agreement on who might "pay" for it has
>> the potential only to enrich the legal industry. I am however in favour
>> of using congestion information for accounting purposes at an individual
>> ISP customer account level to recognise and reward cooperative end user
>> behaviour (e.g. congestion caps).
>
> OK, this is in danger of becoming highly philosophical! Let's take 3 common
> scenarios:
>
> 1) A user accesses video content via a CDN
> 2) A user uploads photos to facebook
> 3) A user does a web search and visits a link from google
>
> In all these scenarios it is debatable who CAUSES any congestion this
> traffic encounters. In 1) it could be said to be the user (he wanted to
> watch the video) or it could be the CDN (they sent the actual traffic) or it
> could be the owner of the content (they presumably gain in some manner from
> people watching that content). In case 2) the user is clearly responsible,
> but it might be argued that facebook also gains, because it depends on
> having users doing this sort of thing as its business model... In 3) it
> could be the user, it could be google (they get paid for the click), it
> could be the final site from which the data came, etc. But in all cases
> there is an argument that the upstream forwarding nodes are also to some
> extent responsible - they aren't in any way responsible for the content, nor
> are they responsible for it arriving at their inbound interface, but they
> are responsible for any routing decisions they make which might have a bad
> impact on downstream networks. However I accept Kevin's core point that we
> should avoid apportioning blame in our descriptions...

yes

> Incidentally, it would be interesting to know to what extent routing
> decisions by one network have a knock-on effect on the traffic and
> congestion in downstream networks... This seems to be a key bit of
> information that ConEx can provide that is currently missing.

this is readily apparent today... there are folks that use these
'tricks' (decisions) in order to affect the outcomes of peering
agreements, actually. it's really not that hard to figure out. I think
Rich's earlier example (10 links with only one carrying traffic) is
really an example of a mis-configuration, not an intentional
configuration.

> Oh, and BTW, I personally don't believe ConEx is going to be able to be used
> for traffic engineering on fast timescales - the congestion simply varies
> too quickly over packet timescales (or it does with window-based
> controllers, does anyone know if this is also true for controllers such as
> Cubic?)

I agree with this (no TE from conex).

>> To my mind the power of ConEx is to provide forwarding devices a
>> richer set of information to make better decisions on how to manage
>> their queues for the greater good.

perhaps, though we do this today with QOS markings, or by
classification of traffic based on 5-tuple info that is in turn
converted to QOS markings and actions. Adding another few bits to
watch for classification marking seems fine.

-Chris

> YES! Or perhaps less evangelically, ConEx provides information so they know
> what impact their decisions are having downstream (after all there may be
> operators that want to hinder rather than help).
>
>>
>> >
>> > There are a lot less extreme, real-world examples of routing policy
>> > changes that would have a similar network impact.
>
> To say nothing of fast re-routes, etc
>
>> >
>> > -- Rich
>> >
>> > -----Original Message-----
>> > From: conex-bounces@ietf.org [mailto:conex-bounces@ietf.org] On
>> Behalf
>> > Of Kevin Mason
>> > Sent: Tuesday, August 17, 2010 1:35 AM
>> > To: conex@ietf.org
>> > Subject: [conex] comments on draft-moncaster-conex-concepts-uses-
>> 01.txt
>> >
>> > A comment on the draft.
>> >
>> > On accounting approaches to using congestion information
>> >
>> > I would challenge the view that another service provider "causes"
>> > congestion on another ISP's network. Data is only passed between
>> > sender and receiver so it is the ISP's customer that is requesting
>> > the data from customers of the other provider, not the provider
>> > "sending it".
>> >
>> > So if the "sending" provider is not causing the congestion (because
>> > it is the receiving provider's customer that requested it) then
>> > arguing the "sending" provider should pay for any congestion that
>> > resulted might be difficult. I can see endless legal arguments as to
>> > why one or the other party is culpable and therefore who should
>> > endure any commercial consequences.
>> >
>> > On network uses of the information I think there is a general concept
>> > that is not being captured well.
>> >
>> > In ISP networks there are, very simply, two parts. Firstly there is
>> > the connectivity between each account holder's demarcation (UNI) and
>> > an IP edge device (BRAS/BNG in Broadband Forum speak). The IP edge
>> > device typically facilitates AAA functions as well as user-based
>> > policy enforcement, and by necessity is fully aware of which flows
>> > belong to which UNIs (e.g. because they are all on a single
>> > authenticated VLAN or PPP tunnel).
>> >
>> > Beyond that point, between the IP edge and any peering point, the
>> > network does not maintain any specific awareness of individual end
>> > points. Routers could theoretically maintain information about each
>> > source/destination pair AND consult some database to relate that to
>> > an end-user profile, but this is not very scalable.
>> >
>> > So in the core ISP network, if a forwarding next hop is approaching
>> > overload, then the egress config of the router must deal with the
>> > aggregate flow and act on information that is in the packet header
>> > alone. Maintaining knowledge of which "users" have "caused" the most
>> > congestion in recent times is too hard.
>> >
>> > So it would appear that a two-stage queue management regime might be
>> > desirable, whereby at a lower queue size packets begin to be
>> > congestion-marked if ECN capable and maybe discarded if not, but at
>> > a slightly higher queue depth, packets that are ECN capable but
>> > carry a high positive congestion value get discarded, on the basis
>> > that these have a lower chance of reaching their destination than
>> > packets not declaring an expectation of congestion on the rest of
>> > the path. I don't see a "border monitor" as described in the draft
>> > being very practical or useful in this part of the network.
>> >
>> > When packets arrive at the BRAS destined for the user, then
>> > user-based policy could be applied. One such policy might be to
>> > discard all packets with a congestion deficit of more than x. This
>> > is the safety net against dishonesty by the sender. An additional
>> > policy might be to discard packets that have experienced congestion
>> > above a threshold (which may be different for different user
>> > profiles) so far AND that are destined to a user that has a recent
>> > history of heavily congestion-marked packets. If previous congestion
>> > marks have resulted in the user backing off then this policy would
>> > not be invoked, so it would only apply to users that are
>> > persistently contributing to congestion somewhere on the path
>> > traversed (on the same provider's or any preceding provider's
>> > network).
>> >
>> > These policy actions could constitute the "edge monitor" functions
>> > referred to in the draft but would actually be part of the policy
>> > functions of the edge device itself, not any independent function.
>> >
>> > Others may have different views of how the revealed congestion
>> > information might be used, but I believe it is useful to at least
>> > consider the two parts of an ISP network when discussing possible
>> > uses for the information.
>> >
>> > Cheers
>> > Kevin Mason
>> > _______________________________________________
>> > conex mailing list
>> > conex@ietf.org
>> > https://www.ietf.org/mailman/listinfo/conex
>> _______________________________________________
>> conex mailing list
>> conex@ietf.org
>> https://www.ietf.org/mailman/listinfo/conex
>
> _______________________________________________
> conex mailing list
> conex@ietf.org
> https://www.ietf.org/mailman/listinfo/conex
>

From john@jlc.net  Wed Aug 18 07:07:59 2010
Return-Path: <john@jlc.net>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id C76473A696E for <conex@core3.amsl.com>; Wed, 18 Aug 2010 07:07:59 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -105.688
X-Spam-Level: 
X-Spam-Status: No, score=-105.688 tagged_above=-999 required=5 tests=[AWL=0.911, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 1xY9uIHdAdo8 for <conex@core3.amsl.com>; Wed, 18 Aug 2010 07:07:58 -0700 (PDT)
Received: from mailhost.jlc.net (mailhost.jlc.net [199.201.159.4]) by core3.amsl.com (Postfix) with ESMTP id 9B94A3A6956 for <conex@ietf.org>; Wed, 18 Aug 2010 07:07:58 -0700 (PDT)
Received: by mailhost.jlc.net (Postfix, from userid 104) id F0A4133C4C; Wed, 18 Aug 2010 10:08:33 -0400 (EDT)
Date: Wed, 18 Aug 2010 10:08:33 -0400
From: John Leslie <john@jlc.net>
To: Kevin Mason <Kevin.Mason@telecom.co.nz>
Message-ID: <20100818140833.GU16820@verdi>
References: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se> <20100812123814.GF16820@verdi> <001b01cb3a2b$80f47420$82dd5c60$@com> <alpine.DEB.1.10.1008130933210.8562@uplift.swm.pp.se> <563C162F43D1B14E9FD2BC0A776C1E9127EF2857D3@WNEXMBX01.telecom.tcnz.net>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <563C162F43D1B14E9FD2BC0A776C1E9127EF2857D3@WNEXMBX01.telecom.tcnz.net>
User-Agent: Mutt/1.4.1i
Cc: "conex@ietf.org" <conex@ietf.org>
Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 18 Aug 2010 14:07:59 -0000

Kevin Mason <Kevin.Mason@telecom.co.nz> wrote:
> 
> I would challenge the view that another service provider "causes"
> congestion on another ISP's network.

   I expect "causes congestion" to disappear from the next version.

> Data is only passed between sender and receiver so it is the ISP's
> customer that is requesting the data from customers of the other
> provider, not the provider "sending it".

   This model doesn't actually work unless the requesting customer has
a way to know how much data s/he is "requesting", which is not the case
for the typical "click".

   Senders, OTOH, always know how much they're sending.

   But, in practice, neither end knows how fast it can be sent without
congestion being experienced -- until the sender actually tries sending
it. We can imagine a world in which the receiver offers advice on how
fast to send -- and perhaps that "should" be built into TCP, but it's
not in-charter for ConEx to standardize that.

   And, in practice, the receiver doesn't know how fast the sender can
send without experiencing congestion either: the last mile may share
fabric with other users, or there may be enough unsolicited traffic,
or some anomaly along the path may arise, or...

> So if the "sending" provider is not causing the congestion (because it
> is the receiving provider's customer that requested it) then arguing the
> "sending" provider should pay for any congestion that resulted might be
> difficult. I can see endless legal arguments as to why one or the other
> party is culpable and therefore who should endure any commercial
> consequences.

   You are imagining a government-mandated process. While ConEx cannot
avoid such an unfortunate outcome, I hope we don't refuse to standardize
a way to pass information, just because it could be misused. :^(

--
John Leslie <john@jlc.net>

From john@jlc.net  Wed Aug 18 07:46:56 2010
Return-Path: <john@jlc.net>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 6F5123A6919 for <conex@core3.amsl.com>; Wed, 18 Aug 2010 07:46:56 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -105.728
X-Spam-Level: 
X-Spam-Status: No, score=-105.728 tagged_above=-999 required=5 tests=[AWL=0.871, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id d03UuL6aJPdL for <conex@core3.amsl.com>; Wed, 18 Aug 2010 07:46:55 -0700 (PDT)
Received: from mailhost.jlc.net (mailhost.jlc.net [199.201.159.4]) by core3.amsl.com (Postfix) with ESMTP id C61E33A695B for <conex@ietf.org>; Wed, 18 Aug 2010 07:46:54 -0700 (PDT)
Received: by mailhost.jlc.net (Postfix, from userid 104) id 2AC0133C4C; Wed, 18 Aug 2010 10:47:30 -0400 (EDT)
Date: Wed, 18 Aug 2010 10:47:30 -0400
From: John Leslie <john@jlc.net>
To: Kevin Mason <Kevin.Mason@telecom.co.nz>
Message-ID: <20100818144730.GV16820@verdi>
References: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se> <20100812123814.GF16820@verdi> <001b01cb3a2b$80f47420$82dd5c60$@com> <alpine.DEB.1.10.1008130933210.8562@uplift.swm.pp.se> <563C162F43D1B14E9FD2BC0A776C1E9127EF2857D3@WNEXMBX01.telecom.tcnz.net>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <563C162F43D1B14E9FD2BC0A776C1E9127EF2857D3@WNEXMBX01.telecom.tcnz.net>
User-Agent: Mutt/1.4.1i
Cc: "conex@ietf.org" <conex@ietf.org>
Subject: [conex] Enforcement at the edge
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 18 Aug 2010 14:46:56 -0000

Kevin Mason <Kevin.Mason@telecom.co.nz> wrote:
> 
> In ISP networks there are, very simply, two parts. Firstly there is the
> connectivity between each account holder's demarcation (UNI) and an IP
> edge device (BRAS/BNG in Broadband Forum speak). The IP edge device
> typically facilitates AAA functions as well as user-based policy
> enforcement and by necessity is fully aware of which flows belong to
> which UNIs (e.g. because they are all on a single authenticated VLAN or
> PPP tunnel).

   That is one model (and probably deserves consideration as such), but
not the only model.

> Beyond that point, between the IP edge and any peering point, the network
> does not maintain any specific awareness of individual end points.

   Agreed. (And I don't believe anyone here is proposing that this change.)

> Routers could theoretically maintain information about each source/
> destination pair AND consult some database to relate that to an end-user
> profile, but this is not very scalable.

   Exactly!

> So in the core ISP network, if a forwarding next hop is approaching
> overload, then the egress config of the router must deal with the
> aggregate flow and act on information that is in the packet header
> alone. Maintaining knowledge of which "users" have "caused" the most
> congestion in recent times is too hard.

   Agreed.

   Note, however, that collecting information which "could" be used for
settlements is easy enough.

> So it would appear that a two-stage queue-management regime might be
> desirable, whereby at a lower queue size packets begin to be congestion
> marked if ECN capable

   There is a technical problem there: ECN marking currently means that
the receiver should treat it exactly the same as a drop for purposes
of congestion control. Thus, ECN marking might be thoroughly unpopular
if applied when packet drop is not "imminent"...

> and maybe discarded if not,

   If you're talking Random Early Discard, we're OK here, but from what
follows, I'm not sure...

> but at a slightly higher queue depth, packets that are ECN capable but
> with a high positive congestion value get discarded, on the basis that
> these have a lower chance of reaching their destination than packets
> not declaring an expectation of congestion on the rest of the path.
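
   As a sketch, the quoted two-stage scheme might look like the following
(illustrative only: the thresholds and field names are invented, and a
real AQM such as RED would mark probabilistically on an averaged queue
size rather than act on instantaneous depth):

```python
# Illustrative sketch of the two-stage regime described above.
# LOW_THRESHOLD/HIGH_THRESHOLD/HIGH_CONGESTION are made-up numbers.

LOW_THRESHOLD = 50       # packets: stage one (mark or drop) starts here
HIGH_THRESHOLD = 80      # packets: stage two starts here
HIGH_CONGESTION = 3      # "high positive congestion value" cut-off

def enqueue(queue, packet):
    """Apply the two-stage regime to one arriving packet."""
    depth = len(queue)
    if depth >= HIGH_THRESHOLD:
        # Stage two: shed ECN-capable packets already carrying a high
        # congestion value, on the theory that they are least likely to
        # survive the rest of the path anyway.
        if packet.get("ect") and packet.get("congestion_value", 0) > HIGH_CONGESTION:
            return "drop"
    if depth >= LOW_THRESHOLD:
        # Stage one: congestion-mark ECN-capable packets, drop the rest.
        if packet.get("ect"):
            packet["ce"] = True          # set Congestion Experienced
            queue.append(packet)
            return "marked"
        return "drop"
    queue.append(packet)
    return "queued"
```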

   This confuses me a bit...

   What I envision is that ConEx packets will contain both an indication
of congestion "predicted" (set by the sender) and congestion "experienced"
(set by ECN or some similar mechanism), and that the difference measures
how likely the packet is to reach its destination without the difference
going negative (meaning that the congestion prediction was low).

   To me, packets with a negative difference (which I call "predicted
congestion exhausted") deserve a lower priority in the queue than non-
ConEx packets, while packets with a sufficiently positive difference
deserve a higher priority in the queue.

   (Implementation of this in a core router is unlikely -- but that does
not change the question of "deserving". Implementation could be done in
a separate box placed between routers at points where congestion is
common, and the queue could be maintained in that box.)
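
   That bookkeeping can be rendered as a toy sketch (the field names and
the "sufficiently positive" cut-off are invented for illustration; this
is not a ConEx wire format):

```python
# Toy rendering of the predicted-minus-experienced bookkeeping above.

def conex_difference(packet):
    """Sender-declared ("predicted") congestion minus congestion
    already seen on the path ("experienced")."""
    return packet["predicted"] - packet["experienced"]

def queue_priority(packet, non_conex=1):
    """Higher value = served earlier (toy scheme)."""
    diff = conex_difference(packet)
    if diff < 0:
        return non_conex - 1   # "predicted congestion exhausted"
    if diff >= 2:
        return non_conex + 1   # generous prediction: deserves better
    return non_conex
```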

> I don't see a "border monitor" as described in the draft being very
> practical or useful in this part of the network.

   A Border Monitor can provide information useful for settlements or
planning upgrades. The concept I described above could be in a "Border
Policer", but that's probably not the right name for it...

> When packets arrive at the BRAS destined for the user, then user based
> policy could be applied. One such policy might be to discard all
> packets with a congestion deficit of more than x.

   Exactly!
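
   Such an edge policy might be sketched as follows (the deficit
definition and the per-user allowance x are assumptions for
illustration, not anything the draft specifies):

```python
# Sketch of a per-user policy at the BRAS: discard packets whose
# congestion "deficit" (experienced minus predicted) exceeds an
# allowance x. A large deficit is the signal of sender dishonesty.

DEFAULT_ALLOWANCE = 3   # hypothetical x

def police(packet, allowance=DEFAULT_ALLOWANCE):
    """Return True to forward the packet, False to discard it."""
    deficit = packet["experienced"] - packet["predicted"]
    return deficit <= allowance
```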

> This is the safety net against dishonesty by the sender. An additional
> policy might be to discard packets that have experienced congestion
> above a threshold (which may be different for different user profiles)
> so far AND that are destined to a user that has a recent history of
> high congestion marked packets.

   This is a way to implement a "receiver pays" paradigm; and I don't
believe ConEx should forbid such a practice.

   And, indeed, while I would wish for "sender pays" to be the norm, I
wouldn't consider such a practice "disruptive"... If the sender pays
but experiences no benefit, that's the sender's problem: practical
implementations _will_ have to deal with "over-paying" for the benefit
received.

> If previous congestion marks have resulted in the user backing off then
> this policy would not be invoked, so it would only apply to users that
> are persistently contributing to congestion somewhere on the path
> traversed (on the same provider's or any preceding provider's network).

   I'm not sure I follow this...

> These policy actions could constitute the "edge monitor" functions
> referred to in the draft but would actually be part of the policy
> functions of the edge device itself, not any independent function.

   I don't believe the authors consider that behavior appropriate for
an "Edge Monitor"; but it is probably appropriate for an "Edge Policer".
(I doubt we're yet anywhere near WG consensus on what functions an
"Edge Policer" should do to fit within a ConEx standard.)

> Others may have different views of how the revealed congestion
> information might be used but I believe it is useful to at least
> consider the two parts of an ISP network when discussing possible
> uses for the information.

   Agreed. We should have a way of talking about what "Edge Policers"
(or any other name we may give them) may do.

--
John Leslie <john@jlc.net>

From prvs=4848A3A2BD=Kevin.Mason@telecom.co.nz  Thu Aug 19 23:02:58 2010
Return-Path: <prvs=4848A3A2BD=Kevin.Mason@telecom.co.nz>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 34A323A684F for <conex@core3.amsl.com>; Thu, 19 Aug 2010 23:02:58 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -1.418
X-Spam-Level: 
X-Spam-Status: No, score=-1.418 tagged_above=-999 required=5 tests=[AWL=0.322,  BAYES_20=-0.74, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 9o2g9vOz43Ap for <conex@core3.amsl.com>; Thu, 19 Aug 2010 23:02:57 -0700 (PDT)
Received: from mgate2.telecom.co.nz (envoy-out.telecom.co.nz [146.171.15.100]) by core3.amsl.com (Postfix) with ESMTP id E54BA3A67BE for <conex@ietf.org>; Thu, 19 Aug 2010 23:02:56 -0700 (PDT)
Received: from mgate5.telecom.co.nz (unknown [146.171.1.21]) by mgate2.telecom.co.nz (Tumbleweed MailGate 3.7.1) with ESMTP id 294D01030B1C; Fri, 20 Aug 2010 18:03:26 +1200 (NZST)
X-WSS-ID: 0L7FSTP-08-1J6-02
X-M-MSG: 
Received: from hp2846.telecom.tcnz.net (hp2846.telecom.tcnz.net [146.171.228.248]) (using TLSv1 with cipher RC4-MD5 (128/128 bits)) (No client certificate requested) by mgate5.telecom.co.nz (Postfix) with ESMTP id 1A82D64C3FCB; Fri, 20 Aug 2010 18:03:25 +1200 (NZST)
Received: from hp3119.telecom.tcnz.net (146.171.212.204) by hp2846.telecom.tcnz.net (146.171.228.248) with Microsoft SMTP Server (TLS) id 8.2.234.1; Fri, 20 Aug 2010 18:03:28 +1200
Received: from WNEXMBX01.telecom.tcnz.net ([146.171.212.201]) by hp3119.telecom.tcnz.net ([146.171.212.204]) with mapi; Fri, 20 Aug 2010 18:03:28 +1200
From: Kevin Mason <Kevin.Mason@telecom.co.nz>
To: John Leslie <john@jlc.net>
Date: Fri, 20 Aug 2010 18:03:27 +1200
Thread-Topic: [conex]  comments on draft-moncaster-conex-concepts-uses-01.txt
Thread-Index: Acs+3tu8QnBlF1chTYqfr9fsz55fhgBLkBPA
Message-ID: <563C162F43D1B14E9FD2BC0A776C1E9127EF78F506@WNEXMBX01.telecom.tcnz.net>
References: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se> <20100812123814.GF16820@verdi> <001b01cb3a2b$80f47420$82dd5c60$@com> <alpine.DEB.1.10.1008130933210.8562@uplift.swm.pp.se> <563C162F43D1B14E9FD2BC0A776C1E9127EF2857D3@WNEXMBX01.telecom.tcnz.net> <20100818140833.GU16820@verdi>
In-Reply-To: <20100818140833.GU16820@verdi>
Accept-Language: en-US, en-NZ
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US, en-NZ
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Cc: "conex@ietf.org" <conex@ietf.org>
Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 20 Aug 2010 06:02:58 -0000

Cheers
Kevin Mason
> -----Original Message-----
> From: John Leslie [mailto:john@jlc.net]
> Sent: Thursday, 19 August 2010 2:09 a.m.
> To: Kevin Mason
> Cc: conex@ietf.org
> Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-
> 01.txt
>
> Kevin Mason <Kevin.Mason@telecom.co.nz> wrote:
> >
> > I would challenge the view that one service provider "causes"
> > congestion on another ISP's network.
>
>    I expect "causes congestion" to disappear from the next version.
>
> > Data is only passed between sender and receiver so it is the ISP's
> > customer that is requesting the data from customers of the other
> > provider, not the provider "sending it".
>
>    This model doesn't actually work unless the requesting customer has
> a way to know how much data s/he is "requesting", which is not the case
> for the typical "click".
>
>    Senders, OTOH, always know how much they're sending.
>
>    But, in practice, neither end knows how fast it can be sent without
> congestion being experienced -- until the sender actually tries sending
> it. We can imagine a world in which the receiver offers advice on how
> fast to send -- and perhaps that "should" be built into TCP, but it's
> not in-charter for ConEx to standardize that.
>
>    And, in practice, the receiver doesn't know how fast the sender can
> send without experiencing congestion either: the last mile may share
> fabric with other users, or there may be enough unsolicited traffic,
> or some anomaly along the path may arise, or...
>
> > So if the "sending" provider is not causing the congestion (because it
> > is the receiving provider's customer that requested it) then arguing the
> > "sending" provider should pay for any congestion that resulted might be
> > difficult. I can see endless legal arguments as to why one or the other
> > party is culpable and therefore who should endure any commercial
> > consequences.
>
>    You are imagining a government-mandated process. While ConEx cannot
> avoid such an unfortunate outcome, I hope we don't refuse to standardize
> a way to pass information, just because it could be misused. :^(

[Kevin Mason] Wherever charging occurs between providers, industry watchdogs
will take an interest; it is inevitable, but this is a distraction.

If financial penalties are to be imposed then the party that the penalty is
imposed upon has to be able to practically avoid the consequences of the
penalty (charges) if they choose to. I do not see any dialogue in the draft
to indicate how a "sending provider" might practically achieve that.

The issue for me is that the current draft leaves me with a sense that
charging at the provider boundary is perceived to be a significant benefit
of using ConEx. I believe this implication is unhelpful.

I would encourage this to be moderated to just talk about collection of
information for a variety of uses, e.g. network management, network planning
and a possible basis for underpinning bilateral commercial arrangements.
>
> --
> John Leslie <john@jlc.net>

From prvs=4850445EE8=Kevin.Mason@telecom.co.nz  Sun Aug 22 16:45:39 2010
Return-Path: <prvs=4850445EE8=Kevin.Mason@telecom.co.nz>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 94B413A697A for <conex@core3.amsl.com>; Sun, 22 Aug 2010 16:45:39 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -0.581
X-Spam-Level: 
X-Spam-Status: No, score=-0.581 tagged_above=-999 required=5 tests=[AWL=-0.644, BAYES_50=0.001, RCVD_IN_DNSWL_LOW=-1, SARE_SPEC_ROLEX_NOV5A=1.062]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id z118vmx-8T0q for <conex@core3.amsl.com>; Sun, 22 Aug 2010 16:45:37 -0700 (PDT)
Received: from mgate2.telecom.co.nz (envoy-out.telecom.co.nz [146.171.15.100]) by core3.amsl.com (Postfix) with ESMTP id 1B56A3A693A for <conex@ietf.org>; Sun, 22 Aug 2010 16:45:33 -0700 (PDT)
Received: from mgate4.telecom.co.nz (unknown [146.171.1.21]) by mgate2.telecom.co.nz (Tumbleweed MailGate 3.7.1) with ESMTP id 2EC2C1029084; Mon, 23 Aug 2010 11:45:59 +1200 (NZST)
X-WSS-ID: 0L7KVCQ-04-0PY-02
X-M-MSG: 
Received: from hp2847.telecom.tcnz.net (hp2847.telecom.tcnz.net [146.171.228.249]) (using TLSv1 with cipher RC4-MD5 (128/128 bits)) (No client certificate requested) by mgate4.telecom.co.nz (Postfix) with ESMTP id 103E13B7C0AA; Mon, 23 Aug 2010 11:46:02 +1200 (NZST)
Received: from hp3120.telecom.tcnz.net (146.171.212.205) by hp2847.telecom.tcnz.net (146.171.228.249) with Microsoft SMTP Server (TLS) id 8.2.234.1; Mon, 23 Aug 2010 11:46:02 +1200
Received: from WNEXMBX01.telecom.tcnz.net ([146.171.212.201]) by hp3120.telecom.tcnz.net ([146.171.212.205]) with mapi; Mon, 23 Aug 2010 11:46:02 +1200
From: Kevin Mason <Kevin.Mason@telecom.co.nz>
To: Christopher Morrow <morrowc.lists@gmail.com>, Toby Moncaster <toby@moncaster.com>
Date: Mon, 23 Aug 2010 11:46:15 +1200
Thread-Topic: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
Thread-Index: Acs+3MOzzpFOjC+kRIKxviBaXK+b2gDdqEfw
Message-ID: <563C162F43D1B14E9FD2BC0A776C1E9127EF929B5E@WNEXMBX01.telecom.tcnz.net>
References: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se> <20100812123814.GF16820@verdi>	<001b01cb3a2b$80f47420$82dd5c60$@com> <alpine.DEB.1.10.1008130933210.8562@uplift.swm.pp.se> <563C162F43D1B14E9FD2BC0A776C1E9127EF2857D3@WNEXMBX01.telecom.tcnz.net> <EE00404438E9444D90AEA84210DC4067019325EA@pacdcexcmb05.cable.comcast.com> <563C162F43D1B14E9FD2BC0A776C1E9127EF3A66C7@WNEXMBX01.telecom.tcnz.net> <001201cb3eb8$9992cb30$ccb86190$@com> <AANLkTi=24o44ACGFgN2N4_xt+Bo1rydC6gvR_et-8XVG@mail.gmail.com>
In-Reply-To: <AANLkTi=24o44ACGFgN2N4_xt+Bo1rydC6gvR_et-8XVG@mail.gmail.com>
Accept-Language: en-US, en-NZ
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US, en-NZ
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Cc: "Woundy, Richard" <Richard_Woundy@cable.comcast.com>, "conex@ietf.org" <conex@ietf.org>
Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sun, 22 Aug 2010 23:45:39 -0000

> > Oh, and BTW, I personally don't believe ConEx is going to be able to
> > be used for traffic engineering on fast timescales - the congestion
> > simply varies too quickly over packet timescales (or it does with
> > window-based controllers; does anyone know if this is also true for
> > controllers such as Cubic?)
>
> I agree with this (no TE from conex).

My expectation is that traffic engineering is more a background activity:
monthly/quarterly action, not automated.
>
> >> To my mind the power of Conex is to provide a forwarding device a
> >> richer set of information to make better decisions on how to manage
> >> their queues for the greater good.
>
> perhaps, though we do this today with QOS markings, or by
> classification of traffic based on 5-tuple info that is in turn
> converted to QOS markings and actions. Adding another few bits to
> watch for classification marking seems fine.

QoS marking cannot tell the router the likelihood of any packet getting to
its ultimate destination. So for packets with the same classification
(marking), if the capacity for that "class" is approaching overload,
favouring flows that are not expecting onward congestion has the potential
to improve outcomes for all. If this is not a practical use of ConEx then I
am at a loss as to what else it would be useful for.
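
One way to sketch that favouring, assuming per-packet "predicted" and
"experienced" congestion fields are available (invented names, not a real
ConEx encoding): when a class must shed a packet, pick the one with the
least remaining declared-congestion allowance.

```python
# Toy drop-victim selection within one QoS class: prefer to keep flows
# that are NOT expecting further congestion downstream.

def drop_victim(packets):
    """Pick the packet least likely to survive the rest of the path:
    the one with the smallest (predicted - experienced) remainder."""
    return min(packets, key=lambda p: p["predicted"] - p["experienced"])
```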
>
> -Chris
>

Cheers
Kevin Mason
> -----Original Message-----
> From: christopher.morrow@gmail.com [mailto:christopher.morrow@gmail.com]
> On Behalf Of Christopher Morrow
> Sent: Thursday, 19 August 2010 1:54 a.m.
> To: Toby Moncaster
> Cc: Kevin Mason; Woundy, Richard; conex@ietf.org
> Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-
> 01.txt
>
> On Wed, Aug 18, 2010 at 5:34 AM, Toby Moncaster <toby@moncaster.com>
> wrote:
> > Inline...
> >
> > Toby
> >
> >> -----Original Message-----
> >> From: conex-bounces@ietf.org [mailto:conex-bounces@ietf.org] On Behalf
> >> Of Kevin Mason
> >> Sent: 18 August 2010 00:41
> >> To: Woundy, Richard
> >> Cc: conex@ietf.org
> >> Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-
> >> 01.txt
> >>
> >>
> >>
> >> Cheers
> >> Kevin Mason
> >>
> >> > -----Original Message-----
> >> > From: Woundy, Richard [mailto:Richard_Woundy@cable.comcast.com]
> >> > Sent: Wednesday, 18 August 2010 7:17 a.m.
> >> > To: Kevin Mason; conex@ietf.org
> >> > Subject: RE: [conex] comments on draft-moncaster-conex-concepts-uses-
> >> > 01.txt
> >> >
> >> > >> I would challenge the view that one service provider "causes"
> >> > >> congestion on another ISP's network. Data is only passed between
> >> > >> sender and receiver so it is the ISP's customer that is requesting
> >> > >> the data from customers of the other provider, not the provider
> >> > >> "sending it".
> >> >
> >> > Kevin, you make a good point (customer pulls rather than provider
> >> > pushes), but I don't think that is the entire story.
> >> >
> >> > Consider a hypothetical example (an *extreme* case to be sure, but
> >> > not totally disconnected from reality) in which a service provider
> >> > changes its routing policy in the following manner: from forwarding
> >> > traffic somewhat equally over 10 interconnects to a downstream ISP,
> >> > to forwarding all traffic over a single interconnect to the
> >> > downstream ISP (such as changing policy to "hot potato routing"
> >> > from a central hosting center to the downstream ISP). That change
> >> > in policy is very likely to cause a lot of congestion over the
> >> > single interconnect link, even as the overall consumer behavior
> >> > doesn't change at all.
> >>
> >> [Kevin Mason] The culpability for congestion on the interconnecting
> >> hop is very dependent on who dimensions it and the commercials that
> >> underpin it. If the "sending" provider dimensions it then congestion
> >> on this hop is solely the accountability of the sending provider, no
> >> need for congestion exposure here as they can directly measure it
> >> today (queuing and discards). If the receiving provider dimensions it
> >> then they have no current visibility of the congestion on the
> >> preceding link, but congestion exposure will potentially only tell
> >> them that congestion has already been experienced, not where. So if
> >> there is any SLA around performance of the interconnection hop then
> >> the receiving party still has to get info from the sending provider
> >> to ascertain that it is the interconnecting hop that is the problem
> >> and not a hop deeper in the sending provider's network.
> >>
> >> Capturing the information for network management purposes at the
> >> interprovider level may well be very useful for overall network
> >> planning purposes if practical, but using it to underpin payment
> >> between providers is very different.
> >
> > I obviously bow to the greater knowledge of Kevin and Rich in this,
> > but it seems to me there are scenarios where ConEx information may be
> > a sensible basis for settlements (perhaps not with money changing
> > hands, but for instance with shared backbones such as that used by the
> > UK academic community (ja.net))...
>
> on the discussion of settlements ... the only place I think that makes
> sense is in determining if/when a relationship should change, from
> 'customer' to 'peer' or vice-versa.
>
> In the case of purely 'customer' relationships (which won't change,
> for instance 'dsl customer') today most folks are just charged for the
> connection. if you propose to convert them to a form of billing based
> on congestion you'll have to find a story that doesn't end up just
> confusing the customer I think. confused customers ==
> call-center-questions :(
>
> I also bet that the congestion here is going to be mostly on the last
> mile link, customers may be interested in changing their behavior to
> better utilize their bw. It's not clear that there's a cost benefit
> for the customer since no amount of money is going to change their
> last mile problems.
>
> >
> >>
> >> However I do not think we need to get too hung up on this; the point
> >> is that it is debatable who "causes" the congestion in an
> >> interprovider context, and therefore getting agreement on who might
> >> "pay" for it has the potential only to enrich the legal industry. I
> >> am however in favour of using congestion information for accounting
> >> purposes at an individual ISP customer account level to recognise and
> >> reward cooperative end-user behaviour (e.g. congestion caps).
> >
> > OK, this is in danger of becoming highly philosophical! Let's take 3
> > common scenarios:
> >
> > 1) A user accesses video content via a CDN
> > 2) A user uploads photos to facebook
> > 3) A user does a web search and visits a link from google
> >
> > In all these scenarios it is debatable who CAUSES any congestion this
> > traffic encounters. In 1) it could be said to be the user (he wanted
> > to watch the video) or it could be the CDN (they sent the actual
> > traffic) or it could be the owner of the content (they presumably gain
> > in some manner from people watching that content). In case 2) the user
> > is clearly responsible, but it might be argued that facebook also
> > gains, because it depends on having users doing this sort of thing as
> > its business model... In 3) it could be the user, it could be google
> > (they get paid for the click), it could be the final site from which
> > the data came, etc. But in all cases there is an argument that the
> > upstream forwarding nodes are also to some extent responsible - they
> > aren't in any way responsible for the content, nor are they
> > responsible for it arriving at their inbound interface, but they are
> > responsible for any routing decisions they make which might have a bad
> > impact on downstream networks. However I accept Kevin's core point
> > that we should avoid apportioning blame in our descriptions...
>
> yes
>
> > Incidentally, it would be interesting to know to what extent routing
> > decisions by one network have a knock-on effect on the traffic and
> > congestion in downstream networks... This seems to be a key bit of
> > information that ConEx can provide that is currently missing.
>
> this is readily apparent today... there are folks that use these
> 'tricks' (decisions) in order to affect outcomes of peering agreements
> actually. it's really not that hard to figure out. I think rich's
> earlier example (10 links with one carrying traffic only) is an
> example of a mis-configuration really, not an intentional
> configuration.
>
> > Oh, and BTW, I personally don't believe ConEx is going to be able to
> > be used for traffic engineering on fast timescales - the congestion
> > simply varies too quickly over packet timescales (or it does with
> > window-based controllers; does anyone know if this is also true for
> > controllers such as Cubic?)
>
> I agree with this (no TE from conex).
>
> >> To my mind the power of Conex is to provide a forwarding device a
> >> richer set of information to make better decisions on how to manage
> >> their queues for the greater good.
>
> perhaps, though we do this today with QOS markings, or by
> classification of traffic based on 5-tuple info that is in turn
> converted to QOS markings and actions. Adding another few bits to
> watch for classification marking seems fine.
>
> -Chris
>
> > YES! Or perhaps less evangelically, ConEx provides information so
> > they know what impact their decisions are having downstream (after
> > all there may be operators that want to hinder rather than help).
> >
> >>
> >> >
> >> > There are a lot less extreme, real-world examples of routing policy
> >> > changes that would have a similar network impact.
> >
> > To say nothing of fast re-routes, etc
> >
> >> >
> >> > -- Rich
> >> >
> >> > -----Original Message-----
> >> > From: conex-bounces@ietf.org [mailto:conex-bounces@ietf.org] On
> >> Behalf
> >> > Of Kevin Mason
> >> > Sent: Tuesday, August 17, 2010 1:35 AM
> >> > To: conex@ietf.org
> >> > Subject: [conex] comments on draft-moncaster-conex-concepts-uses-
> >> 01.txt
> >> >
> >> > A  comment on the draft.
> >> >
> >> > On accouinting approaches to using congtion information
> >> >
> >> > I would challenge the view that another service provider "causes"
> >> > congestion on another ISP's network. Data is only passed between
> >> sender
> >> > and receiver so it is the ISP's customer that is requesting the data
> >> > from customers of the other provider, not the provider "sending it".
> >> >
> >> > So if the "sending" provider is not causing the congestion (because
> >> it
> >> > is the receiving provider's customer the requested it) then arguing
> >> the
> >> > "sending" provider should pay for any congestion the resulted might
> >> be
> >> > difficult. I can see endless legal arguments as to why one or the
> >> other
> >> > party is culpable and therefore who should endure any commercial
> >> > consequences.
> >> >
> >> > On network uses of the information I think there is a general concep=
t
> >> > that is not being captured well.
> >> >
> >> > In ISP networks there are, very simply, two parts. Firstly there is
> >> the
> >> > connectivity between each account holders demarcation (UNI) and a IP
> >> > edge device (BRAS/BNG in Broadband Forum speak). The IP edge device
> >> > typically facitites AAA functions as well as user based policy
> >> > enforcement and by necessity is fully aware of what flows below to
> >> what
> >> > UNI's (e.g. because they are all on a single authenticated VLAN or
> >> PPP
> >> > tunnel).
> >> >
> >> > Beyond that point between the IP edge and any peering point, the
> >> network
> >> > does not maintain an specific awareness of individual end points.
> >> > Routers could  theoretically maintain information about each
> >> > source/destination pair AND consult some database to relate that to =
a
> >> > end user profile, but this is not very scaleable.
> >> >
> >> > So in the core ISP network, if a forwarding next hop is approaching
> >> > overload, then the egress config of the router must deal with the
> >> > aggregate flow and act on information that is in the packet header
> >> > alone. Maintaining knowledge of which "users" have "caused" the most
> >> > congestion in recent times is too hard.
> >> >
> >> > So it would appear that a two-stage queue management regime might be
> >> > desirable, whereby at a lower queue size packets begin to be
> >> > congestion marked if ECN capable and maybe discarded if not, but at
> >> > a slightly higher queue depth, packets that are ECN capable but with
> >> > a high positive congestion value get discarded, on the basis that
> >> > these have a lower chance of reaching their destination than packets
> >> > not declaring an expectation of congestion on the rest of the path.
> >> > I don't see a "border monitor" as described in the draft being very
> >> > practical or useful in this part of the network.
> >> >
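[A minimal sketch of the two-stage queue regime described in the quoted paragraph above. The thresholds, the `conex_value` field, and the return labels are all invented for illustration; nothing here is defined by the draft.]

```python
# Hypothetical sketch of a two-stage AQM decision: mark ECN-capable
# packets at a lower queue depth, and at a deeper threshold drop
# ECN-capable packets that already declare heavy onward congestion.

MARK_DEPTH = 50   # queue depth (packets) at which marking begins -- assumed
DROP_DEPTH = 80   # deeper threshold for dropping high-conex packets -- assumed
HIGH_CONEX = 3    # "high positive congestion value" cutoff -- assumed

def enqueue_action(queue_depth, ecn_capable, conex_value):
    """Return 'forward', 'mark', or 'drop' for an arriving packet."""
    if queue_depth < MARK_DEPTH:
        return "forward"
    if queue_depth >= DROP_DEPTH and ecn_capable and conex_value >= HIGH_CONEX:
        # Stage two: packets already expecting congestion on the rest of
        # the path are least likely to reach their destination.
        return "drop"
    if ecn_capable:
        return "mark"   # stage one: CE-mark instead of discarding
    return "drop"       # not ECN capable: fall back to discard
```
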
> >> > When packets arrive at the BRAS destined for the user, then
> >> > user-based policy could be applied. One such policy might be to
> >> > discard all packets with a congestion deficit of more than x. This
> >> > is the safety net against dishonesty by the sender. An additional
> >> > policy might be to discard packets that have experienced congestion
> >> > above a threshold so far (which may be different for different user
> >> > profiles) AND that are destined to a user that has a recent history
> >> > of highly congestion-marked packets. If previous congestion marks
> >> > have resulted in the user backing off then this policy would not be
> >> > invoked, so it would only apply to users that are persistently
> >> > contributing to congestion somewhere on the path traversed (on the
> >> > same provider's or any preceding provider's network).
> >> >
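[The edge policy quoted above could be sketched as below. The field names (`congestion_deficit`, `congestion_marked`), the deficit limit standing in for "x", and the per-user history rate are all assumed names for this illustration only.]

```python
# Hypothetical sketch of the BRAS/BNG user-based policy described above:
# a safety-net drop for dishonest senders, plus a drop for persistent
# congestion contributors. Values and field names are illustrative.

DEFICIT_LIMIT = 5   # stands in for "x" in the quoted text -- assumed

def edge_policy(pkt, user):
    """Decide whether to deliver a packet arriving at the IP edge.

    pkt:  dict with 'congestion_deficit' and 'congestion_marked'
    user: dict with a per-profile 'mark_threshold' and a
          'recent_marked_rate' history of congestion-marked packets
    """
    # Safety net against a sender under-declaring expected congestion.
    if pkt["congestion_deficit"] > DEFICIT_LIMIT:
        return "drop"
    # Persistent contributors: heavy marks so far AND a recent history
    # of marked packets (i.e. earlier marks did not cause backing off).
    if (pkt["congestion_marked"] > user["mark_threshold"]
            and user["recent_marked_rate"] > 0.5):
        return "drop"
    return "deliver"
```
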
> >> > These policy actions could constitute the "edge monitor" functions
> >> > referred to in the draft but would actually be part of the policy
> >> > functions of the edge device itself, not any independent function.
> >> >
> >> > Others may have different views of how the revealed congestion
> >> > information might be used but I believe it is useful to at least
> >> > consider the two parts of an ISP network when discussing possible
> >> > uses for the information.
> >> >
> >> > Cheers
> >> > Kevin Mason
> >> > _______________________________________________
> >> > conex mailing list
> >> > conex@ietf.org
> >> > https://www.ietf.org/mailman/listinfo/conex

From christopher.morrow@gmail.com  Sun Aug 22 20:53:56 2010
Return-Path: <christopher.morrow@gmail.com>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 4CD703A67E3 for <conex@core3.amsl.com>; Sun, 22 Aug 2010 20:53:56 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -101.692
X-Spam-Level: 
X-Spam-Status: No, score=-101.692 tagged_above=-999 required=5 tests=[AWL=0.907, BAYES_00=-2.599, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id i-GKc9senq+w for <conex@core3.amsl.com>; Sun, 22 Aug 2010 20:53:55 -0700 (PDT)
Received: from mail-iw0-f172.google.com (mail-iw0-f172.google.com [209.85.214.172]) by core3.amsl.com (Postfix) with ESMTP id 18BD63A697A for <conex@ietf.org>; Sun, 22 Aug 2010 20:53:55 -0700 (PDT)
Received: by iwn3 with SMTP id 3so5895038iwn.31 for <conex@ietf.org>; Sun, 22 Aug 2010 20:54:28 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:sender:received :in-reply-to:references:date:x-google-sender-auth:message-id:subject :from:to:cc:content-type:content-transfer-encoding; bh=bpPlTy7Qt8TeqIKdHyNoG5mWYtsG3QzAjE8YTkD0VGU=; b=uQeiL1Vvf8dh1b8RKGSzq0KCmxNa3EiqhoWPptX2ra5igQC8SfTSVGRzvm2yYFGHx0 wtp0wPKI1Pduc5EEIWQcTauK9Xc6WVUfMbw226+fupGo/LdzPaad8So6bj1TwCwN6x8y sfaItAzZAFnuHuB/lT4tT+SfF70MExmDx76u0=
DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type :content-transfer-encoding; b=Vl5v34jjKWCit/fseu4J8q56ZMa/BoNZvaRicqJ6n4UVEl5B0Csqsf5jOt3PuPe8p1 AxNeB1Nj+jr3K03aIh69AKdiRg+mzPM99+wY/dfgGyvme8opL0nQ7pJBF6FQnOTU3CoM Ws/dELxdnwbOVH/u0/hIinKvGplsQWUZo9kFg=
MIME-Version: 1.0
Received: by 10.231.11.11 with SMTP id r11mr5453938ibr.135.1282535668337; Sun, 22 Aug 2010 20:54:28 -0700 (PDT)
Sender: christopher.morrow@gmail.com
Received: by 10.231.184.131 with HTTP; Sun, 22 Aug 2010 20:54:28 -0700 (PDT)
In-Reply-To: <563C162F43D1B14E9FD2BC0A776C1E9127EF78F506@WNEXMBX01.telecom.tcnz.net>
References: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se> <20100812123814.GF16820@verdi> <001b01cb3a2b$80f47420$82dd5c60$@com> <alpine.DEB.1.10.1008130933210.8562@uplift.swm.pp.se> <563C162F43D1B14E9FD2BC0A776C1E9127EF2857D3@WNEXMBX01.telecom.tcnz.net> <20100818140833.GU16820@verdi> <563C162F43D1B14E9FD2BC0A776C1E9127EF78F506@WNEXMBX01.telecom.tcnz.net>
Date: Sun, 22 Aug 2010 23:54:28 -0400
X-Google-Sender-Auth: waTT--VsqMbjXz1yBt4lmeWuZ2o
Message-ID: <AANLkTinA_CdrA-2yRZQwji8eueofi1OLGyEfS2FVraCq@mail.gmail.com>
From: Christopher Morrow <morrowc.lists@gmail.com>
To: Kevin Mason <Kevin.Mason@telecom.co.nz>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable
Cc: "conex@ietf.org" <conex@ietf.org>
Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 23 Aug 2010 03:53:56 -0000

On Fri, Aug 20, 2010 at 2:03 AM, Kevin Mason <Kevin.Mason@telecom.co.nz> wrote:
>
>
> Cheers
> Kevin Mason
>> -----Original Message-----
>> From: John Leslie [mailto:john@jlc.net]
>> Sent: Thursday, 19 August 2010 2:09 a.m.
>> To: Kevin Mason
>> Cc: conex@ietf.org
>> Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-
>> 01.txt
>>
>> Kevin Mason <Kevin.Mason@telecom.co.nz> wrote:
>> >
>> > I would challenge the view that another service provider "causes"
>> > congestion on another ISP's network.
>>
>>    I expect "causes congestion" to disappear from the next version.
>>
>> > Data is only passed between sender and receiver so it is the ISP's
>> > customer that is requesting the data from customers of the other
>> > provider, not the provider "sending it".
>>
>>    This model doesn't actually work unless the requesting customer has
>> a way to know how much data s/he is "requesting", which is not the case
>> for the typical "click".
>>
>>    Senders, OTOH, always know how much they're sending.
>>
>>    But, in practice, neither end knows how fast it can be sent without
>> congestion being experienced -- until the sender actually tries sending
>> it. We can imagine a world in which the receiver offers advice on how
>> fast to send -- and perhaps that "should" be built into TCP, but it's
>> not in-charter for ConEx to standardize that.
>>
>>    And, in practice, the receiver doesn't know how fast the sender can
>> send without experiencing congestion either: for the last-mile may share
>> fabric with other users, or there may be sufficient unsolicited traffic,
>> or some anomaly along the path may arise, or...
>>
>> > So if the "sending" provider is not causing the congestion (because
>> > it is the receiving provider's customer that requested it) then
>> > arguing the "sending" provider should pay for any congestion that
>> > resulted might be difficult. I can see endless legal arguments as to
>> > why one or the other party is culpable and therefore who should
>> > endure any commercial consequences.
>>
>>    You are imagining a government-mandated process. While ConEx cannot
>> avoid such an unfortunate outcome, I hope we don't refuse to standardize
>> a way to pass information, just because it could be misused. :^(
>
> [Kevin Mason] Wherever charging occurs between providers, industry
> watchdogs will take an interest; it is inevitable, but this is a
> distraction.
>
> If financial penalties are to be imposed then the party that the
> penalty is imposed upon has to be able to practically avoid the
> consequences of the penalty (charges) if they choose to. I do not see
> any dialogue in the draft to indicate how a "sending provider" might
> practically achieve that.
>

keeping in mind also that the 'sending provider' could be more than one
AS away (the average AS-path length on the Internet today is
3.something).

> The issue for me is that the current draft leaves me with a sense that
> charging at the provider boundary is perceived to be a significant
> benefit for use of conex. I believe this implication is unhelpful.
>

I agree with this, the idea that conex was/is/could-be used as a
settlement regime turns the discussion into something that isn't
helpful. conex, it seems to me, could really just be a method to help
folk better use the bw they have. by 'better use' I really mean 'more
efficiently utilize'.

-Chris

> I would encourage this to be moderated to just talk about collection
> of information for a variety of uses, e.g. network management, network
> planning and a possible basis for underpinning bilateral commercial
> arrangements.
>>
>> --
>> John Leslie <john@jlc.net>
> _______________________________________________
> conex mailing list
> conex@ietf.org
> https://www.ietf.org/mailman/listinfo/conex
>

From christopher.morrow@gmail.com  Sun Aug 22 20:58:14 2010
Return-Path: <christopher.morrow@gmail.com>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 42B913A67E2 for <conex@core3.amsl.com>; Sun, 22 Aug 2010 20:58:14 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -101.221
X-Spam-Level: 
X-Spam-Status: No, score=-101.221 tagged_above=-999 required=5 tests=[AWL=0.316, BAYES_00=-2.599, SARE_SPEC_ROLEX_NOV5A=1.062, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id dmLmwyUiF99S for <conex@core3.amsl.com>; Sun, 22 Aug 2010 20:58:12 -0700 (PDT)
Received: from mail-iw0-f172.google.com (mail-iw0-f172.google.com [209.85.214.172]) by core3.amsl.com (Postfix) with ESMTP id 386E53A6359 for <conex@ietf.org>; Sun, 22 Aug 2010 20:58:12 -0700 (PDT)
Received: by iwn3 with SMTP id 3so5898769iwn.31 for <conex@ietf.org>; Sun, 22 Aug 2010 20:58:46 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:sender:received :in-reply-to:references:date:x-google-sender-auth:message-id:subject :from:to:cc:content-type:content-transfer-encoding; bh=fHBBwCeaRdloRw0yTGpvzjlImVua1PCutuqUYJk6ovI=; b=M9JqOW/Aos0NVOK9yl6zYfQLLDsMiqcGwovu+pAqYDe+ALCm4eRSJ9FabEHhICwGqv 77OhtuMNH1WGwbNROZEVkC2NaoLPUs4/5HJE5jHCapEHzvClQKE2e7hsH2aNrFjYR49Q mvKpsfEH3qlpP0swp6aT/o25dFSxJhoDfHrJI=
DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type :content-transfer-encoding; b=BDclN8gWTQnSNKp90WEwiCTDto1MVRhW29tvQSPGTDV4fRedxs6yIK93erwMUQ/kFs 8VjN60dPusOW4ehcH1nzQy+K/d3WG9QIgxh5VtoeiIPcuCmLRs/rP8r5re0/ffdCMyFT neCA/688dFCx+OTg88KD54LuK3SMsvB1g+Dqw=
MIME-Version: 1.0
Received: by 10.231.166.72 with SMTP id l8mr6068259iby.95.1282535925553; Sun, 22 Aug 2010 20:58:45 -0700 (PDT)
Sender: christopher.morrow@gmail.com
Received: by 10.231.184.131 with HTTP; Sun, 22 Aug 2010 20:58:45 -0700 (PDT)
In-Reply-To: <563C162F43D1B14E9FD2BC0A776C1E9127EF929B5E@WNEXMBX01.telecom.tcnz.net>
References: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se> <20100812123814.GF16820@verdi> <001b01cb3a2b$80f47420$82dd5c60$@com> <alpine.DEB.1.10.1008130933210.8562@uplift.swm.pp.se> <563C162F43D1B14E9FD2BC0A776C1E9127EF2857D3@WNEXMBX01.telecom.tcnz.net> <EE00404438E9444D90AEA84210DC4067019325EA@pacdcexcmb05.cable.comcast.com> <563C162F43D1B14E9FD2BC0A776C1E9127EF3A66C7@WNEXMBX01.telecom.tcnz.net> <001201cb3eb8$9992cb30$ccb86190$@com> <AANLkTi=24o44ACGFgN2N4_xt+Bo1rydC6gvR_et-8XVG@mail.gmail.com> <563C162F43D1B14E9FD2BC0A776C1E9127EF929B5E@WNEXMBX01.telecom.tcnz.net>
Date: Sun, 22 Aug 2010 23:58:45 -0400
X-Google-Sender-Auth: Supx2rTX26o8ftpk8lRmaROWI4w
Message-ID: <AANLkTi=LQkCEFOfOsED9ix4hZqvx=CUT23A+zpiwRdcA@mail.gmail.com>
From: Christopher Morrow <morrowc.lists@gmail.com>
To: Kevin Mason <Kevin.Mason@telecom.co.nz>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable
Cc: "Woundy, Richard" <Richard_Woundy@cable.comcast.com>, "conex@ietf.org" <conex@ietf.org>
Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 23 Aug 2010 03:58:14 -0000

On Sun, Aug 22, 2010 at 7:46 PM, Kevin Mason <Kevin.Mason@telecom.co.nz> wrote:
>> > Oh, and BTW, I personally don't believe ConEx is going to be able to
>> > be used for traffic engineering on fast timescales - the congestion
>> > simply varies too quickly over packet timescales (or it does with
>> > window-based controllers, does anyone know if this is also true for
>> > controllers such as Cubic?)
>>
>> I agree with this (no TE from conex).
>
> My expectation is that traffic engineering is more of a background
> activity, a monthly/quarterly action, not automated

uhm... openflow has the promise to make this sort of calculation
happen very much more rapidly than 1/month || 1/quarter. In today's
large internet networks I believe much of this is done daily and often
much more often than that. (without openflow)

>>
>> >> To my mind the power of Conex is to provide a forwarding device a
>> >> richer set of information to make better decisions on how to manage
>> >> their queues for the greater good.
>>
>> perhaps, though we do this today with QOS markings, or by
>> classification of traffic based on 5-tuple info that is in turn
>> converted to QOS markings and actions. Adding another few bits to
>> watch for classification marking seems fine.
>
> QoS marking cannot tell the router the likelihood of any packet getting
> to its ultimate destination. So for packets with the same classification
> (marking), if the capacity for that "class" is approaching overload,
> favouring flows that are not expecting onward congestion has the
> potential to improve outcomes for all. If this is not a practical use of
> Conex then I am at a loss as to what else it would be useful for.

right, qos I was using as a method to trigger a different forwarding
decision (forwarding and/or marking really). Adding conex info to what
makes this happen (as I said) seems fine.

"Traffic to the destination port 77 + protocol 85 + source-address 12
has been marked by the conex mechanism, so potentially will get
dropped downstream, if you have the same profile traffic headed to
port 78 maybe prioritize that higher."
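[Chris's rule of thumb in quotes above could be sketched roughly as follows. The flow fields, the profile key (source address plus protocol), and the single-step priority bump are assumptions made for this sketch, not anything defined by conex:]

```python
# Illustrative only: bump the priority of flows whose profile matches a
# conex-marked flow, on the idea that the marked traffic may get dropped
# downstream while similar-profile unmarked traffic is worth preferring.

def reprioritize(flows):
    """flows: list of dicts with 'src', 'proto', 'dport',
    'conex_marked', and 'priority'. Mutates and returns the list."""
    marked_profiles = {(f["src"], f["proto"])
                       for f in flows if f["conex_marked"]}
    for f in flows:
        # Same source/protocol profile, not itself marked: prefer it.
        if not f["conex_marked"] and (f["src"], f["proto"]) in marked_profiles:
            f["priority"] += 1
    return flows
```
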

(in short I think we are agreeing here)

-chris

>>
>> -Chris
>>
>
> Cheers
> Kevin Mason
>> -----Original Message-----
>> From: christopher.morrow@gmail.com [mailto:christopher.morrow@gmail.com]
>> On Behalf Of Christopher Morrow
>> Sent: Thursday, 19 August 2010 1:54 a.m.
>> To: Toby Moncaster
>> Cc: Kevin Mason; Woundy, Richard; conex@ietf.org
>> Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-
>> 01.txt
>>
>> On Wed, Aug 18, 2010 at 5:34 AM, Toby Moncaster <toby@moncaster.com>
>> wrote:
>> > Inline...
>> >
>> > Toby
>> >
>> >> -----Original Message-----
>> >> From: conex-bounces@ietf.org [mailto:conex-bounces@ietf.org] On Behalf
>> >> Of Kevin Mason
>> >> Sent: 18 August 2010 00:41
>> >> To: Woundy, Richard
>> >> Cc: conex@ietf.org
>> >> Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-
>> >> 01.txt
>> >>
>> >>
>> >>
>> >> Cheers
>> >> Kevin Mason
>> >>
>> >> > -----Original Message-----
>> >> > From: Woundy, Richard [mailto:Richard_Woundy@cable.comcast.com]
>> >> > Sent: Wednesday, 18 August 2010 7:17 a.m.
>> >> > To: Kevin Mason; conex@ietf.org
>> >> > Subject: RE: [conex] comments on draft-moncaster-conex-concepts-uses-
>> >> > 01.txt
>> >> >
>> >> > >>I would challenge the view that another service provider "causes"
>> >> > congestion on another ISP's network. Data is only passed between
>> >> > sender and receiver so it is the ISP's customer that is requesting
>> >> > the data from customers of the other provider, not the provider
>> >> > "sending it".
>> >> >
>> >> > Kevin, you make a good point (customer pulls rather than provider
>> >> > pushes), but I don't think that is the entire story.
>> >> >
>> >> > Consider a hypothetical example (an *extreme* case to be sure, but
>> >> > not totally disconnected from reality) in which a service provider
>> >> > changes its routing policy in the following manner: from forwarding
>> >> > traffic somewhat equally over 10 interconnects to a downstream ISP,
>> >> > to forwarding all traffic over a single interconnect to the
>> >> > downstream ISP (such as changing policy to "hot potato routing"
>> >> > from a central hosting center to the downstream ISP). That change
>> >> > in policy is very likely to cause a lot of congestion over the
>> >> > single interconnect link, even as the overall consumer behavior
>> >> > doesn't change at all.
>> >>
>> >> [Kevin Mason] The culpability for congestion on the interconnecting
>> >> hop is very dependent on who dimensions it and the commercials that
>> >> underpin it. If the "sending" provider dimensions it then congestion
>> >> on this hop is solely the accountability of the sending provider; no
>> >> need for congestion exposure here, as they can directly measure it
>> >> today (queuing and discards). If the receiving provider dimensions
>> >> it then they have no current visibility of the congestion on the
>> >> preceding link, but congestion exposure will potentially only tell
>> >> them that congestion has already been experienced, not where. So if
>> >> there is any SLA around performance of the interconnection hop then
>> >> the receiving party still has to get info from the sending provider
>> >> to ascertain that it is the interconnecting hop that is the problem
>> >> and not a hop deeper in the sending provider's network.
>> >>
>> >> Capturing the information for network management purposes at the
>> >> interprovider level may well be very useful for overall network
>> >> planning purposes if practical, but using it to underpin payment
>> >> between providers is very different.
>> >
>> > I obviously bow to the greater knowledge of Kevin and Rich in this,
>> > but it seems to me there are scenarios where ConEx information may be
>> > a sensible basis for settlements (perhaps not with money changing
>> > hands, but for instance with shared backbones such as that used by
>> > the UK academic community (ja.net))...
>>
>> on the discussion of settlements ... the only place I think that makes
>> sense is in determining if/when a relationship should change, from
>> 'customer' to 'peer' or vice-versa.
>>
>> In the case of purely 'customer' relationships (which won't change,
>> for instance 'dsl customer') today most folks are just charged for the
>> connection. if you propose to convert them to a form of billing based
>> on congestion you'll have to find a story that doesn't end up just
>> confusing the customer I think. confused customers ==
>> call-center-questions :(
>>
>> I also bet that the congestion here is going to be mostly on the last
>> mile link, customers may be interested in changing their behavior to
>> better utilize their bw. It's not clear that there's a cost benefit
>> for the customer since no amount of money is going to change their
>> last mile problems.
>>
>> >
>> >>
>> >> However I do not think we need to get too hung up on this; the
>> >> point is that it is debatable who "causes" the congestion in an
>> >> interprovider context, and therefore getting agreement on who might
>> >> "pay" for it has the potential only to enrich the legal industry. I
>> >> am however in favour of using congestion information for accounting
>> >> purposes at an individual ISP customer account level to recognise
>> >> and reward cooperative end user behaviour (e.g. congestion caps)
>> >
>> > OK, this is in danger of becoming highly philosophical! Let's take 3
>> > common scenarios:
>> >
>> > 1) A user accesses video content via a CDN
>> > 2) A user uploads photos to facebook
>> > 3) A user does a web search and visits a link from google
>> >
>> > In all these scenarios it is debatable who CAUSES any congestion
>> > this traffic encounters. In 1) it could be said to be the user (he
>> > wanted to watch the video) or it could be the CDN (they sent the
>> > actual traffic) or it could be the owner of the content (they
>> > presumably gain in some manner from people watching that content).
>> > In case 2) the user is clearly responsible, but it might be argued
>> > that facebook also gains, because it depends on having users doing
>> > this sort of thing as its business model... In 3) it could be the
>> > user, it could be google (they get paid for the click), it could be
>> > the final site from which the data came, etc. But in all cases there
>> > is an argument that the upstream forwarding nodes are also to some
>> > extent responsible - they aren't in any way responsible for the
>> > content, nor are they responsible for it arriving at their inbound
>> > interface, but they are responsible for any routing decisions they
>> > make which might have a bad impact on downstream networks. However I
>> > accept Kevin's core point that we should avoid apportioning blame in
>> > our descriptions...
>>
>> yes
>>
>> > Incidentally, it would be interesting to know to what extent routing
>> > decisions by one network have a knock-on effect on the traffic and
>> > congestion in downstream networks... This seems to be a key bit of
>> > information that ConEx can provide that is currently missing.
>>
>> this is readily apparent today... there are folks that use these
>> 'tricks' (decisions) in order to affect outcomes of peering agreements
>> actually. it's really not that hard to figure out. I think rich's
>> earlier example (10 links with one carrying traffic only) is an
>> example of a mis-configuration really, not an intentional
>> configuration.
>>
>> > Oh, and BTW, I personally don't believe ConEx is going to be able to
>> > be used for traffic engineering on fast timescales - the congestion
>> > simply varies too quickly over packet timescales (or it does with
>> > window-based controllers, does anyone know if this is also true for
>> > controllers such as Cubic?)
>>
>> I agree with this (no TE from conex).
>>
>> >> To my mind the power of Conex is to provide a forwarding device a
>> >> richer set of information to make better decisions on how to manage
>> >> their queues for the greater good.
>>
>> perhaps, though we do this today with QOS markings, or by
>> classification of traffic based on 5-tuple info that is in turn
>> converted to QOS markings and actions. Adding another few bits to
>> watch for classification marking seems fine.
>>
>> -Chris
>>
>> > YES! Or perhaps less evangelically, ConEx provides information so they
>> know
>> > what impact their decisions are having downstream (after all there may
>> be
>> > operators that want to hinder rather than help).
>> >
>> >>
>> >> >
>> >> > There are a lot less extreme, real-world examples of routing policy
>> >> > changes that would have a similar network impact.
>> >
>> > To say nothing of fast re-routes, etc
>> >
>> >> >
>> >> > -- Rich
>> >> >
>> >> > -----Original Message-----
>> >> > From: conex-bounces@ietf.org [mailto:conex-bounces@ietf.org] On
>> >> Behalf
>> >> > Of Kevin Mason
>> >> > Sent: Tuesday, August 17, 2010 1:35 AM
>> >> > To: conex@ietf.org
>> >> > Subject: [conex] comments on draft-moncaster-conex-concepts-uses-
>> >> 01.txt
>> >> >
>> >> > A comment on the draft.
>> >> >
>> >> > On accounting approaches to using congestion information
>> >> >
>> >> > I would challenge the view that another service provider "causes"
>> >> > congestion on another ISP's network. Data is only passed between
>> >> > sender and receiver so it is the ISP's customer that is requesting
>> >> > the data from customers of the other provider, not the provider
>> >> > "sending it".
>> >> >
>> >> > So if the "sending" provider is not causing the congestion
>> >> > (because it is the receiving provider's customer that requested
>> >> > it) then arguing the "sending" provider should pay for any
>> >> > congestion that resulted might be difficult. I can see endless
>> >> > legal arguments as to why one or the other party is culpable and
>> >> > therefore who should endure any commercial consequences.
>> >> >
>> >> > On network uses of the information I think there is a general concept
>> >> > that is not being captured well.
>> >> >
>> >> > In ISP networks there are, very simply, two parts. Firstly there
>> >> > is the connectivity between each account holder's demarcation
>> >> > (UNI) and an IP edge device (BRAS/BNG in Broadband Forum speak).
>> >> > The IP edge device typically facilitates AAA functions as well as
>> >> > user-based policy enforcement and by necessity is fully aware of
>> >> > what flows belong to what UNIs (e.g. because they are all on a
>> >> > single authenticated VLAN or PPP tunnel).
>> >> >
>> >> > Beyond that point, between the IP edge and any peering point, the
>> >> > network does not maintain any specific awareness of individual
>> >> > end points. Routers could theoretically maintain information
>> >> > about each source/destination pair AND consult some database to
>> >> > relate that to an end user profile, but this is not very scalable.
>> >> >
>> >> > So in the core ISP network, if a forwarding next hop is approaching
>> >> > overload, then the egress config of the router must deal with the
>> >> > aggregate flow and act on information that is in the packet header
>> >> > alone. Maintaining knowledge of which "users" have "caused" the
>> >> > most congestion in recent times is too hard.
>> >> >
>> >> > So it would appear that a two-stage queue management regime might
>> >> > be desirable, whereby at a lower queue size packets begin to be
>> >> > congestion marked if ECN capable and maybe discarded if not, but
>> >> > at a slightly higher queue depth, packets that are ECN capable
>> >> > but with a high positive congestion value get discarded, on the
>> >> > basis that these have a lower chance of reaching their
>> >> > destination than packets not declaring an expectation of
>> >> > congestion on the rest of the path. I don't see a "border
>> >> > monitor" as described in the draft being very practical or useful
>> >> > in this part of the network.
>> >> >
>> >> > When packets arrive at the BRAS destined for the user, then
>> >> > user-based policy could be applied. One such policy might be to
>> >> > discard all packets with a congestion deficit of more than x.
>> >> > This is the safety net against dishonesty by the sender. An
>> >> > additional policy might be to discard packets that have
>> >> > experienced congestion above a threshold so far (which may be
>> >> > different for different user profiles) AND that are destined to a
>> >> > user that has a recent history of highly congestion-marked
>> >> > packets. If previous congestion marks have resulted in the user
>> >> > backing off then this policy would not be invoked, so it would
>> >> > only apply to users that are persistently contributing to
>> >> > congestion somewhere on the path traversed (on the same
>> >> > provider's or any preceding provider's network).
>> >> >
>> >> > These policy actions could constitute the "edge monitor" functions
>> >> > referred to in the draft but would actually be part of the policy
>> >> > functions of the edge device itself, not any independent function.
>> >> >
>> >> > Others may have different views of how the revealed congestion
>> >> > information might be used but I believe it is useful to at least
>> >> > consider the two parts of an ISP network when discussing possible
>> >> > uses for the information.
>> >> >
>> >> > Cheers
>> >> > Kevin Mason
>> >> > _______________________________________________
>> >> > conex mailing list
>> >> > conex@ietf.org
>> >> > https://www.ietf.org/mailman/listinfo/conex
>> >
>

From john@jlc.net  Sun Aug 22 21:54:58 2010
Return-Path: <john@jlc.net>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 72B513A67F0 for <conex@core3.amsl.com>; Sun, 22 Aug 2010 21:54:58 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -105.749
X-Spam-Level: 
X-Spam-Status: No, score=-105.749 tagged_above=-999 required=5 tests=[AWL=0.850, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id pIeokAK49+Pz for <conex@core3.amsl.com>; Sun, 22 Aug 2010 21:54:57 -0700 (PDT)
Received: from mailhost.jlc.net (mailhost.jlc.net [199.201.159.4]) by core3.amsl.com (Postfix) with ESMTP id EE6CE3A67D1 for <conex@ietf.org>; Sun, 22 Aug 2010 21:54:56 -0700 (PDT)
Received: by mailhost.jlc.net (Postfix, from userid 104) id A1EF633C6D; Mon, 23 Aug 2010 00:55:29 -0400 (EDT)
Date: Mon, 23 Aug 2010 00:55:29 -0400
From: John Leslie <john@jlc.net>
To: Christopher Morrow <morrowc.lists@gmail.com>
Message-ID: <20100823045529.GZ16820@verdi>
References: <20100812123814.GF16820@verdi> <001b01cb3a2b$80f47420$82dd5c60$@com> <alpine.DEB.1.10.1008130933210.8562@uplift.swm.pp.se> <563C162F43D1B14E9FD2BC0A776C1E9127EF2857D3@WNEXMBX01.telecom.tcnz.net> <EE00404438E9444D90AEA84210DC4067019325EA@pacdcexcmb05.cable.comcast.com> <563C162F43D1B14E9FD2BC0A776C1E9127EF3A66C7@WNEXMBX01.telecom.tcnz.net> <001201cb3eb8$9992cb30$ccb86190$@com> <AANLkTi=24o44ACGFgN2N4_xt+Bo1rydC6gvR_et-8XVG@mail.gmail.com> <563C162F43D1B14E9FD2BC0A776C1E9127EF929B5E@WNEXMBX01.telecom.tcnz.net> <AANLkTi=LQkCEFOfOsED9ix4hZqvx=CUT23A+zpiwRdcA@mail.gmail.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <AANLkTi=LQkCEFOfOsED9ix4hZqvx=CUT23A+zpiwRdcA@mail.gmail.com>
User-Agent: Mutt/1.4.1i
Cc: Kevin Mason <Kevin.Mason@telecom.co.nz>, "conex@ietf.org" <conex@ietf.org>
Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 23 Aug 2010 04:54:58 -0000

Christopher Morrow <morrowc.lists@gmail.com> wrote:
> Kevin Mason <Kevin.Mason@telecom.co.nz> wrote:
>> Christopher Morrow <morrowc.lists@gmail.com> wrote:
>>>> Kevin Mason <Kevin.Mason@telecom.co.nz> wrote:
>>>> 
>>>>> To my mind the power of Conex is to provide a forwarding device a
>>>>> richer set of information to make better decisions on how to manage
>>>>> their queues for the greater good.

   This is on the vague side...

>>> perhaps, though we do this today with QOS markings, or by
>>> classification of traffic based on 5-tuple info that is in turn
>>> converted to QOS markings and actions. Adding another few bits to
>>> watch for classification marking seems fine.
>>
>> QoS marking cannot tell the router the likelihood of any packet
>> getting to its ultimate destination.

   Agreed.

>> So for packets with the same classification (marking) if the capacity
>> for that "class" is approaching overload, favouring flows that are
>> not expecting onward congestion has the potential to improve outcomes
>> for all.

   I don't follow the logic here...

   If you mean that a "provisioned bit rate" flow that is "predicting
congestion" is in danger of failing, I can agree. But you aren't limiting
it to that, AFAICT.

   At first blush, I can't see how favoring a different flow _could_
improve outcomes for a flow "predicting congestion". And that doesn't
depend on whether we're in a "sender pays" paradigm, a "receiver pays"
paradigm, or a "nobody pays" paradigm.

   In a "provisioned bit rate" case, terminating _any_ flow when the
aggregate bit-rate exceeds capacity _will_ necessarily improve outcomes
for the remaining flows. But I don't follow why a flow predicting
congestion is a "better" candidate for termination.

   Unless we know we're at the final bottleneck (shared by all flows in
that "class"), "predicted congestion" in one flow doesn't tell us what
actual congestion other flows may experience. And if we are at the final
bottleneck, expediting the flow that _does_ predict congestion would
seem to improve the speed with which it can respond to the actual
congestion (by receiving a "congestion experienced" marking rather than
waiting for a timeout).

   Possibly, of course, I'm guessing wrong what you mean by "favouring
flows"...

>> If this is not a practical use of Conex then I am at a loss as to what
>> else it would be useful for.

   ConEx clearly can be useful for distinguishing "less than best effort"
flows from "interactive" flows, and using that information to target
efforts at bandwidth upgrades. LBE flows don't call for bandwidth
upgrades; interactive flows do.

> right, qos I was using as method to trigger a different forwarding
> decision (forwarding and/or marking really). Adding conex info to what
> makes this happen (as I said) seems fine.

   Again, on the vague side...

   If you have a queue at a forwarding point, you _could_ adjust
priorities to forward some traffic more quickly or more slowly; or
you could set probability-of-drop higher or lower in a RED algorithm.
I'm not sure what you have in mind here...
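   For concreteness, the RED-style knob mentioned above could be sketched
as below. The `conex_bias` input (standing in for whatever a ConEx marking
would feed in), the thresholds, and all names are illustrative assumptions
of this sketch, not part of RED or any ConEx specification.

```python
import random

def red_decision(avg_queue, min_th, max_th, max_p, conex_bias=0.0):
    """RED-style drop decision with an illustrative per-packet bias.

    Between min_th and max_th the drop probability rises linearly from 0
    to max_p; conex_bias (assumed here to be derived somehow from ConEx
    markings) shifts that probability up or down before the random test.
    """
    if avg_queue < min_th:
        p = 0.0
    elif avg_queue >= max_th:
        p = 1.0
    else:
        # linear ramp between the two thresholds, as in classic RED
        p = max_p * (avg_queue - min_th) / (max_th - min_th)
    p = min(1.0, max(0.0, p + conex_bias))  # clamp to a valid probability
    return random.random() < p  # True means drop this packet
```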

> "Traffic to the destination port 77 + protocol 85 + source-address 12
> has been marked by the conex mechanism, so potentially will get
> dropped downstream, if you have the same profile traffic headed to
> port 78 maybe prioritize that higher."

   Obviously I can't prevent folks from applying such an algorithm,
but it seems ill-advised to me.

   We really don't know what some downstream forwarder will do in the
event of congestion; and I don't expect to try to standardize that in
a ConEx protocol. But were I designing the forwarding algorithms, I'd
try to deliver ConEx traffic which predicts more congestion than I
know to exist _more_quickly_ than non-ConEx traffic, in hopes that
any "congestion experienced" markings will cause backoff more quickly.

   Recall that the ConEx mechanism _will_ (somehow) contain both
"congestion predicted" and "congestion experienced so far".

   Also recall that a newly-starting interactive flow has good reason
to estimate high on "predicted congestion" until path-congestion
information returns to the sender.

   Thus, dropping traffic which estimates high, simply _because_ it
is ConEx aware, seems unlikely to help.

   (YMMV, no doubt...)

--
John Leslie <john@jlc.net>

From toby@moncaster.com  Mon Aug 23 06:52:34 2010
Return-Path: <toby@moncaster.com>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id A75583A6A40 for <conex@core3.amsl.com>; Mon, 23 Aug 2010 06:52:34 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.249
X-Spam-Level: 
X-Spam-Status: No, score=-2.249 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, HELO_EQ_DE=0.35]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id TQnrOL67I7Fq for <conex@core3.amsl.com>; Mon, 23 Aug 2010 06:52:32 -0700 (PDT)
Received: from moutng.kundenserver.de (moutng.kundenserver.de [212.227.126.186]) by core3.amsl.com (Postfix) with ESMTP id 249A53A6A35 for <conex@ietf.org>; Mon, 23 Aug 2010 06:52:32 -0700 (PDT)
Received: from TobysHP (host86-148-19-193.range86-148.btcentralplus.com [86.148.19.193]) by mrelayeu.kundenserver.de (node=mreu2) with ESMTP (Nemesis) id 0LmgTP-1PMbI84AZ1-00ZewA; Mon, 23 Aug 2010 15:53:00 +0200
From: "Toby Moncaster" <toby@moncaster.com>
To: "'Christopher Morrow'" <morrowc.lists@gmail.com>, "'Kevin Mason'" <Kevin.Mason@telecom.co.nz>
References: <alpine.DEB.1.10.1008121327540.8562@uplift.swm.pp.se>	<20100812123814.GF16820@verdi>	<001b01cb3a2b$80f47420$82dd5c60$@com>	<alpine.DEB.1.10.1008130933210.8562@uplift.swm.pp.se>	<563C162F43D1B14E9FD2BC0A776C1E9127EF2857D3@WNEXMBX01.telecom.tcnz.net>	<EE00404438E9444D90AEA84210DC4067019325EA@pacdcexcmb05.cable.comcast.com>	<563C162F43D1B14E9FD2BC0A776C1E9127EF3A66C7@WNEXMBX01.telecom.tcnz.net>	<001201cb3eb8$9992cb30$ccb86190$@com>	<AANLkTi=24o44ACGFgN2N4_xt+Bo1rydC6gvR_et-8XVG@mail.gmail.com>	<563C162F43D1B14E9FD2BC0A776C1E9127EF929B5E@WNEXMBX01.telecom.tcnz.net> <AANLkTi=LQkCEFOfOsED9ix4hZqvx=CUT23A+zpiwRdcA@mail.gmail.com>
In-Reply-To: <AANLkTi=LQkCEFOfOsED9ix4hZqvx=CUT23A+zpiwRdcA@mail.gmail.com>
Date: Mon, 23 Aug 2010 14:52:59 +0100
Message-ID: <002101cb42ca$7fec2300$7fc46900$@com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Office Outlook 12.0
Thread-Index: ActCd3ymtOR5VFKlTg+SVnoVNXg6dwAUcUsw
Content-Language: en-gb
X-Provags-ID: V02:K0:QqqTFyctF3hB6VCqkYkdtIopfDS0QpzMGamX0ttrHi1 C6gr0vf2tPkhe1W5nt+Y9UvY2/UGDvRxf6L2fi7n61Up4mjMfq urk1tVxQB765tJkoG7r54o6N3XtNsnNQW0KXree8nZ6Y39xIN9 S+w3FoN+pwtwWnozow13lcZyip2pvpbfwAP1J1aY3gALfnY7bY Zw3bUu3p6p4VMWE0nrBeJk+Rl7t4kI9Y80iTH5Udf4=
Cc: "'Woundy, Richard'" <Richard_Woundy@cable.comcast.com>, conex@ietf.org
Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 23 Aug 2010 13:52:34 -0000

> -----Original Message-----
> From: christopher.morrow@gmail.com
> [mailto:christopher.morrow@gmail.com] On Behalf Of Christopher Morrow
> Sent: 23 August 2010 04:59
> To: Kevin Mason
> Cc: Toby Moncaster; Woundy, Richard; conex@ietf.org
> Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-
> 01.txt
>
> On Sun, Aug 22, 2010 at 7:46 PM, Kevin Mason
> <Kevin.Mason@telecom.co.nz> wrote:
> >> > Oh, and BTW, I personally don't believe ConEx is going to be able
> to be
> >> used
> >> > for traffic engineering on fast timescales - the congestion simply
> >> > varies too quickly over packet timescales (or it does with
> >> > window-based controllers, does anyone know if this is also true for
> >> > controllers such as Cubic?)
> >>
> >> I agree with this (no TE from conex).
> >
> > My expectation is the traffic engineering is more background
> activity, monthly/quarterly action not automated
>
> uhm... openflow has the promise to make this sort of calculation
> happen very much more rapidly than 1/month || 1/quarter. In today's
> large internet networks I believe much of this is done daily and often
> much more often than that. (without openflow)

Any chance of a quick 101 on OpenFlow? Routing is not my strong suit!
Can it do these calculations across AS boundaries? Can it see how
decisions made locally impact networks downstream? That is the extra
information that ConEx provides...

Certainly re-configuration of BW shares can (and does) happen on
timescales of hours. I suspect though the monthly/quarterly reference
above is more about provisioning physical infrastructure (not the
overlying logical network structure).

>
> >>
> >> >> To my mind the power of Conex is to provide a forwarding device a
> >> >> richer set of information to make better decisions on how to
> manage
> >> >> their queues for the greater good.
> >>
> >> perhaps, though we do this today with QOS markings, or by
> >> classification of traffic based on 5-tuple info that is in turn
> >> converted to QOS markings and actions. Adding another few bits to
> >> watch for classification marking seems fine.
> >
> > QoS marking cannot tell the router the likelihood of any packet
> getting to its ultimate destination. So for packets with the same
> classification (marking) if the capacity for that "class" is
> approaching overload, favouring flows that are not expecting onward
> congestion has the potential to improve outcomes for all. If this is
> not a practical use of Conex then I am at a loss as to what else it would
> be useful for.
>
> right, qos I was using as method to trigger a different forwarding
> decision (forwarding and/or marking really). Adding conex info to what
> makes this happen (as I said) seems fine.
>
> "Traffic to the destination port 77 + protocol 85 + source-address 12
> has been marked by the conex mechanism, so potentially will get
> dropped downstream, if you have the same profile traffic headed to
> port 78 maybe prioritize that higher."
>
> (in short I think we are agreeing here)

We are certainly heading towards agreement I think.

At this stage in the life of ConEx, what matters is anything that
actually acts as a technical block to its deployment. The chances are
that the way operators, app writers and vendors end up using ConEx will
differ from how we imagine things at this stage - ConEx is an enabler
for many things and none of us have the ability to see the future...
Consequently all the use cases document can do is capture some of the
more likely scenarios where ConEx can make an impact.

Toby

>
> -chris
>
> >>
> >> -Chris
> >>
> >
> > Cheers
> > Kevin Mason
> >> -----Original Message-----
> >> From: christopher.morrow@gmail.com
> [mailto:christopher.morrow@gmail.com]
> >> On Behalf Of Christopher Morrow
> >> Sent: Thursday, 19 August 2010 1:54 a.m.
> >> To: Toby Moncaster
> >> Cc: Kevin Mason; Woundy, Richard; conex@ietf.org
> >> Subject: Re: [conex] comments on draft-moncaster-conex-concepts-
> uses-
> >> 01.txt
> >>
> >> On Wed, Aug 18, 2010 at 5:34 AM, Toby Moncaster <toby@moncaster.com>
> >> wrote:
> >> > Inline...
> >> >
> >> > Toby
> >> >
> >> >> -----Original Message-----
> >> >> From: conex-bounces@ietf.org [mailto:conex-bounces@ietf.org] On
> Behalf
> >> >> Of Kevin Mason
> >> >> Sent: 18 August 2010 00:41
> >> >> To: Woundy, Richard
> >> >> Cc: conex@ietf.org
> >> >> Subject: Re: [conex] comments on draft-moncaster-conex-concepts-
> uses-
> >> >> 01.txt
> >> >>
> >> >>
> >> >>
> >> >> Cheers
> >> >> Kevin Mason
> >> >>
> >> >> > -----Original Message-----
> >> >> > From: Woundy, Richard [mailto:Richard_Woundy@cable.comcast.com]
> >> >> > Sent: Wednesday, 18 August 2010 7:17 a.m.
> >> >> > To: Kevin Mason; conex@ietf.org
> >> >> > Subject: RE: [conex] comments on draft-moncaster-conex-
> concepts-uses-
> >> >> > 01.txt
> >> >> >
> >> >> > >>I would challenge the view that another service provider
> "causes"
> >> >> > congestion on another ISP's network. Data is only passed
> between
> >> >> sender
> >> >> > and receiver so it is the ISP's customer that is requesting the
> >> >> > data from customers of the other provider, not the provider
> >> >> > "sending it".
> >> >> >
> >> >> > Kevin, you make a good point (customer pulls rather than
> provider
> >> >> > pushes), but I don't think that is the entire story.
> >> >> >
> >> >> > Consider a hypothetical example (an *extreme* case to be sure,
> but
> >> >> not
> >> >> > totally disconnected from reality) in which a service provider
> >> >> changes
> >> >> > its routing policy in the following manner: from forwarding
> traffic
> >> >> > somewhat equally over 10 interconnects to a downstream ISP, to
> >> >> > forwarding all traffic over a single interconnect to the
> downstream
> >> >> ISP
> >> >> > (such as changing policy to "hot potato routing" from a central
> >> >> > hosting
> >> >> > center to the downstream ISP). That change in policy is very
> likely
> >> >> to
> >> >> > cause a lot of congestion over the single interconnect link,
> even as
> >> >> the
> >> >> > overall consumer behavior doesn't change at all.
> >> >>
> >> >> [Kevin Mason] The culpability for congestion on the
> interconnecting hop
> >> >> is very dependent on who dimensions it and the commercials that
> >> >> underpin it. If the "sending" provider dimensions it then
> congestion on
> >> >> > this hop is solely the accountability of the sending provider,
> >> >> > no need
> >> >> for congestion exposure here as they can directly measure it
> today
> >> >> (queuing and discards). IF the receiving provider dimensions it
> then
> >> >> they have no current visibility of the congestion on the
> preceding
> >> >> link, but congestion exposure will potentially only tell them
> that
> >> >> congestion has already been experienced, not where. So if there
> is any
> >> >> SLA around performance of the interconnection hop then the
> receiving
> >> >> party still has to get info from the sending provider to
> ascertain that
> >> >> it is the interconnecting hop that is the problem and not a hop
> deeper
> >> >> in the sending provider's network.
> >> >>
> >> >> Capturing the information for network management purposes at the
> >> >> interprovider level may well be very useful for overall network
> >> >> planning purposes if practical, but using it to underpin payment
> >> >> between providers is very different.
> >> >
> >> > I obviously bow to the greater knowledge of Kevin and Rich in
> this, but
> >> it
> >> > seems to me there are scenarios where ConEx information may be a
> >> sensible
> >> > basis for settlements (perhaps not with money changing hands, but
> for
> >> > instance with shared backbones such as that used by the UK
> academic
> >> > community (ja.net)...
> >>
> >> on the discussion of settlements ... the only place I think that
> makes
> >> sense is in determining if/when a relationship should change, from
> >> 'customer' to 'peer' or vice-versa.
> >>
> >> In the case of purely 'customer' relationships (which won't change,
> >> for instance 'dsl customer') today most folks are just charged for
> the
> >> connection. if you propose to convert them to a form of billing
> based
> >> on congestion you'll have to find a story that doesn't end up just
> >> confusing the customer I think. confused customers ==
> >> call-center-questions :(
> >>
> >> I also bet that the congestion here is going to be mostly on the
> last
> >> mile link, customers may be interested in changing their behavior to
> >> better utilize their bw. It's not clear that there's a cost benefit
> >> for the customer since no amount of money is going to change their
> >> last mile problems.
> >>
> >> >
> >> >>
> >> >> However I do not think we need to get too hung up on this, the
> point is
> >> >> that it is debatable who "causes" the congestion in an
> interprovider
> >> >> context, and therefore getting agreement on who might "pay" for
> it has
> >> >> the potential only to enrich the legal industry. I am however in
> favour
> >> >> of using congestion information for accounting purposes at an
> individual
> >> >> ISP customer account level to recognise and reward cooperative
> end user
> >> >> behaviour (e.g. congestion caps)
> >> >
> >> > OK, this is in danger of becoming highly philosophical! Let's take
> >> > 3 common
> >> > scenarios:
> >> >
> >> > 1) A user accesses video content via a CDN
> >> > 2) A user uploads photos to facebook
> >> > 3) A user does a web search and visits a link from google
> >> >
> >> > In all these scenarios it is debatable who CAUSES any congestion
> this
> >> > traffic encounters. In 1) it could be said to be the user (he
> wanted to
> >> > watch the video) or it could be the CDN (they sent the actual
> traffic)
> >> or it
> >> > could be the owner of the content (they presumably gain in some
> manner
> >> from
> >> > people watching that content). In case 2) the user is clearly
> >> responsible,
> >> > but it might be argued that facebook also gains, because it
> depends on
> >> > having users doing this sort of thing as its business model... In
> 3) it
> >> > could be the user, it could be google (they get paid for the
> click), it
> >> > could be the final site from which the data came, etc. But in all
> cases
> >> > there is an argument that the upstream forwarding nodes are also
> to some
> >> > extent responsible - they aren't in any way responsible for the
> content,
> >> nor
> >> > are they responsible for it arriving at their inbound interface,
> but
> >> they
> >> > are responsible for any routing decisions they make which might
> have a
> >> bad
> >> > impact on downstream networks. However I accept Kevin's core point
> >> > that we
> >> > should avoid apportioning blame in our descriptions...
> >>
> >> yes
> >>
> >> > Incidentally, it would be interesting to know to what extent
> routing
> >> > decisions by one network have a knock-on effect on the traffic and
> >> > congestion in downstream networks... This seems to be a key bit of
> >> > information that ConEx can provide that is currently missing.
> >>
> >> this is readily apparent today... there are folks that use these
> >> 'tricks' (decisions) in order to affect outcomes of peering
> agreements
> >> actually. it's really not that hard to figure out. I think rich's
> >> earlier example (10 links with one carrying traffic only) is an
> >> example of a mis-configuration really, not an intentional
> >> configuration.
> >>
> >> > Oh, and BTW, I personally don't believe ConEx is going to be able
> to be
> >> used
> >> > for traffic engineering on fast timescales - the congestion simply
> >> > varies too quickly over packet timescales (or it does with
> >> > window-based controllers, does anyone know if this is also true for
> >> > controllers such as Cubic?)
> >>
> >> I agree with this (no TE from conex).
> >>
> >> >> To my mind the power of Conex is to provide a forwarding device a
> >> >> richer set of information to make better decisions on how to
> manage
> >> >> their queues for the greater good.
> >>
> >> perhaps, though we do this today with QOS markings, or by
> >> classification of traffic based on 5-tuple info that is in turn
> >> converted to QOS markings and actions. Adding another few bits to
> >> watch for classification marking seems fine.
> >>
> >> -Chris
> >>
> >> > YES! Or perhaps less evangelically, ConEx provides information so
> they
> >> know
> >> > what impact their decisions are having downstream (after all there
> >> > may be operators that want to hinder rather than help).
> >> >
> >> >>
> >> >> >
> >> >> > There are a lot less extreme, real-world examples of routing
> policy
> >> >> > changes that would have a similar network impact.
> >> >
> >> > To say nothing of fast re-routes, etc
> >> >
> >> >> >
> >> >> > -- Rich
> >> >> >
> >> >> > -----Original Message-----
> >> >> > From: conex-bounces@ietf.org [mailto:conex-bounces@ietf.org] On
> >> >> > Behalf
> >> >> > Of Kevin Mason
> >> >> > Sent: Tuesday, August 17, 2010 1:35 AM
> >> >> > To: conex@ietf.org
> >> >> > Subject: [conex] comments on draft-moncaster-conex-concepts-
> uses-
> >> >> 01.txt
> >> >> >
> >> >> > A comment on the draft.
> >> >> >
> >> >> > On accounting approaches to using congestion information
> >> >> >
> >> >> > I would challenge the view that another service provider
> "causes"
> >> >> > congestion on another ISP's network. Data is only passed
> between
> >> >> sender
> >> >> > and receiver so it is the ISP's customer that is requesting the
> >> >> > data from customers of the other provider, not the provider
> >> >> > "sending it".
> >> >> >
> >> >> > So if the "sending" provider is not causing the congestion
> (because
> >> >> it
> >> >> > is the receiving provider's customer that requested it) then
> arguing
> >> >> the
> >> >> > "sending" provider should pay for any congestion that resulted
> might
> >> >> be
> >> >> > difficult. I can see endless legal arguments as to why one or
> the
> >> >> other
> >> >> > party is culpable and therefore who should endure any
> commercial
> >> >> > consequences.
> >> >> >
> >> >> > On network uses of the information I think there is a general
> concept
> >> >> > that is not being captured well.
> >> >> >
> >> >> > In ISP networks there are, very simply, two parts. Firstly
> there is
> >> >> the
> >> >> > connectivity between each account holder's demarcation (UNI) and
> >> >> > an IP
> >> >> > edge device (BRAS/BNG in Broadband Forum speak). The IP edge
> device
> >> >> > typically facilitates AAA functions as well as user-based policy
> >> >> > enforcement and by necessity is fully aware of what flows belong
> >> >> > to what UNIs (e.g. because they are all on a single authenticated
> >> >> > VLAN or PPP tunnel).
> >> >> >
> >> >> > Beyond that point between the IP edge and any peering point,
> the
> >> >> network
> >> >> > does not maintain a specific awareness of individual end
> points.
> >> >> > Routers could theoretically maintain information about each
> >> >> > source/destination pair AND consult some database to relate
> that to a
> >> >> > end user profile, but this is not very scalable.
> >> >> >
> >> >> > So in the core ISP network, if a forwarding next hop is
> approaching
> >> >> > overload, then the egress config of the router must deal with
> the
> >> >> > aggregate flow and act on information that is in the packet
> header
> >> >> > alone. Maintaining knowledge of which "users" have "caused" the
> >> >> > most congestion in recent times is too hard.
> >> >> >
> >> >> > So it would appear that a possible scheme might be that a two
> stage
> >> >> > queue management regime might be desirable, whereby at a lower
> queue
> >> >> > size packets begin to be congestion marked if ECN capable and
> maybe
> >> >> > discarded if not, but at a slightly higher queue depth, packets
> >> >> > that are
> >> >> > ECN capable but with a high positive congestion value get
> discarded,
> >> >> on
> >> >> > the basis that these have a lower chance of reaching their
> >> >> destination
> >> >> > than packets not declaring an expectation of congestion on the
> rest
> >> >> of
> >> >> > the path. I don't see a "border monitor" as described in the
> draft
> >> >> being
> >> >> > very practical or useful in this part of the network.
> >> >> >
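A minimal sketch, in Python, of the two-stage regime Kevin describes just
above. The queue thresholds, the field names, and the way a packet carries
its ECN capability and declared congestion expectation are all assumptions
for illustration; no ConEx encoding has been defined.

```python
def two_stage_aqm(packet, queue_depth, mark_th=50, drop_th=80, high=10):
    """Two-stage queue management as sketched in the text above.

    Stage 2 (deeper queue): ECN-capable packets already declaring a high
    expectation of onward congestion are dropped, on the basis that they
    have a lower chance of reaching their destination anyway.
    Stage 1 (shallower queue): ECN-capable packets are congestion-marked,
    others are dropped.

    `packet` is a dict with illustrative keys 'ecn' (bool) and 'expect'
    (the declared expectation of congestion on the rest of the path).
    """
    if queue_depth >= drop_th and packet["ecn"] and packet["expect"] > high:
        return "drop"
    if queue_depth >= mark_th:
        return "mark" if packet["ecn"] else "drop"
    return "forward"
```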
> >> >> > When packets arrive at the BRAS destined for the user, then
> user
> >> >> based
> >> >> > policy could be applied. One such policy might be to discard
> all
> >> >> packets
> >> >> > with a congestion deficit of more than x. This is the safety
> net
> >> >> against
> >> >> > dishonesty by the sender. An additional policy might be to
> discard
> >> >> > packets that have experienced congestion above a threshold
> (which
> >> >> may
> >> >> > be different for different user profiles) so far AND that are
> >> >> destined
> >> >> > to a user that has a recent history of high congestion marked
> >> >> packets.
> >> >> > If previous congestion marks have resulted in the user backing
> off then
> >> >> > this policy would not be invoked, so it would only apply to
> users
> >> >> that
> >> >> > are persistently contributing to congestion somewhere on the
> path
> >> >> > traversed (on the same provider or any preceding providers
> network).
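Again purely as illustration, the per-user edge policies just described
might look like the following sketch; the 'deficit' computation, the field
names and all thresholds are assumptions of this sketch, not anything the
draft specifies.

```python
def bras_edge_policy(packet, user_marked_history,
                     deficit_limit=5, exp_th=3, history_th=100):
    """Per-user policy at the IP edge (BRAS/BNG) as described above.

    - Drop when the packet's congestion 'deficit' (congestion experienced
      minus congestion declared by the sender) exceeds deficit_limit: the
      safety net against dishonesty by the sender.
    - Drop when the packet has experienced congestion above exp_th AND the
      destination user has a recent history of many congestion-marked
      packets (i.e. has not backed off in response to earlier marks).
    """
    if packet["deficit"] > deficit_limit:
        return "drop"
    if packet["experienced"] > exp_th and user_marked_history > history_th:
        return "drop"
    return "deliver"
```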
> >> >> >
> >> >> > These policy actions could constitute the "edge monitor"
> functions
> >> >> > referred to in the draft but would actually be part of the
> policy
> >> >> > functions of the edge device itself, not any independent
> function.
> >> >> >
> >> >> > Others may have different views of how the revealed congestion
> >> >> information
> >> >> > might be used but I believe it is useful to at least consider
> the
> >> >> two
> >> >> > parts of an ISP network when discussing possible uses for the
> >> >> > information.
> >> >> >
> >> >> > Cheers
> >> >> > Kevin Mason
> >> >
> >


From marcelo@it.uc3m.es  Mon Aug 23 12:07:04 2010
Return-Path: <marcelo@it.uc3m.es>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id AA0F23A680D for <conex@core3.amsl.com>; Mon, 23 Aug 2010 12:07:04 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.599
X-Spam-Level: 
X-Spam-Status: No, score=-106.599 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id cqk8pDrBW67Z for <conex@core3.amsl.com>; Mon, 23 Aug 2010 12:07:03 -0700 (PDT)
Received: from smtp02.uc3m.es (smtp02.uc3m.es [163.117.176.132]) by core3.amsl.com (Postfix) with ESMTP id 659693A6AA8 for <conex@ietf.org>; Mon, 23 Aug 2010 12:06:02 -0700 (PDT)
X-uc3m-safe: yes
Received: from marcelo-bagnulos-macbook-pro.local (76.pool85-55-2.dynamic.orange.es [85.55.2.76]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by smtp02.uc3m.es (Postfix) with ESMTP id 5CE4370D507 for <conex@ietf.org>; Mon, 23 Aug 2010 21:06:33 +0200 (CEST)
Message-ID: <4C72C6A4.4000901@it.uc3m.es>
Date: Mon, 23 Aug 2010 21:06:12 +0200
From: marcelo bagnulo braun <marcelo@it.uc3m.es>
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; es-ES; rv:1.9.1.11) Gecko/20100711 Thunderbird/3.0.6
MIME-Version: 1.0
To: conex@ietf.org
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
X-TM-AS-Product-Ver: IMSS-7.0.0.3116-6.0.0.1038-17590.000
Subject: [conex] minutes...
X-BeenThere: conex@ietf.org
X-Mailman-Version: 2.1.9
Precedence: list
List-Id: Congestion Exposure working group discussion list <conex.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/conex>
List-Post: <mailto:conex@ietf.org>
List-Help: <mailto:conex-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/conex>, <mailto:conex-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 23 Aug 2010 19:07:04 -0000

Hi,

could the minute taker from the last meeting send us the minutes? We 
need to post them soon.

Thanks, marcelo


From prvs=48524BE385=Kevin.Mason@telecom.co.nz  Tue Aug 24 16:29:08 2010
Return-Path: <prvs=48524BE385=Kevin.Mason@telecom.co.nz>
X-Original-To: conex@core3.amsl.com
Delivered-To: conex@core3.amsl.com
Received: from localhost (localhost [127.0.0.1]) by core3.amsl.com (Postfix) with ESMTP id 42B9E3A6A7A for <conex@core3.amsl.com>; Tue, 24 Aug 2010 16:29:08 -0700 (PDT)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -1.375
X-Spam-Level: 
X-Spam-Status: No, score=-1.375 tagged_above=-999 required=5 tests=[AWL=0.365,  BAYES_20=-0.74, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([64.170.98.32]) by localhost (core3.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id cO3hLZW86d+S for <conex@core3.amsl.com>; Tue, 24 Aug 2010 16:29:06 -0700 (PDT)
Received: from mgate2.telecom.co.nz (envoy-out.telecom.co.nz [146.171.15.100]) by core3.amsl.com (Postfix) with ESMTP id 960C03A6A2E for <conex@ietf.org>; Tue, 24 Aug 2010 16:29:06 -0700 (PDT)
Received: from mgate5.telecom.co.nz (unknown [146.171.1.21]) by mgate2.telecom.co.nz (Tumbleweed MailGate 3.7.1) with ESMTP id 213002C3B104; Wed, 25 Aug 2010 11:29:33 +1200 (NZST)
X-WSS-ID: 0L7OJXC-08-0LP-02
X-M-MSG: 
Received: from hp2848.telecom.tcnz.net (hp2848.telecom.tcnz.net [146.171.228.250]) (using TLSv1 with cipher RC4-MD5 (128/128 bits)) (No client certificate requested) by mgate5.telecom.co.nz (Postfix) with ESMTP id 1058764A872D; Wed, 25 Aug 2010 11:29:36 +1200 (NZST)
Received: from hp3119.telecom.tcnz.net (146.171.212.204) by hp2848.telecom.tcnz.net (146.171.228.250) with Microsoft SMTP Server (TLS) id 8.2.234.1; Wed, 25 Aug 2010 11:29:36 +1200
Received: from WNEXMBX01.telecom.tcnz.net ([146.171.212.201]) by hp3119.telecom.tcnz.net ([146.171.212.204]) with mapi; Wed, 25 Aug 2010 11:29:36 +1200
From: Kevin Mason <Kevin.Mason@telecom.co.nz>
To: John Leslie <john@jlc.net>, Christopher Morrow <morrowc.lists@gmail.com>
Date: Wed, 25 Aug 2010 11:29:35 +1200
Thread-Topic: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
Thread-Index: ActCf2/aeQ3QhIotQGuZPi+ARPABkwAypTmg
Message-ID: <563C162F43D1B14E9FD2BC0A776C1E9127EFCA5F29@WNEXMBX01.telecom.tcnz.net>
References: <20100812123814.GF16820@verdi> <001b01cb3a2b$80f47420$82dd5c60$@com> <alpine.DEB.1.10.1008130933210.8562@uplift.swm.pp.se> <563C162F43D1B14E9FD2BC0A776C1E9127EF2857D3@WNEXMBX01.telecom.tcnz.net> <EE00404438E9444D90AEA84210DC4067019325EA@pacdcexcmb05.cable.comcast.com> <563C162F43D1B14E9FD2BC0A776C1E9127EF3A66C7@WNEXMBX01.telecom.tcnz.net> <001201cb3eb8$9992cb30$ccb86190$@com> <AANLkTi=24o44ACGFgN2N4_xt+Bo1rydC6gvR_et-8XVG@mail.gmail.com> <563C162F43D1B14E9FD2BC0A776C1E9127EF929B5E@WNEXMBX01.telecom.tcnz.net> <AANLkTi=LQkCEFOfOsED9ix4hZqvx=CUT23A+zpiwRdcA@mail.gmail.com> <20100823045529.GZ16820@verdi>
In-Reply-To: <20100823045529.GZ16820@verdi>
Cc: "conex@ietf.org" <conex@ietf.org>
Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt

Cheers
Kevin Mason
> -----Original Message-----
> From: John Leslie [mailto:john@jlc.net]
> Sent: Monday, 23 August 2010 4:55 p.m.
> To: Christopher Morrow
> Cc: Kevin Mason; conex@ietf.org
> Subject: Re: [conex] comments on draft-moncaster-conex-concepts-uses-01.txt
>
> Christopher Morrow <morrowc.lists@gmail.com> wrote:
> > Kevin Mason <Kevin.Mason@telecom.co.nz> wrote:
> >> Christopher Morrow <morrowc.lists@gmail.com> wrote:
> >>>> Kevin Mason <Kevin.Mason@telecom.co.nz> wrote:
> >>>>
> >>>>> To my mind the power of Conex is to provide a forwarding device a
> >>>>> richer set of information to make better decisions on how to manage
> >>>>> their queues for the greater good.
>
>    This is on the vague side...
>
> >>> perhaps, though we do this today with QOS markings, or by
> >>> classification of traffic based on 5-tuple info that is in turn
> >>> converted to QOS markings and actions. Adding another few bits to
> >>> watch for classification marking seems fine.
> >>
> >> QoS marking cannot tell the router the likelihood of any packet
> >> getting to its ultimate destination.
>
>    Agreed.
>
> >> So for packets with the same classification (marking) if the capacity
> >> for that "class" is approaching overload, favouring flows that are
> >> not expecting onward congestion has the potential to improve outcomes
> >> for all.
>
>    I don't follow the logic here...
[Kevin Mason]
Let me try to explain again.

A router in the core backbone is not practically aware of which packets belong to which ultimate end users, what commercial plan the destination user is on (i.e. what they are fairly entitled to), or what the recent activity history of that user has been.

If a core router is making a forwarding decision and the queue for that class is filling, then it makes some queue management decisions, e.g. RED. RED in essence sends a message to the user host that capacity is approaching overload somewhere on the path and that congestion avoidance action should be taken.

With ECN-capable endpoints the same overload onset indication can be sent by marking, with hopefully the same or similar end user host response.
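As a rough illustration of the RED/ECN behaviour being described, here is a minimal sketch. The thresholds, the maximum probability, and the function names are hypothetical, not from any particular router implementation:

```python
import random

# Hypothetical RED-style AQM sketch: between MIN_TH and MAX_TH the
# drop/mark probability ramps linearly up to MAX_P; above MAX_TH every
# packet is dropped. ECN-capable packets are marked (CE) rather than
# dropped, which is the "same overload onset indication" sent by marking.
MIN_TH, MAX_TH, MAX_P = 20, 80, 0.1  # queue lengths in packets (illustrative)

def red_decision(avg_queue, ecn_capable, rand=random.random):
    """Return 'forward', 'mark' or 'drop' for one arriving packet."""
    if avg_queue < MIN_TH:
        return "forward"
    if avg_queue >= MAX_TH:
        return "drop"  # hard limit reached
    # Linear ramp of the early-notification probability.
    p = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
    if rand() < p:
        return "mark" if ecn_capable else "drop"
    return "forward"
```

The `rand` parameter is injectable only so the sketch can be exercised deterministically; a real AQM would also smooth the queue length (EWMA) rather than use the instantaneous value.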


Now with CONEX an additional dimension might be added to the queue management logic. The challenge is what could be done at this point to use the CONEX information to improve overall capacity sharing.

My assumption is that as the queue fills, the first RED queue threshold might be to:
- begin randomly discarding non-ECN traffic
- begin randomly marking ECN traffic that is not CONEX enabled

The mark probability for CONEX traffic might be a function of the predicted congestion, so traffic that is predicting high onward congestion would have a higher mark probability than traffic with a lower forward prediction.

The logic for CONEX behaviour is that if a packet is predicting further congestion, then invoking a congestion response early may help relieve capacity overload at more than one point in the path.
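The tiered behaviour proposed above could be sketched as follows. All names, the base probability, and the scaling rule are assumptions for illustration only, not anything specified by ConEx:

```python
import random

# Sketch of the tiered queue-management logic described above: past the
# first RED threshold, non-ECN traffic is randomly dropped, ECN traffic
# that is not ConEx-enabled is randomly marked, and ConEx traffic is
# marked with a probability scaled up by the congestion it predicts
# (higher onward prediction -> higher mark probability).
BASE_P = 0.05  # illustrative base drop/mark probability at this threshold

def conex_aware_action(pkt, rand=random.random):
    """pkt: dict with 'ecn' (bool), 'conex' (bool) and, for ConEx
    packets, 'predicted_congestion' in [0, 1]."""
    if not pkt["ecn"]:
        return "drop" if rand() < BASE_P else "forward"
    if not pkt.get("conex"):
        return "mark" if rand() < BASE_P else "forward"
    # ConEx-enabled: scale the mark probability by the packet's prediction.
    p = BASE_P * (1.0 + pkt["predicted_congestion"])
    return "mark" if rand() < p else "forward"
```

The scaling function (here linear in the prediction) is the open design question the paragraph above raises; the sketch only shows where such a function would plug into the RED decision.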

When a packet eventually arrives at a user-endpoint-aware policy point (e.g. a BRAS), packets that have a congestion prediction deficit might be subject to random discard if they are destined for a user that has exceeded a user-policy-based volume and/or "congestion experienced" threshold. This might discourage persistent under-prediction and also try to invoke a rate reduction response for those longer duration flows traversing paths on which congestion is persistent and/or rising.

For user endpoints with no recent history (i.e. those below a "congestion experienced" or volume threshold), no action may be taken. An alternative might be to "unmark" the ECN indication so that no congestion response would be invoked and this flow could continue to enjoy wire-speed performance. Unmarking may also help flows in start-up that initially predict high forward congestion (and that may have been marked in a core router queue), as they would not be penalised (a congestion response would not be invoked at the host) if they were from an itinerant user.
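Putting the two policy-point paragraphs together, the decision at the BRAS might look like the sketch below. The threshold, the discard probability, and the idea of folding volume and "congestion experienced" into one per-user counter are all hypothetical:

```python
import random

# Sketch of the user-endpoint-aware policy point (e.g. a BRAS) described
# above. A per-user account tracks congestion-experienced traffic; packets
# showing a congestion prediction deficit (predicted less congestion than
# was experienced) are candidates for random discard, but only for users
# over their policy threshold. Users below it get the gentler alternative:
# the ECN mark is removed ("unmark") so no congestion response is invoked.
CE_THRESHOLD = 1_000_000  # bytes of "congestion experienced" (illustrative)

def policy_action(user_ce_bytes, prediction_deficit, rand=random.random):
    """Decide the fate of one marked packet at the policy point."""
    if prediction_deficit <= 0:
        return "forward"   # user predicted honestly (or over-predicted)
    if user_ce_bytes <= CE_THRESHOLD:
        return "unmark"    # itinerant/light user: cancel the congestion mark
    # Persistent under-predictor over threshold: random discard to
    # discourage under-prediction and force a rate reduction.
    return "drop" if rand() < 0.25 else "forward"
```

Note the sketch treats the deficit per packet; the draft discussion leaves open whether the deficit would really be accounted per packet, per flow, or per user over time.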


>
>    If you mean that a "provisioned bit rate" flow that is "predicting
> congestion" is in danger of failing, I can agree. But you aren't limiting
> it to that, AFAICT.

No, I am not considering guaranteed bit rate flows here.
>
>    At first blush, I can't see why favoring a different flow _could_
> improve outcomes for a flow "predicting congestion". And that doesn't
> depend on whether we're in a "sender pays" paradigm, a "receiver pays"
> paradigm, or a "nobody pays" paradigm.
>
>    In a "provisioned bit rate" case, terminating _any_ flow when the
> aggregate bit-rate exceeds capacity _will_ necessarily improve outcomes
> for the remaining flows. But I don't follow why a flow predicting
> congestion is a "better" candidate for termination.
>
>    Unless we know we're at the final bottleneck (shared by all flows in
> that "class"), "predicted congestion" in one flow doesn't tell us what
> actual congestion other flows may experience. And if we are at the final
> bottleneck, expediting the flow that _does_ predict congestion would
> seem to improve the speed with which it can respond to the actual
> congestion (by receiving a "congestion experienced" marking rather than
> waiting for a timeout).
>
>    Possibly, of course, I'm guessing wrong what you mean by "favouring
> flows"...
>
> >> If this is not a practical use of Conex then I am at a loss as to what
> >> else it would be useful for.
>
>    ConEx clearly can be useful for distinguishing "less than best effort"
> flows from "interactive" flows, and using that information to target
> efforts at bandwidth upgrades. LBE flows don't call for bandwidth
> upgrades; interactive flows do.

I don't see how this can be done by CONEX; traffic class (priority) is not conveyed by CONEX.
>
> > right, qos I was using as a method to trigger a different forwarding
> > decision (forwarding and/or marking really). Adding conex info to what
> > makes this happen (as I said) seems fine.
>
>    Again, on the vague side...
>
>    If you have a queue at a forwarding point, you _could_ adjust
> priorities to forward some traffic more quickly or more slowly; or
> you could set probability-of-drop higher or lower in a RED algorithm.
> I'm not sure what you have in mind here...
>
> > "Traffic to the destination port 77 + protocol 85 + source-address 12
> > has been marked by the conex mechanism, so potentially will get
> > dropped downstream, if you have the same profile traffic headed to
> > port 78 maybe prioritize that higher."
>
>    Obviously I can't prevent folks from applying such an algorithm,
> but it seems ill-advised to me.
>
>    We really don't know what some downstream forwarder will do in the
> event of congestion; and I don't expect to try to standardize that in
> a ConEx protocol. But were I designing the forwarding algorithms, I'd
> try to deliver ConEx traffic which predicts more congestion than I
> know to exist _more_quickly_

[Kevin Mason]
How does a router know what congestion actually exists beyond its egress interface? CONEX predictions across multiple packets might give some clues, but the router does not inherently know what distant path any packet will traverse, so a lower prediction on one packet does not mean that the current packet being forwarded with a higher prediction value will actually traverse exactly the same path to its destination (and can be assumed to be actually over-predicting).

> than non-ConEx traffic, in hopes that
> any "congestion experienced" markings will cause backoff more quickly.
>
>    Recall that the ConEx mechanism _will_ (somehow) contain both
> "congestion predicted" and "congestion experienced so far".
>
>    Also recall that a newly-starting interactive flow has good reason
> to estimate high on "predicted congestion" until path-congestion
> information returns to the sender.
>
>    Thus, dropping traffic which estimates high, simply _because_ it
> is ConEx aware, seems unlikely to help.
>
>    (YMMV, no doubt...)
>
> --
> John Leslie <john@jlc.net>

From nanditad@google.com  Wed Aug 25 07:23:36 2010
Date: Wed, 25 Aug 2010 19:54:02 +0530
Message-ID: <AANLkTinh27U+=ZH2_HbSEoGg9vrfpaZ=LePFo3CiPsmf@mail.gmail.com>
From: Nandita Dukkipati <nanditad@google.com>
To: conex@ietf.org
Subject: [conex] Conex WG meeting minutes (27 July, 2010)


Hi Everyone,

Please find the Conex meeting minutes at this link:
http://www.ietf.org/proceedings/78/minutes/conex.txt
The minutes were recorded by Matthew Ford (thanks, Matthew!).

Please email me any corrections to the notes.

Thanks,
-Nandita

