
From david.black@emc.com  Sun Jan  1 11:10:36 2012
Return-Path: <david.black@emc.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 0239D1F0C36 for <dc@ietfa.amsl.com>; Sun,  1 Jan 2012 11:10:36 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.583
X-Spam-Level: 
X-Spam-Status: No, score=-106.583 tagged_above=-999 required=5 tests=[AWL=0.016, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id g4gikkphnS6F for <dc@ietfa.amsl.com>; Sun,  1 Jan 2012 11:10:35 -0800 (PST)
Received: from mexforward.lss.emc.com (mexforward.lss.emc.com [128.222.32.20]) by ietfa.amsl.com (Postfix) with ESMTP id 4A4D71F0C38 for <dc@ietf.org>; Sun,  1 Jan 2012 11:10:34 -0800 (PST)
Received: from hop04-l1d11-si01.isus.emc.com (HOP04-L1D11-SI01.isus.emc.com [10.254.111.54]) by mexforward.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q01JAQLG027612 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Sun, 1 Jan 2012 14:10:26 -0500
Received: from mailhub.lss.emc.com (mailhub.lss.emc.com [10.254.221.145]) by hop04-l1d11-si01.isus.emc.com (RSA Interceptor); Sun, 1 Jan 2012 14:10:10 -0500
Received: from mxhub04.corp.emc.com (mxhub04.corp.emc.com [10.254.141.106]) by mailhub.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q01JAAVC012067; Sun, 1 Jan 2012 14:10:10 -0500
Received: from mx14a.corp.emc.com ([169.254.1.216]) by mxhub04.corp.emc.com ([10.254.141.106]) with mapi; Sun, 1 Jan 2012 14:10:09 -0500
From: <david.black@emc.com>
To: <russw@riw.us>, <dc@ietf.org>
Date: Sun, 1 Jan 2012 14:10:08 -0500
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczGRpJvomHShY+MR56d9MdLrJSTVACcc6/A
Message-ID: <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com> <618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com> <4EFC947A.4020007@riw.us>
In-Reply-To: <4EFC947A.4020007@riw.us>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-EMM-MHVC: 1
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sun, 01 Jan 2012 19:10:36 -0000

> > That means that if you have to protect data, you have to know somehow
> > which host is allowed to see which disks. Doing this end-to-end using
> > TCP/IP is problematic because the NAS controller doesn't know about
> > tenant id, and whether the MAC and IP are being duplicated across
> > customers. To make the storage aware of segmentation, you have to carry
> > some segment (GRE, MPLS, VLAN, etc.) into the storage box.
> > That's not true today.
>
> Maybe I'm dense, but wouldn't it be just as easy to add the capability
> to differentiate based on IP address as it would be to design the entire
> network and all protocols around what the current software capabilities
> on the hypervisor are?

As previously noted, Ashish's original statement about what storage boxes
cannot do was just plain wrong, as there's plenty of VLAN support on current
storage boxes.  FWIW, differentiation based on IP address is not difficult,
although one has to think carefully about the scope of the IP address:
differentiating on source IP address assumes that the source IP address
isn't forged (VLANs are easier to work with than IP blocks for this
concern).
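FWIW, the check being described can be sketched in a few lines; the tenant
table, prefixes, and function name below are invented for illustration, and
the point of the caveat is that such a check is only as strong as the
network's anti-spoofing controls (per-tenant VLANs, uRPF, etc.):

```python
import ipaddress

# Hypothetical tenant table: which source prefixes may reach a given export.
TENANT_PREFIXES = {
    "tenant-a": ipaddress.ip_network("10.1.0.0/16"),
    "tenant-b": ipaddress.ip_network("10.2.0.0/16"),
}

def may_access(src_ip: str, tenant: str) -> bool:
    """Differentiate on source IP alone -- only safe if source-address
    spoofing is prevented upstream of the storage box."""
    prefix = TENANT_PREFIXES.get(tenant)
    return prefix is not None and ipaddress.ip_address(src_ip) in prefix

print(may_access("10.1.4.7", "tenant-a"))  # True: inside tenant-a's prefix
print(may_access("10.1.4.7", "tenant-b"))  # False: foreign (or forged) source
```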

Thanks,
--David
----------------------------------------------------
David L. Black, Distinguished Engineer
EMC Corporation, 176 South St., Hopkinton, MA 01748
+1 (508) 293-7953             FAX: +1 (508) 293-7786
david.black@emc.com        Mobile: +1 (978) 394-7754
----------------------------------------------------



From adalela@cisco.com  Sun Jan  1 19:58:34 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 9FC261F0C38 for <dc@ietfa.amsl.com>; Sun,  1 Jan 2012 19:58:34 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.242
X-Spam-Level: 
X-Spam-Status: No, score=-2.242 tagged_above=-999 required=5 tests=[AWL=0.357,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id xTuCNaESjKDz for <dc@ietfa.amsl.com>; Sun,  1 Jan 2012 19:58:34 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id 7E6C01F0C36 for <dc@ietf.org>; Sun,  1 Jan 2012 19:58:33 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=2912; q=dns/txt; s=iport; t=1325476713; x=1326686313; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to; bh=B6iBE5E5KUPCI22BRIlngvhMU+828EeiZbtfM1xS098=; b=KBXhCOHOpmYppTgS4v/NmCEFNLXx7yUtuM9TasPnDexFTVyqInVPz243 bFPoyCm0GKCdiw9c76t9QICk3rM5m/MT1TnEDeMu7nkNFUL1v7AeMYLQA JL0CugWlu+TfYVkzI11kCC2zS02A4xmWHU1vHYtgd3F1crf3pAII0kaEw A=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: Ao8gAAUqAU9Io8UY/2dsb2JhbAA6BwOCBatXgXIBAQEDAQEBAQ8BHT4XBAIBCBEEAQELBhcBBgEmHwkIAQEEAQoICBMHh1gIlioBnTeIVhqCPGMEiAQxnwk
X-IronPort-AV: E=Sophos;i="4.71,442,1320624000";  d="scan'208";a="2610042"
Received: from vla196-nat.cisco.com (HELO bgl-core-3.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 02 Jan 2012 03:58:32 +0000
Received: from xbh-bgl-411.cisco.com (xbh-bgl-411.cisco.com [72.163.129.201]) by bgl-core-3.cisco.com (8.14.3/8.14.3) with ESMTP id q023wVpA019793; Mon, 2 Jan 2012 03:58:31 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-411.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Mon, 2 Jan 2012 09:28:31 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Date: Mon, 2 Jan 2012 09:28:32 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com>
In-Reply-To: <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczGRpJvomHShY+MR56d9MdLrJSTVACcc6/AAAOlMaA=
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: <david.black@emc.com>, <russw@riw.us>, <dc@ietf.org>
X-OriginalArrivalTime: 02 Jan 2012 03:58:31.0636 (UTC) FILETIME=[CACB9940:01CCC902]
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 02 Jan 2012 03:58:34 -0000

David,

On an L3 network, VLANs will help you contain broadcasts but will not
provide enough security. E.g., if a tenant has more than one VLAN, then you
can route across the VLANs, and now you need IP to identify the tenant.
This opens up issues like IP duplication.

VLANs will work if one customer equals one VLAN. The other thing I find
interesting is that this conversation happened in the context of L3 vs. L2,
and you mentioned that TCP/IP can be used for storage as a way to avoid L2.
But now we need VLANs for segmentation. And when you do DCI, you need MAC
over IP (or EoMPLS or something else), but not IP over IP (the entire
context of this discussion).

http://tools.ietf.org/html/draft-dalela-dc-approaches-00 analyzes many of
these issues collectively. I believe the goal of this group should be to
take the problem set as a whole - and we know there are many possible
approaches that already exist.

Thanks, Ashish

-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
david.black@emc.com
Sent: Monday, January 02, 2012 12:40 AM
To: russw@riw.us; dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect

> > That means that if you have to protect data, you have to know somehow
> > which host is allowed to see which disks. Doing this end-to-end using
> > TCP/IP is problematic because the NAS controller doesn't know about
> > tenant id, and whether the MAC and IP are being duplicated across
> > customers. To make the storage aware of segmentation, you have to carry
> > some segment (GRE, MPLS, VLAN, etc.) into the storage box.
> > That's not true today.
>
> Maybe I'm dense, but wouldn't it be just as easy to add the capability
> to differentiate based on IP address as it would be to design the entire
> network and all protocols around what the current software capabilities
> on the hypervisor are?

As previously noted, Ashish's original statement about what storage boxes
cannot do was just plain wrong, as there's plenty of VLAN support on current
storage boxes.  FWIW, differentiation based on IP address is not difficult,
although one has to think carefully about scope of IP address as
differentiating on source IP address assumes that the source IP address
isn't forged (VLANs are easier to work with than IP blocks for this
concern).

Thanks,
--David
----------------------------------------------------
David L. Black, Distinguished Engineer
EMC Corporation, 176 South St., Hopkinton, MA 01748
+1 (508) 293-7953             FAX: +1 (508) 293-7786
david.black@emc.com        Mobile: +1 (978) 394-7754
----------------------------------------------------


_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From pedro.r.marques@gmail.com  Sun Jan  1 20:55:49 2012
Return-Path: <pedro.r.marques@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 18CC411E8099 for <dc@ietfa.amsl.com>; Sun,  1 Jan 2012 20:55:49 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.205
X-Spam-Level: 
X-Spam-Status: No, score=-2.205 tagged_above=-999 required=5 tests=[AWL=1.395,  BAYES_00=-2.599, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id Z0uQR40idc4D for <dc@ietfa.amsl.com>; Sun,  1 Jan 2012 20:55:48 -0800 (PST)
Received: from mail-iy0-f172.google.com (mail-iy0-f172.google.com [209.85.210.172]) by ietfa.amsl.com (Postfix) with ESMTP id 2A90811E8098 for <dc@ietf.org>; Sun,  1 Jan 2012 20:55:48 -0800 (PST)
Received: by iabz21 with SMTP id z21so7238860iab.31 for <dc@ietf.org>; Sun, 01 Jan 2012 20:55:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=Tko+0SZqJkzsgdKmKQqCr4KQRkjZSfPcjYJgVnU4Eao=; b=genb6Ulu2GIz1vREAYm77HlLt7PNYjHfEZwQ0TaKTHkaCJwqypHiqNTrKafkZ+Y/hr 47JU7pdfuXKO1EoNREDGr//K9abwluMEXuM86ZC+7xfAfjvqMde0BRP45nITlbLNS2xq F6ou3RYeUQCJMvV6+QxYD1/LiKmug1F1hMwF4=
MIME-Version: 1.0
Received: by 10.50.15.161 with SMTP id y1mr66029775igc.4.1325480147759; Sun, 01 Jan 2012 20:55:47 -0800 (PST)
Received: by 10.231.201.82 with HTTP; Sun, 1 Jan 2012 20:55:47 -0800 (PST)
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com> <618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com> <4EFC947A.4020007@riw.us> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com>
Date: Sun, 1 Jan 2012 20:55:47 -0800
Message-ID: <CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com>
From: Pedro Marques <pedro.r.marques@gmail.com>
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
Content-Type: text/plain; charset=ISO-8859-1
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 02 Jan 2012 04:55:49 -0000

Ashish,
Please see inline.

On Sun, Jan 1, 2012 at 7:58 PM, Ashish Dalela (adalela)
<adalela@cisco.com> wrote:
>
> David,
>
> On an L3 network, VLANs will help you contain broadcasts but will not provide enough security. E.g., if a tenant has more than one VLAN, then you can route across the VLANs, and now you need IP to identify the tenant. This opens up issues like IP duplication.
>
> VLANs will work if one customer equals one VLAN.

Clearly not a useful assumption. A customer must be able to use multiple
'segments'/'security groups' and have traffic be exchanged among them.

As for the 'issues like IP duplication': it would be preferable to assume
in this conversation basic security features such as the verification of
the inner and outer addresses, whether this is IP in MAC or IP in IP.
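One concrete reading of "verification of the inner and outer addresses" is
a decapsulation-time check that the inner source was actually registered
for the outer endpoint. The binding table and names below are hypothetical;
this is a sketch of the idea, not any particular product's behavior:

```python
# Hypothetical binding table learned from orchestration: which inner
# (tenant) source addresses each outer (hypervisor) endpoint may emit.
ALLOWED_INNER = {
    "192.0.2.10": {"10.1.1.5", "10.1.1.6"},   # hypervisor A
    "192.0.2.11": {"10.1.2.5"},               # hypervisor B
}

def accept(outer_src: str, inner_src: str) -> bool:
    """Drop encapsulated packets whose inner source is not bound to the
    outer endpoint -- the same check whether the encapsulation is
    IP-in-MAC or IP-in-IP."""
    return inner_src in ALLOWED_INNER.get(outer_src, set())

print(accept("192.0.2.10", "10.1.1.5"))  # True: registered binding
print(accept("192.0.2.11", "10.1.1.5"))  # False: inner address spoofed
```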

> The other thing I find interesting is that this conversation happened in the context of L3 vs. L2, and you mentioned that TCP/IP can be used for storage as a way to avoid L2. But now we need VLANs for segmentation.

Why do you need VLAN segmentation? And exactly what is the scope of the
VLANs that you need for segmentation? Interface scope or data-center scope?

  Pedro.

From adalela@cisco.com  Sun Jan  1 21:25:19 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id DBFE211E8099 for <dc@ietfa.amsl.com>; Sun,  1 Jan 2012 21:25:19 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.268
X-Spam-Level: 
X-Spam-Status: No, score=-2.268 tagged_above=-999 required=5 tests=[AWL=0.331,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id am59NgcaG+8b for <dc@ietfa.amsl.com>; Sun,  1 Jan 2012 21:25:19 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id 88DB211E808A for <dc@ietf.org>; Sun,  1 Jan 2012 21:25:18 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=2370; q=dns/txt; s=iport; t=1325481918; x=1326691518; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=zAGavRYR3IsCAoIBVkW0g7J37cdYf9UVWs+Di50YvhI=; b=I1CFurn6zM9lzFTNJdASbPlNiKMQuVW6Vv++Te9CMr6NhF7OrfUrLZyk YkrD1T1+31W5o3kJPGraw+FuZrryFa78niFtwkgO/+uAqmV9yDX54o0RI g8tZP+r86N/cOek7PR6ynJCMKIGT7GhE0u2fNeQBkhsossfsZuOVpgBDS w=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: Ao8gAEA/AU9Io8UY/2dsb2JhbAA6CoIFq1eBcgEBAQMBEgEdCisUBQcEAgEIEQQBAQEKBgUSAQYBICUJCAEBBAsICBMHh1iWQAGdLYhWE4JDYwSINZc8h00
X-IronPort-AV: E=Sophos;i="4.71,443,1320624000";  d="scan'208";a="2613797"
Received: from vla196-nat.cisco.com (HELO bgl-core-2.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 02 Jan 2012 05:25:17 +0000
Received: from xbh-bgl-411.cisco.com (xbh-bgl-411.cisco.com [72.163.129.201]) by bgl-core-2.cisco.com (8.14.3/8.14.3) with ESMTP id q025PG6I029172; Mon, 2 Jan 2012 05:25:17 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-411.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Mon, 2 Jan 2012 10:55:16 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Mon, 2 Jan 2012 10:55:26 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com>
In-Reply-To: <CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczJCsyIPwDTsGPsQKShVpFKWO9stQAA0ZoA
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us><7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com><618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com> <CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Pedro Marques" <pedro.r.marques@gmail.com>
X-OriginalArrivalTime: 02 Jan 2012 05:25:16.0859 (UTC) FILETIME=[E95998B0:01CCC90E]
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 02 Jan 2012 05:25:20 -0000

Pedro,

>> It would be preferable to assume in this conversation basic security
features such as the verification of the inner and outer addresses,
whether this is IP in MAC or IP in IP.

That's another very important issue: all network services (firewalls, load
balancers, WAN optimizers, ...) need to take into account the change in the
header structure (mentioned in the "approaches" draft). Changes to these
structures break network services, so if there is no generally agreed-upon
way of doing this, then there are risks. Also, how does a service know how
many headers to look at?
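The "how many headers" question can be made concrete with the loop a
middlebox would need just to find the innermost IPv4 header of an IP-in-IP
packet. The protocol numbers are IANA-assigned; everything else here is an
illustrative sketch, including the depth guard that stands in for the
missing agreement on how deep encapsulation may go:

```python
IPPROTO_IPIP = 4   # IANA protocol number for IPv4-in-IPv4
IPPROTO_TCP = 6

def innermost_ipv4(packet: bytes, max_depth: int = 8) -> bytes:
    """Peel IPv4-in-IPv4 layers until a non-IPIP payload is reached.
    A firewall or load balancer has no a-priori bound on the depth,
    which is exactly the problem raised above -- hence the guard."""
    for _ in range(max_depth):
        ihl = (packet[0] & 0x0F) * 4   # IHL field: header length in bytes
        if packet[9] != IPPROTO_IPIP:  # protocol field of this header
            return packet              # innermost header reached
        packet = packet[ihl:]          # strip one encapsulation layer
    raise ValueError("too many encapsulation layers")

def mk_hdr(proto: int) -> bytes:
    """Minimal 20-byte IPv4 header: version/IHL byte plus protocol field."""
    return bytes([0x45, 0]) + b"\x00" * 7 + bytes([proto]) + b"\x00" * 10

inner = mk_hdr(IPPROTO_TCP)
packet = mk_hdr(IPPROTO_IPIP) + inner   # one level of IP-in-IP
assert innermost_ipv4(packet) == inner
```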

>> Why do you need VLAN segmentation? And exactly what is the scope of the
VLANs that you need for segmentation? Interface scope or data-center
scope?

I believe this is data-center scope. Most of the reasons are historical:
broadcast discovery (including ARP resolution) and simpler configuration.

Thanks, Ashish


-----Original Message-----
From: Pedro Marques [mailto:pedro.r.marques@gmail.com]
Sent: Monday, January 02, 2012 10:26 AM
To: Ashish Dalela (adalela)
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center
interconnect

Ashish,
Please see inline.

On Sun, Jan 1, 2012 at 7:58 PM, Ashish Dalela (adalela)
<adalela@cisco.com> wrote:
>
> David,
>
> On an L3 network, VLANs will help you contain broadcasts but will not
provide enough security. E.g., if a tenant has more than one VLAN, then you
can route across the VLANs, and now you need IP to identify the tenant.
This opens up issues like IP duplication.
>
> VLANs will work if one customer equals one VLAN.

Clearly not a useful assumption. A customer must be able to use multiple
'segments'/'security groups' and have traffic be exchanged among them.

As for the 'issues like IP duplication': it would be preferable to assume
in this conversation basic security features such as the verification of
the inner and outer addresses, whether this is IP in MAC or IP in IP.

> The other thing I find interesting is that this conversation happened
in the context of L3 vs. L2, and you mentioned that TCP/IP can be used
for storage as a way to avoid L2. But now we need VLANs for segmentation.

Why do you need VLAN segmentation? And exactly what is the scope of the
VLANs that you need for segmentation? Interface scope or data-center scope?

  Pedro.

From aldrin.isaac@gmail.com  Mon Jan  2 08:23:22 2012
Return-Path: <aldrin.isaac@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id B93E421F84D4 for <dc@ietfa.amsl.com>; Mon,  2 Jan 2012 08:23:22 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.82
X-Spam-Level: 
X-Spam-Status: No, score=-2.82 tagged_above=-999 required=5 tests=[AWL=0.178,  BAYES_00=-2.599, HTML_MESSAGE=0.001, J_CHICKENPOX_31=0.6, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id NCRdMp3CHM-P for <dc@ietfa.amsl.com>; Mon,  2 Jan 2012 08:23:21 -0800 (PST)
Received: from mail-qw0-f51.google.com (mail-qw0-f51.google.com [209.85.216.51]) by ietfa.amsl.com (Postfix) with ESMTP id 8CDF721F84D5 for <dc@ietf.org>; Mon,  2 Jan 2012 08:23:21 -0800 (PST)
Received: by qadz3 with SMTP id z3so10206122qad.10 for <dc@ietf.org>; Mon, 02 Jan 2012 08:23:21 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=subject:mime-version:content-type:from:in-reply-to:date:cc :message-id:references:to:x-mailer; bh=kF4EBJQnCoeKuO0YSg2AbcV2DnNv929KutGyQUv3RPs=; b=YaE/pq6iZFVFYGLQ2KV7FZU/W1YlruvxuC3+dVtbgM/KVq7inHNREeT6Ahg7uGTqUY RhBFPhzXlp5HDMXDQ8+2q4IRMJTfXqFEuobMAeVX/aqvlT/3G3feZGp7WG8wfiettrsA 9RSdoC5dV7kIFJ286vO1Hh2b6jlknPUOEELVs=
Received: by 10.224.182.10 with SMTP id ca10mr52599804qab.1.1325521401026; Mon, 02 Jan 2012 08:23:21 -0800 (PST)
Received: from mymac.home (ool-44c1c730.dyn.optonline.net. [68.193.199.48]) by mx.google.com with ESMTPS id m20sm93789231qaj.14.2012.01.02.08.23.18 (version=TLSv1/SSLv3 cipher=OTHER); Mon, 02 Jan 2012 08:23:19 -0800 (PST)
Mime-Version: 1.0 (Apple Message framework v1251.1)
Content-Type: multipart/alternative; boundary="Apple-Mail=_47E8D238-820D-4780-B3AD-D12FCD468F6B"
From: Aldrin Isaac <aldrin.isaac@gmail.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B25310@XMB-BGL-416.cisco.com>
Date: Mon, 2 Jan 2012 11:23:17 -0500
Message-Id: <1ED01508-C24B-455A-90EA-FC9AE59FB020@gmail.com>
References: <618BE8B40039924EB9AED233D4A09C5102B2527A@XMB-BGL-416.cisco.com> <D96F76EF-0011-4F33-A1CF-EC9AD12BA411@gmail.com> <618BE8B40039924EB9AED233D4A09C5102B25310@XMB-BGL-416.cisco.com>
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
X-Mailer: Apple Mail (2.1251.1)
Cc: dc@ietf.org
Subject: Re: [dc] new drafts
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 02 Jan 2012 16:23:22 -0000

--Apple-Mail=_47E8D238-820D-4780-B3AD-D12FCD468F6B
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=windows-1252

Hi Ashish,

Hierarchical MAC should be considered for addition to the Ethernet toolkit
for "legacy" Ethernet.  However, I would want to see it combined with
end-station and uPE UNI that allows for *network-assigned* MAC/MAC-prefix,
and also support for MAC "default route" announcement.  As you advocate, an
ARP registry could further enhance scaling for ARP (if there are no
show-stopping limitations), but hierarchical MAC and ARP registry should
operate independently.  Support for virtual network contexts would be
required for both UNI and registry.  I am a big proponent of VRFs and
virtual LANs, so a solution that takes these off the table would be off my
table too.
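As a sketch of why hierarchical, network-assigned MAC plus a MAC "default
route" makes L2 forwarding resemble longest-prefix routing: the prefix plan,
port names, and table below are invented for illustration and taken from no
standard or product.

```python
# Hypothetical FIB keyed by MAC prefixes, like an L3 routing table.
# A network-assigned MAC of 02:aa:01:xx:xx:xx places the end-station
# under whichever switch owns the 02:aa:01 prefix.
FIB = [
    (24, 0x02AA01, "port-to-leaf-1"),
    (24, 0x02AA02, "port-to-leaf-2"),
    (0,  0x000000, "uplink"),          # the MAC "default route"
]

def lookup(mac: int) -> str:
    """Longest-prefix match over 48-bit MACs instead of exact-match
    learning -- forwarding state no longer grows per end-station."""
    best = max(
        ((plen, port) for plen, prefix, port in FIB
         if plen == 0 or (mac >> (48 - plen)) == prefix),
        key=lambda entry: entry[0],    # prefer the most specific prefix
    )
    return best[1]

print(lookup(0x02AA01000005))  # matches the leaf-1 prefix
print(lookup(0x02BB00000001))  # no prefix match: falls to the default route
```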

Best -- aldrin


On Dec 30, 2011, at 8:54 PM, Ashish Dalela (adalela) wrote:

> Hi Aldrin,
>
> >> How would you address gratuitous ARP when using the hierarchical MAC addressing with a registry to store MAC-IP bindings?
>
> We should ideally have a protocol that maps ARP messages into the registry and vice versa. Why new protocol? Greater security and reliability plus avoiding ARP broadcasts. Gratuitous ARP doesn't have an acknowledge, so if you want to perform some actions based on this, you need to get an "acknowledge" for it. Gratuitous ARP can get dropped.
>
> The other issue is that a user can misuse it to do MAC hijacking. But if you have a hierarchical MAC, you can't hijack because the network knows your location. Things like dot1x, which solved these issues in the campus space, are not used in the datacenter.
>
> >> Also, when using MAC prefixes in complex L2VPN topologies how could we address the risk of a hub forwarding into the wrong spoke context on the leaf switch?  Each egress port on the leaf switch may be in a different context (ex: EVI in EVPN).
>
> We should avoid building any kind of VRF at the control plane, because that's just not scalable. With hierarchical MAC, you have global L2 addressing so from a forwarding perspective it looks like static L3 routing and anyone can send packets anywhere. When a VM moves, this doesn't change the global forwarding table (unless a virtual switch moves - which is also possible). To segment customers we tag packets with some tenant id, and it will be dropped at destination. The tags will be present both at ingress and egress ports.
>
> There are only two possible approaches - drop at source or drop at destination. If we try to drop at source, we need a mapping to which destinations are allowed (too many, and the scale is unpredictable). If we drop at destination, we still need to know which destinations are allowed, but that's just comprised of local host entries (no new entries).
>
> I'm not totally certain if I answered your question, so let me know.
>
> Thanks, Ashish
>
> From: Aldrin Isaac [mailto:aldrin.isaac@gmail.com]
> Sent: Saturday, December 31, 2011 4:07 AM
> To: Ashish Dalela (adalela)
> Cc: dc@ietf.org
> Subject: Re: [dc] new drafts
>
> Hi Ashish,
>
> How would you address gratuitous ARP when using the hierarchical MAC addressing with a registry to store MAC-IP bindings?
>
> Also, when using MAC prefixes in complex L2VPN topologies how could we address the risk of a hub forwarding into the wrong spoke context on the leaf switch?  Each egress port on the leaf switch may be in a different context (ex: EVI in EVPN).
>
> Thanks -- aldrin
>
>
> On Dec 30, 2011, at 6:00 AM, Ashish Dalela (adalela) wrote:
>
> Folks,
>
> I have posted 2 drafts for this group to review.
>
> http://www.ietf.org/id/draft-dalela-dc-requirements-00.txt
> http://www.ietf.org/id/draft-dalela-dc-approaches-00.txt
>
> The first draft captures 10 requirements / problems to be addressed in the datacenter space. These were described over email earlier.
>
> The second draft discusses various approaches to addressing these requirements. It analyzes the scaling properties of the various approaches and makes recommendations at the end for future work that can be taken up by this group.
>
> Request your feedback and discussion.
>
> Thanks, Ashish
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc


sans-serif; color: rgb(31, 73, 125); =
"><o:p>&nbsp;</o:p></span></div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Calibri, sans-serif; color: rgb(31, 73, 125); =
">&gt;&gt;<span class=3D"Apple-converted-space">&nbsp;</span></span>Also, =
when using MAC prefixes in complex L2VPN topologies how could we address =
the risk of a hub forwarding into the wrong spoke context on the leaf =
switch? &nbsp;Each egress port on the leaf switch may be in a different =
context (ex: EVI in EVPN).<o:p></o:p></div><div style=3D"margin-top: =
0in; margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; =
font-size: 12pt; font-family: 'Times New Roman', serif; "><span =
style=3D"font-size: 11pt; font-family: Calibri, sans-serif; color: =
rgb(31, 73, 125); "><o:p>&nbsp;</o:p></span></div><div =
style=3D"margin-top: 0in; margin-right: 0in; margin-left: 0in; =
margin-bottom: 0.0001pt; font-size: 12pt; font-family: 'Times New =
Roman', serif; "><span style=3D"font-size: 11pt; font-family: Calibri, =
sans-serif; color: rgb(31, 73, 125); ">We should avoid building any kind =
of VRF at the control plane, because that=92s just not scalable. With =
hierarchical MAC, you have global L2 addressing so from a forwarding =
perspective it looks like static L3 routing and anyone can send packets =
anywhere. When a VM moves, this doesn=92t change the global forwarding =
table (unless a virtual switch moves =96 which is also possible). To =
segment customers we tag packets with some tenant id, and it will be =
dropped at destination. The tags will be present both at ingress and =
egress ports.<o:p></o:p></span></div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Calibri, sans-serif; color: rgb(31, 73, 125); =
"><o:p>&nbsp;</o:p></span></div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Calibri, sans-serif; color: rgb(31, 73, 125); ">There =
are only two possible approaches =96 drop at source or drop at =
destination. If we try to drop at source, we need a mapping to which =
destinations are allowed (too many, and the scale is unpredictable). If =
we drop at destination, we still need to know which destinations are =
allowed, but that=92s just comprised of local host entries (no new =
entries).<o:p></o:p></span></div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Calibri, sans-serif; color: rgb(31, 73, 125); =
"><o:p>&nbsp;</o:p></span></div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Calibri, sans-serif; color: rgb(31, 73, 125); ">I=92m =
not totally certain if I answered your question, so let me =
know.<o:p></o:p></span></div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Calibri, sans-serif; color: rgb(31, 73, 125); =
"><o:p>&nbsp;</o:p></span></div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Calibri, sans-serif; color: rgb(31, 73, 125); =
">Thanks, Ashish<o:p></o:p></span></div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Calibri, sans-serif; color: rgb(31, 73, 125); =
"><o:p>&nbsp;</o:p></span></div><div><div style=3D"border-right-style: =
none; border-bottom-style: none; border-left-style: none; border-width: =
initial; border-color: initial; border-top-style: solid; =
border-top-color: rgb(181, 196, 223); border-top-width: 1pt; =
padding-top: 3pt; padding-right: 0in; padding-bottom: 0in; padding-left: =
0in; "><div style=3D"margin-top: 0in; margin-right: 0in; margin-left: =
0in; margin-bottom: 0.0001pt; font-size: 12pt; font-family: 'Times New =
Roman', serif; "><b><span style=3D"font-size: 10pt; font-family: Tahoma, =
sans-serif; ">From:</span></b><span style=3D"font-size: 10pt; =
font-family: Tahoma, sans-serif; "><span =
class=3D"Apple-converted-space">&nbsp;</span>Aldrin Isaac =
[mailto:aldrin.isaac@gmail.com]<span =
class=3D"Apple-converted-space">&nbsp;</span><br><b>Sent:</b><span =
class=3D"Apple-converted-space">&nbsp;</span>Saturday, December 31, 2011 =
4:07 AM<br><b>To:</b><span =
class=3D"Apple-converted-space">&nbsp;</span>Ashish Dalela =
(adalela)<br><b>Cc:</b><span =
class=3D"Apple-converted-space">&nbsp;</span><a =
href=3D"mailto:dc@ietf.org">dc@ietf.org</a><br><b>Subject:</b><span =
class=3D"Apple-converted-space">&nbsp;</span>Re: [dc] new =
drafts<o:p></o:p></span></div></div></div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; =
"><o:p>&nbsp;</o:p></div><div style=3D"margin-top: 0in; margin-right: =
0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: 12pt; =
font-family: 'Times New Roman', serif; ">Hi =
Ashish,<o:p></o:p></div><div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; =
"><o:p>&nbsp;</o:p></div></div><div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; ">How would you address =
gratuitous ARP when using the hierarchical MAC addressing with a =
registry to store MAC-IP bindings?<o:p></o:p></div></div><div><div =
style=3D"margin-top: 0in; margin-right: 0in; margin-left: 0in; =
margin-bottom: 0.0001pt; font-size: 12pt; font-family: 'Times New =
Roman', serif; "><o:p>&nbsp;</o:p></div></div><div><div =
style=3D"margin-top: 0in; margin-right: 0in; margin-left: 0in; =
margin-bottom: 0.0001pt; font-size: 12pt; font-family: 'Times New =
Roman', serif; ">Also, when using MAC prefixes in complex L2VPN =
topologies how could we address the risk of a hub forwarding into the =
wrong spoke context on the leaf switch? &nbsp;Each egress port on the =
leaf switch may be in a different context (ex: EVI in =
EVPN).<o:p></o:p></div></div><div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; =
"><o:p>&nbsp;</o:p></div></div><div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; ">Thanks -- =
aldrin<o:p></o:p></div></div><div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; =
"><o:p>&nbsp;</o:p></div></div><div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; =
"><o:p>&nbsp;</o:p></div><div><div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; ">On Dec 30, 2011, at 6:00 =
AM, Ashish Dalela (adalela) wrote:<o:p></o:p></div></div><div =
style=3D"margin-top: 0in; margin-right: 0in; margin-left: 0in; =
margin-bottom: 0.0001pt; font-size: 12pt; font-family: 'Times New =
Roman', serif; "><br><br><o:p></o:p></div><div><div><div =
style=3D"margin-top: 0in; margin-right: 0in; margin-left: 0in; =
margin-bottom: 0.0001pt; font-size: 12pt; font-family: 'Times New =
Roman', serif; "><span style=3D"font-size: 11pt; font-family: Consolas; =
">&nbsp;</span><span style=3D"font-size: 11pt; font-family: Calibri, =
sans-serif; "><o:p></o:p></span></div></div><div><div style=3D"margin-top:=
 0in; margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; =
font-size: 12pt; font-family: 'Times New Roman', serif; "><span =
style=3D"font-size: 11pt; font-family: Consolas; ">Folks,</span><span =
style=3D"font-size: 11pt; font-family: Calibri, sans-serif; =
"><o:p></o:p></span></div></div><div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Consolas; ">&nbsp;</span><span style=3D"font-size: =
11pt; font-family: Calibri, sans-serif; =
"><o:p></o:p></span></div></div><div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Consolas; ">I have posted 2 drafts for this group to =
review.</span><span style=3D"font-size: 11pt; font-family: Calibri, =
sans-serif; "><o:p></o:p></span></div></div><div><div style=3D"margin-top:=
 0in; margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; =
font-size: 12pt; font-family: 'Times New Roman', serif; "><span =
style=3D"font-size: 11pt; font-family: Consolas; ">&nbsp;</span><span =
style=3D"font-size: 11pt; font-family: Calibri, sans-serif; =
"><o:p></o:p></span></div></div><div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Consolas; "><a =
href=3D"http://www.ietf.org/id/draft-dalela-dc-requirements-00.txt" =
style=3D"color: blue; text-decoration: underline; =
">http://www.ietf.org/id/draft-dalela-dc-requirements-00.txt</a></span><sp=
an style=3D"font-size: 11pt; font-family: Calibri, sans-serif; =
"><o:p></o:p></span></div></div><div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Consolas; "><a =
href=3D"http://www.ietf.org/id/draft-dalela-dc-approaches-00.txt" =
style=3D"color: blue; text-decoration: underline; =
">http://www.ietf.org/id/draft-dalela-dc-approaches-00.txt</a></span><span=
 style=3D"font-size: 11pt; font-family: Calibri, sans-serif; =
"><o:p></o:p></span></div></div><div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Consolas; ">&nbsp;</span><span style=3D"font-size: =
11pt; font-family: Calibri, sans-serif; =
"><o:p></o:p></span></div></div><div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Consolas; ">The first draft captures 10 requirements =
/ problems to be addressed in the datacenter space. These were described =
over the email earlier.</span><span style=3D"font-size: 11pt; =
font-family: Calibri, sans-serif; =
"><o:p></o:p></span></div></div><div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Consolas; ">&nbsp;</span><span style=3D"font-size: =
11pt; font-family: Calibri, sans-serif; =
"><o:p></o:p></span></div></div><div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Consolas; ">The second draft discusses various =
approaches to addressing these requirements. This draft analyzes the =
scaling properties of various approaches and makes recommendations at =
the end for future work that can be taken by this group.</span><span =
style=3D"font-size: 11pt; font-family: Calibri, sans-serif; =
"><o:p></o:p></span></div></div><div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Consolas; ">&nbsp;</span><span style=3D"font-size: =
11pt; font-family: Calibri, sans-serif; =
"><o:p></o:p></span></div></div><div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Consolas; ">Request your feedback and =
discussion.</span><span style=3D"font-size: 11pt; font-family: Calibri, =
sans-serif; "><o:p></o:p></span></div></div><div><div style=3D"margin-top:=
 0in; margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; =
font-size: 12pt; font-family: 'Times New Roman', serif; "><span =
style=3D"font-size: 11pt; font-family: Consolas; ">&nbsp;</span><span =
style=3D"font-size: 11pt; font-family: Calibri, sans-serif; =
"><o:p></o:p></span></div></div><div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Consolas; ">Thanks, Ashish</span><span =
style=3D"font-size: 11pt; font-family: Calibri, sans-serif; =
"><o:p></o:p></span></div></div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
13.5pt; font-family: Monaco, serif; =
">_______________________________________________<br>dc mailing =
list<br><a href=3D"mailto:dc@ietf.org" style=3D"color: blue; =
text-decoration: underline; ">dc@ietf.org</a><br><a =
href=3D"https://www.ietf.org/mailman/listinfo/dc" style=3D"color: blue; =
text-decoration: underline; =
">https://www.ietf.org/mailman/listinfo/dc</a><o:p></o:p></span></div></di=
v></div><p class=3D"MsoNormal" style=3D"margin-top: 0in; margin-right: =
0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: 12pt; =
font-family: 'Times New Roman', serif; =
"></p></div></div></div></span></blockquote></div><br></body></html>=

--Apple-Mail=_47E8D238-820D-4780-B3AD-D12FCD468F6B--

From adalela@cisco.com  Mon Jan  2 08:50:02 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id D670211E80A0 for <dc@ietfa.amsl.com>; Mon,  2 Jan 2012 08:50:02 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -1.989
X-Spam-Level: 
X-Spam-Status: No, score=-1.989 tagged_above=-999 required=5 tests=[AWL=0.009,  BAYES_00=-2.599, HTML_MESSAGE=0.001, J_CHICKENPOX_31=0.6]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id YJ3vhTYfShQN for <dc@ietfa.amsl.com>; Mon,  2 Jan 2012 08:49:59 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id 7DF0011E8089 for <dc@ietf.org>; Mon,  2 Jan 2012 08:49:57 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=25338; q=dns/txt; s=iport; t=1325522997; x=1326732597; h=mime-version:subject:date:message-id:in-reply-to: references:from:to:cc; bh=5v54FyKwa3OWWGVJnh7lPVBBmDXTYrwHOAqAj53MInQ=; b=GL2cqYCfKl2OuhcLhI5SG8BCc6eDYU8ng1RpMGvZlB2HFYAJi5I7jljR cqU+5AEDWZGhn1iFPt8sd3rT6pqaq39xiAFWexgWmdw7EouNAsQDUCWyh Lsnbw/JBraekS0Xv9J97xs4bQ3iTw4bSppxJdUxCX2MPynSY4wNnRrYeL I=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AmEGANPfAU9Io8UY/2dsb2JhbABEggVJmU2RQoFyAQEBAwEBAQEPAQkRAz4LEAIBCBEEAQELBhAHAQYBIAYfCQgBAQQLCAgBEgeHWAiWXQGdRIssYwSINZc8h00
X-IronPort-AV: E=Sophos;i="4.71,445,1320624000"; d="scan'208,217";a="2639777"
Received: from vla196-nat.cisco.com (HELO bgl-core-2.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 02 Jan 2012 16:49:55 +0000
Received: from xbh-bgl-412.cisco.com (xbh-bgl-412.cisco.com [72.163.129.202]) by bgl-core-2.cisco.com (8.14.3/8.14.3) with ESMTP id q02Gnt1Q012942; Mon, 2 Jan 2012 16:49:55 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-412.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Mon, 2 Jan 2012 22:19:55 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: multipart/alternative; boundary="----_=_NextPart_001_01CCC96E.8D9F6800"
Date: Mon, 2 Jan 2012 22:19:50 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B254D5@XMB-BGL-416.cisco.com>
In-Reply-To: <1ED01508-C24B-455A-90EA-FC9AE59FB020@gmail.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] new drafts
Thread-Index: AczJatp+V8DFzXaZRIu5/cXMXkR+RwAAUKlw
References: <618BE8B40039924EB9AED233D4A09C5102B2527A@XMB-BGL-416.cisco.com> <D96F76EF-0011-4F33-A1CF-EC9AD12BA411@gmail.com> <618BE8B40039924EB9AED233D4A09C5102B25310@XMB-BGL-416.cisco.com> <1ED01508-C24B-455A-90EA-FC9AE59FB020@gmail.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Aldrin Isaac" <aldrin.isaac@gmail.com>
X-OriginalArrivalTime: 02 Jan 2012 16:49:55.0273 (UTC) FILETIME=[8DFE2390:01CCC96E]
Cc: dc@ietf.org
Subject: Re: [dc] new drafts
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 02 Jan 2012 16:50:03 -0000

This is a multi-part message in MIME format.

------_=_NextPart_001_01CCC96E.8D9F6800
Content-Type: text/plain;
	charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable

Hi Aldrin,

I am 100% in agreement with everything you said below. What I meant by
VRF below was dynamically pushing host routes with every VM
creation/deletion over the control plane. That will cause convergence
issues and lead to instability. With a large number of tenants, and
something or other always changing, we don't want the control plane to
be constantly in flux. We need a VRF-like segment that separates
tenants (without overlapping with VLANs, because users need VLANs too)
without loading the control plane.
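A toy sketch of the control-plane load argument (the switch count and
event list are invented purely for illustration):

```python
# Toy illustration: per-VM host routes versus hierarchical addressing.
# All numbers here are hypothetical.

fabric_switches = 500
vm_events = [("create", "vm1"), ("move", "vm2"), ("delete", "vm3")]

# Host-route model: every VM create/delete/move is pushed to every
# switch in the fabric, keeping the control plane in constant flux.
host_route_updates = len(vm_events) * fabric_switches

# Hierarchical-MAC model: a VM's address encodes its location, so VM
# events touch only the local edge; the global forwarding table is
# unchanged unless a virtual switch itself moves.
global_table_updates = 0
```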

I was trying to nail down this "approach" through the draft, by showing
that it addresses all the problems in the most scalable way. We can do
L2 (Ethernet) or L3 (TCP/IP) using a common forwarding plane, and we
remain compatible with legacy L2/L3 networks - they will always exist.

Support for this approach probably needs to be discussed and agreed
upon. Solutions based on it are not very hard to achieve (I can bring
the proposals).

Thanks, Ashish

From: Aldrin Isaac [mailto:aldrin.isaac@gmail.com]
Sent: Monday, January 02, 2012 9:53 PM
To: Ashish Dalela (adalela)
Cc: dc@ietf.org
Subject: Re: [dc] new drafts

Hi Ashish,

Hierarchical MAC should be considered for addition to the Ethernet
toolkit for "legacy" Ethernet.  However, I would want to see it combined
with an end-station and uPE UNI that allows for *network-assigned*
MACs/MAC prefixes, and also support for MAC "default route" announcement.
As you advocate, an ARP registry could further enhance scaling for ARP
(if there are no show-stopping limitations), but hierarchical MAC and
the ARP registry should operate independently.  Support for virtual
network contexts would be required for both the UNI and the registry.
I am a big proponent of VRFs and virtual LANs, so a solution that takes
these off the table would be off my table too.

Best -- aldrin

On Dec 30, 2011, at 8:54 PM, Ashish Dalela (adalela) wrote:

Hi Aldrin,

>> How would you address gratuitous ARP when using hierarchical MAC
addressing with a registry to store MAC-IP bindings?

We should ideally have a protocol that maps ARP messages into the
registry and vice versa. Why a new protocol? Greater security and
reliability, plus avoiding ARP broadcasts. Gratuitous ARP doesn't have
an acknowledgement, so if you want to perform actions based on it, you
need an "acknowledgement" for it. Gratuitous ARP can get dropped.
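A minimal sketch of the acknowledged-update idea (the class, function,
and message names here are all invented for illustration):

```python
# Sketch of replacing gratuitous ARP with an acknowledged registry
# update. All names and message formats are hypothetical.

class MacIpRegistry:
    """Central registry of MAC-IP bindings."""
    def __init__(self):
        self.bindings = {}  # ip -> mac

    def update(self, ip, mac):
        # Record the binding and return an explicit acknowledgement --
        # gratuitous ARP, by contrast, is fire-and-forget.
        self.bindings[ip] = mac
        return {"status": "ack", "ip": ip, "mac": mac}

def announce_binding(registry, ip, mac, retries=3):
    # A host (or its hypervisor) announces a binding and retries until
    # acknowledged; a gratuitous ARP broadcast can simply be lost.
    for _ in range(retries):
        reply = registry.update(ip, mac)
        if reply["status"] == "ack":
            return reply
    raise RuntimeError("registry unreachable")

registry = MacIpRegistry()
ack = announce_binding(registry, "10.0.0.5", "02:aa:00:00:00:07")
```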

The other issue is that a user can misuse it to do MAC hijacking. But
if you have a hierarchical MAC, you can't hijack, because the network
knows your location. Things like dot1x, which solved these issues in
the campus space, are not used in the datacenter.

>> Also, when using MAC prefixes in complex L2VPN topologies how could
we address the risk of a hub forwarding into the wrong spoke context on
the leaf switch?  Each egress port on the leaf switch may be in a
different context (ex: EVI in EVPN).


We should avoid building any kind of VRF at the control plane, because
that's just not scalable. With hierarchical MAC, you have global L2
addressing, so from a forwarding perspective it looks like static L3
routing and anyone can send packets anywhere. When a VM moves, this
doesn't change the global forwarding table (unless a virtual switch
moves - which is also possible). To segment customers, we tag packets
with a tenant ID, and cross-tenant packets will be dropped at the
destination. The tags will be present at both ingress and egress ports.
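A sketch of that tag-at-ingress, drop-at-destination behavior (port
names and tenant IDs are hypothetical):

```python
# Sketch of tenant separation by tagging at ingress and checking at
# egress, with no per-tenant state in the global forwarding table.
# Port names and tenant IDs are invented for illustration.

INGRESS_TENANT = {"port1": 100, "port2": 200}  # tag stamped at ingress
EGRESS_TENANT = {"port8": 200, "port9": 100}   # tag expected at egress

def deliver(packet, in_port, out_port):
    # Forwarding itself would use only the global hierarchical-MAC
    # table; the tenant tag is checked only at the destination port.
    packet["tenant"] = INGRESS_TENANT[in_port]
    if packet["tenant"] != EGRESS_TENANT[out_port]:
        return None  # cross-tenant packet dropped at destination
    return packet

same_tenant = deliver({"dst": "02:aa:00:00:00:07"}, "port1", "port9")
cross_tenant = deliver({"dst": "02:bb:00:00:00:01"}, "port1", "port8")
```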

There are only two possible approaches - drop at source or drop at
destination. If we try to drop at source, we need a mapping of which
destinations are allowed (too many, and the scale is unpredictable). If
we drop at destination, we still need to know which destinations are
allowed, but that consists of just the local host entries (no new
entries).
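A toy count of the filtering state each approach needs per switch (all
numbers are invented purely to illustrate the scaling argument):

```python
# Toy state comparison for drop-at-source versus drop-at-destination.
# All numbers here are hypothetical.

hosts_per_tenant = 1000      # tenant members spread across the fabric
tenants_on_switch = 10       # tenants with at least one local host
local_hosts = 40             # host entries the switch already holds

# Drop at source: the switch must know every allowed remote
# destination for each tenant present on it.
drop_at_source_entries = tenants_on_switch * hosts_per_tenant

# Drop at destination: only a tenant tag per existing local host
# entry is needed -- no new entries.
drop_at_destination_entries = local_hosts
```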

I'm not totally certain I answered your question, so let me know.

Thanks, Ashish

From: Aldrin Isaac [mailto:aldrin.isaac@gmail.com]
Sent: Saturday, December 31, 2011 4:07 AM
To: Ashish Dalela (adalela)
Cc: dc@ietf.org
Subject: Re: [dc] new drafts

Hi Ashish,

How would you address gratuitous ARP when using hierarchical MAC
addressing with a registry to store MAC-IP bindings?

Also, when using MAC prefixes in complex L2VPN topologies how could we
address the risk of a hub forwarding into the wrong spoke context on the
leaf switch?  Each egress port on the leaf switch may be in a different
context (ex: EVI in EVPN).


Thanks -- aldrin


On Dec 30, 2011, at 6:00 AM, Ashish Dalela (adalela) wrote:

Folks,

I have posted two drafts for this group to review:

http://www.ietf.org/id/draft-dalela-dc-requirements-00.txt
http://www.ietf.org/id/draft-dalela-dc-approaches-00.txt

The first draft captures 10 requirements / problems to be addressed in
the datacenter space. These were described over email earlier.

The second draft discusses various approaches to addressing these
requirements. It analyzes the scaling properties of each approach and
ends with recommendations for future work that this group can take up.

Request your feedback and discussion.


Thanks, Ashish

_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc



the Ethernet toolkit for &quot;legacy&quot; Ethernet. &nbsp;However, I =
would want to see it combined with end-station and uPE UNI that allows =
for *network-assigned* MAC/MAC-prefix, and also support for MAC =
&quot;default route&quot; announcement. &nbsp;As you advocate, an ARP =
registry could further enhance scaling for ARP (if there are no show =
stopping limitations), but hierarchical MAC and ARP registry should =
operate independently. &nbsp;Support for virtual network contexts would =
be required for both UNI and registry. &nbsp;I am a big proponent of =
VRFs and virtual LANs so a solution that takes these off the table would =
be off my table too.<o:p></o:p></p></div><div><p =
class=3DMsoNormal><o:p>&nbsp;</o:p></p></div><div><p =
class=3DMsoNormal>Best -- aldrin<o:p></o:p></p></div><div><p =
class=3DMsoNormal><o:p>&nbsp;</o:p></p></div><p =
class=3DMsoNormal><o:p>&nbsp;</o:p></p><div><div><p class=3DMsoNormal>On =
Dec 30, 2011, at 8:54 PM, Ashish Dalela (adalela) =
wrote:<o:p></o:p></p></div><p =
class=3DMsoNormal><br><br><o:p></o:p></p><div><div><p =
class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497=
D'>Hi Aldrin,</span><o:p></o:p></p></div><div><p class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497=
D'>&nbsp;</span><o:p></o:p></p></div><div><p class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497=
D'>&gt;&gt;<span class=3Dapple-converted-space>&nbsp;</span></span>How =
would you address gratuitous ARP when using the hierarchical MAC =
addressing with a registry to store MAC-IP =
bindings?<o:p></o:p></p></div><div><p class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497=
D'>&nbsp;</span><o:p></o:p></p></div><div><p class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497=
D'>We should ideally have a protocol that maps ARP messages into the =
registry and vice versa. Why new protocol? Greater security and =
reliability plus avoiding ARP broadcasts. Gratuitous ARP doesn&#8217;t =
have an acknowledge, so if you want to perform some actions based on =
this, you need to get an &#8220;acknowledge&#8221; for it. Gratuitous =
ARP can get dropped.</span><o:p></o:p></p></div><div><p =
class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497=
D'>&nbsp;</span><o:p></o:p></p></div><div><p class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497=
D'>The other issue is that a user can misuse it do MAC hijacking. But if =
you have a hierarchical MAC you can&#8217;t hijack because network knows =
your location. Things like dot1x which solved these issues in the campus =
space are not used in datacenter.</span><o:p></o:p></p></div><div><p =
class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497=
D'>&nbsp;</span><o:p></o:p></p></div><div><p class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497=
D'>&gt;&gt;<span class=3Dapple-converted-space>&nbsp;</span></span>Also, =
when using MAC prefixes in complex L2VPN topologies how could we address =
the risk of a hub forwarding into the wrong spoke context on the leaf =
switch? &nbsp;Each egress port on the leaf switch may be in a different =
context (ex: EVI in EVPN).<o:p></o:p></p></div><div><p =
class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497=
D'>&nbsp;</span><o:p></o:p></p></div><div><p class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497=
D'>We should avoid building any kind of VRF at the control plane, =
because that&#8217;s just not scalable. With hierarchical MAC, you have =
global L2 addressing so from a forwarding perspective it looks like =
static L3 routing and anyone can send packets anywhere. When a VM moves, =
this doesn&#8217;t change the global forwarding table (unless a virtual =
switch moves &#8211; which is also possible). To segment customers we =
tag packets with some tenant id, and it will be dropped at destination. =
The tags will be present both at ingress and egress =
ports.</span><o:p></o:p></p></div><div><p class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497=
D'>&nbsp;</span><o:p></o:p></p></div><div><p class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497=
D'>There are only two possible approaches &#8211; drop at source or drop =
at destination. If we try to drop at source, we need a mapping to which =
destinations are allowed (too many, and the scale is unpredictable). If =
we drop at destination, we still need to know which destinations are =
allowed, but that&#8217;s just comprised of local host entries (no new =
entries).</span><o:p></o:p></p></div><div><p class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497=
D'>&nbsp;</span><o:p></o:p></p></div><div><p class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497=
D'>I&#8217;m not totally certain if I answered your question, so let me =
know.</span><o:p></o:p></p></div><div><p class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497=
D'>&nbsp;</span><o:p></o:p></p></div><div><p class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497=
D'>Thanks, Ashish</span><o:p></o:p></p></div><div><p =
class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497=
D'>&nbsp;</span><o:p></o:p></p></div><div><div =
style=3D'border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in =
0in 0in;border-width:initial;border-color:initial'><div><p =
class=3DMsoNormal><b><span =
style=3D'font-size:10.0pt;font-family:"Tahoma","sans-serif"'>From:</span>=
</b><span class=3Dapple-converted-space><span =
style=3D'font-size:10.0pt;font-family:"Tahoma","sans-serif"'>&nbsp;</span=
></span><span =
style=3D'font-size:10.0pt;font-family:"Tahoma","sans-serif"'>Aldrin =
Isaac [mailto:aldrin.isaac@gmail.com]<span =
class=3Dapple-converted-space>&nbsp;</span><br><b>Sent:</b><span =
class=3Dapple-converted-space>&nbsp;</span>Saturday, December 31, 2011 =
4:07 AM<br><b>To:</b><span =
class=3Dapple-converted-space>&nbsp;</span>Ashish Dalela =
(adalela)<br><b>Cc:</b><span =
class=3Dapple-converted-space>&nbsp;</span><a =
href=3D"mailto:dc@ietf.org">dc@ietf.org</a><br><b>Subject:</b><span =
class=3Dapple-converted-space>&nbsp;</span>Re: [dc] new =
drafts</span><o:p></o:p></p></div></div></div><div><p =
class=3DMsoNormal>&nbsp;<o:p></o:p></p></div><div><p =
class=3DMsoNormal>Hi Ashish,<o:p></o:p></p></div><div><div><p =
class=3DMsoNormal>&nbsp;<o:p></o:p></p></div></div><div><div><p =
class=3DMsoNormal>How would you address gratuitous ARP when using the =
hierarchical MAC addressing with a registry to store MAC-IP =
bindings?<o:p></o:p></p></div></div><div><div><p =
class=3DMsoNormal>&nbsp;<o:p></o:p></p></div></div><div><div><p =
class=3DMsoNormal>Also, when using MAC prefixes in complex L2VPN =
topologies how could we address the risk of a hub forwarding into the =
wrong spoke context on the leaf switch? &nbsp;Each egress port on the =
leaf switch may be in a different context (ex: EVI in =
EVPN).<o:p></o:p></p></div></div><div><div><p =
class=3DMsoNormal>&nbsp;<o:p></o:p></p></div></div><div><div><p =
class=3DMsoNormal>Thanks -- =
aldrin<o:p></o:p></p></div></div><div><div><p =
class=3DMsoNormal>&nbsp;<o:p></o:p></p></div></div><div><div><p =
class=3DMsoNormal>&nbsp;<o:p></o:p></p></div><div><div><div><p =
class=3DMsoNormal>On Dec 30, 2011, at 6:00 AM, Ashish Dalela (adalela) =
wrote:<o:p></o:p></p></div></div><div><p =
class=3DMsoNormal><br><br><br><o:p></o:p></p></div><div><div><div><p =
class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:Consolas'>&nbsp;</span><o:p></o:p><=
/p></div></div><div><div><p class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:Consolas'>Folks,</span><o:p></o:p><=
/p></div></div><div><div><p class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:Consolas'>&nbsp;</span><o:p></o:p><=
/p></div></div><div><div><p class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:Consolas'>I have posted 2 drafts =
for this group to review.</span><o:p></o:p></p></div></div><div><div><p =
class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:Consolas'>&nbsp;</span><o:p></o:p><=
/p></div></div><div><div><p class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:Consolas'><a =
href=3D"http://www.ietf.org/id/draft-dalela-dc-requirements-00.txt">http:=
//www.ietf.org/id/draft-dalela-dc-requirements-00.txt</a></span><o:p></o:=
p></p></div></div><div><div><p class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:Consolas'><a =
href=3D"http://www.ietf.org/id/draft-dalela-dc-approaches-00.txt">http://=
www.ietf.org/id/draft-dalela-dc-approaches-00.txt</a></span><o:p></o:p></=
p></div></div><div><div><p class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:Consolas'>&nbsp;</span><o:p></o:p><=
/p></div></div><div><div><p class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:Consolas'>The first draft captures =
10 requirements / problems to be addressed in the datacenter space. =
These were described over the email =
earlier.</span><o:p></o:p></p></div></div><div><div><p =
class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:Consolas'>&nbsp;</span><o:p></o:p><=
/p></div></div><div><div><p class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:Consolas'>The second draft =
discusses various approaches to addressing these requirements. This =
draft analyzes the scaling properties of various approaches and makes =
recommendations at the end for future work that can be taken by this =
group.</span><o:p></o:p></p></div></div><div><div><p =
class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:Consolas'>&nbsp;</span><o:p></o:p><=
/p></div></div><div><div><p class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:Consolas'>Request your feedback =
and discussion.</span><o:p></o:p></p></div></div><div><div><p =
class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:Consolas'>&nbsp;</span><o:p></o:p><=
/p></div></div><div><div><p class=3DMsoNormal><span =
style=3D'font-size:11.0pt;font-family:Consolas'>Thanks, =
Ashish</span><o:p></o:p></p></div></div><div><p class=3DMsoNormal><span =
style=3D'font-size:13.5pt;font-family:"Monaco","serif"'>_________________=
______________________________<br>dc mailing list<br><a =
href=3D"mailto:dc@ietf.org">dc@ietf.org</a><br><a =
href=3D"https://www.ietf.org/mailman/listinfo/dc">https://www.ietf.org/ma=
ilman/listinfo/dc</a></span><o:p></o:p></p></div></div></div></div></div>=
</div><p class=3DMsoNormal><o:p>&nbsp;</o:p></p></div></body></html>
------_=_NextPart_001_01CCC96E.8D9F6800--

From pedro.r.marques@gmail.com  Mon Jan  2 09:40:53 2012
Return-Path: <pedro.r.marques@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 9EE5C11E80A5 for <dc@ietfa.amsl.com>; Mon,  2 Jan 2012 09:40:53 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.902
X-Spam-Level: 
X-Spam-Status: No, score=-2.902 tagged_above=-999 required=5 tests=[AWL=0.697,  BAYES_00=-2.599, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id uJjB1GeQuHk9 for <dc@ietfa.amsl.com>; Mon,  2 Jan 2012 09:40:53 -0800 (PST)
Received: from mail-iy0-f172.google.com (mail-iy0-f172.google.com [209.85.210.172]) by ietfa.amsl.com (Postfix) with ESMTP id DB9CD11E8089 for <dc@ietf.org>; Mon,  2 Jan 2012 09:40:52 -0800 (PST)
Received: by iabz21 with SMTP id z21so8218586iab.31 for <dc@ietf.org>; Mon, 02 Jan 2012 09:40:52 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; bh=/t2+bSaZJT3JHR2aZCdJGg/nWhtqKN8dYRkU9UYjq3c=; b=YspyiNnX6aw3rxFff6Vn6SB7Uz0kyV+n8WOcfvL8/47oaFs3WMRQsy2cEO9KRN+srj LhlL+7KEL3Vh9lXWOWKfRLUmeCYAYJn3/Yy52oUmiRQaa45DOfumSJPC+EHyvnuPICgV xo6VyLW6A5gQ7VMgAwcf/B/8sle8mmzgCMPJE=
MIME-Version: 1.0
Received: by 10.42.131.136 with SMTP id z8mr33181545ics.5.1325526052382; Mon, 02 Jan 2012 09:40:52 -0800 (PST)
Received: by 10.231.201.82 with HTTP; Mon, 2 Jan 2012 09:40:52 -0800 (PST)
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com> <618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com> <4EFC947A.4020007@riw.us> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com> <CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com>
Date: Mon, 2 Jan 2012 09:40:52 -0800
Message-ID: <CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com>
From: Pedro Marques <pedro.r.marques@gmail.com>
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 02 Jan 2012 17:40:53 -0000

Ashish,
The characteristics that historically made L2 the lowest-cost solution
for data-center access and aggregation become strong drawbacks once you
assume that a member of the 'segment' can be anywhere in the
data-center.

One important requirement to note is that tenants need multiple network
'segments' (i.e. logical VLANs), and that these segments must be able
to exchange traffic with controls similar to today's vlan-to-vlan
routed ACLs.

In traditional L2/L3 designs the VLAN interfaces sit at the aggregation
layer, separating downstream L2 VLANs. Once the members of the "VLAN"
can be anywhere in the DC, the inter-VLAN traffic exchange capability
must also be supported with that in mind.

Mobile IP-style home agent solutions are not acceptable given that
they imply traffic crossing the DC network twice.

If you want to follow a traditional 3-tier design, you end up with all
the aggregation switches being members of all the VLANs, creating very
large broadcast domains that each typically contain a small number of
end-systems.

VMs are also not managed the way that traditional servers are. The
traditional advantages of a mixed L2/L3 design no longer apply here.
When a VM is instantiated, the management system that instantiates it
knows its network address(es). The traditional L2 "plug-and-play"
simplicity is now irrelevant because at any reasonable scale VMs are
not "plug-and-play"... they are explicitly managed.
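
Pedro's point above - that the management system already knows a VM's
addresses at instantiation, so nothing needs to be "learned" on the
wire - can be illustrated with a small sketch. This is a hypothetical
Python illustration, not any real orchestration system's API; all class
and parameter names here are invented for the example.

```python
# Hypothetical sketch: an orchestration system records each VM's
# addresses at instantiation time, so the network can resolve them from
# a registry instead of relying on L2 "plug-and-play" learning.
# All names here are illustrative, not from any real system.

class AddressRegistry:
    """Maps (tenant_id, ip) -> (mac, hypervisor) bindings."""

    def __init__(self):
        self._bindings = {}

    def register(self, tenant_id, ip, mac, hypervisor):
        # Called by the management system when it instantiates a VM;
        # it already knows the VM's addresses, so nothing is learned
        # from traffic.
        self._bindings[(tenant_id, ip)] = (mac, hypervisor)

    def unregister(self, tenant_id, ip):
        # Called on VM deletion; the binding disappears deterministically.
        self._bindings.pop((tenant_id, ip), None)

    def resolve(self, tenant_id, ip):
        # What an ARP-registry lookup would return instead of a broadcast.
        return self._bindings.get((tenant_id, ip))


registry = AddressRegistry()
registry.register("tenant-a", "10.0.0.5", "02:00:00:00:00:05", "hv-17")
print(registry.resolve("tenant-a", "10.0.0.5"))  # binding known up front
print(registry.resolve("tenant-b", "10.0.0.5"))  # same IP, other tenant: None
```

Note how overlapping IP addresses across tenants are harmless here
because every lookup is keyed by tenant, which also ties into the ARP
registry discussed earlier in the thread.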

There is a need in the IETF to standardize solutions for "larger
broadcast domains". These are applicable to small data-centers, and
there are a large number of people who just want to extend the
traditional model, since that is what they are familiar with.

But there is also interest in standardizing solutions that do not use
broadcast domains. There are enough people who believe the latter are
cheaper to operate at large scale.

  Pedro.

On Sun, Jan 1, 2012 at 9:25 PM, Ashish Dalela (adalela)
<adalela@cisco.com> wrote:
> Pedro,
>
>>> It would be preferable to assume in the conversation basic security
> features such as the verification of the inner and outer addresses,
> whether this is IP in MAC or IP in IP.
>
> That's another very important issue: all network services (firewalls,
> load-balancers, WAN optimizers, ...) need to take into account the
> change in the header structure (mentioned in the "approaches" draft).
> Changes to these structures break network services. So, if there is no
> generally agreed-upon way of doing this, then there are risks. Also,
> how does a service know how many headers to look at?
>
>>> Why do you need VLAN segmentation? And exactly what is the scope of
> the VLANs that you need for segmentation? Interface-scope or
> data-center scope?
>
> I believe this is datacenter scope. Most of the reasons are
> historical: broadcast discovery (including ARP resolution) and simpler
> configuration.
>
> Thanks, Ashish
>
>
> -----Original Message-----
> From: Pedro Marques [mailto:pedro.r.marques@gmail.com]
> Sent: Monday, January 02, 2012 10:26 AM
> To: Ashish Dalela (adalela)
> Cc: dc@ietf.org
> Subject: Re: [dc] [armd] IP over IP solution for data center
> interconnect
>
> Ashish,
> Please see inline.
>
> On Sun, Jan 1, 2012 at 7:58 PM, Ashish Dalela (adalela)
> <adalela@cisco.com> wrote:
>>
>> David,
>>
>> On a L3 network, VLAN will help you contain broadcasts but not enough
> security. E.g. If a tenant has more than one VLAN then you can route
> across the VLANs. Now, you need IP to identify. This now opens up to
> issues like IP duplication.
>>
>> VLANs will work if one customer equals one VLAN.
>
> Clearly not a useful assumption. A customer must be able to use
> multiple 'segments'/'security groups' and have traffic exchanged
> among them.
>
> As for the 'issues like IP duplication': it would be preferable to
> assume in the conversation basic security features such as the
> verification of the inner and outer addresses, whether this is IP in
> MAC or IP in IP.
>
>> The other thing I find interesting is that this conversation happened
> in the context of L3 vs. L2 and you mentioned that TCP/IP can be used
> for storage as a way to avoid L2. But, now, we need VLAN for
> segmentation.
>
> Why do you need VLAN segmentation? And exactly what is the scope of
> the VLANs that you need for segmentation? Interface-scope or
> data-center scope?
>
>  Pedro.
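
The inner/outer address verification Pedro raises above - checking on
decapsulation that the outer (tunnel) source really is the endpoint
authorized to originate the inner source address - can be sketched as
follows. This is a hypothetical Python illustration of the idea only;
the mapping table, function names, and example addresses are invented,
and real implementations would do this in the forwarding path.

```python
# Hypothetical sketch of inner/outer address verification for IP-in-IP
# (or IP-in-MAC) decapsulation. The authoritative mapping of inner
# prefix -> allowed outer tunnel endpoint would be pushed by the VM
# management system. Addresses and names are illustrative only.
import ipaddress

inner_to_outer = {
    ipaddress.ip_network("10.1.0.0/24"): "192.0.2.10",
    ipaddress.ip_network("10.1.1.0/24"): "192.0.2.11",
}

def verify(outer_src, inner_src):
    """Accept the packet only if outer_src is authorized for inner_src."""
    inner = ipaddress.ip_address(inner_src)
    for prefix, endpoint in inner_to_outer.items():
        if inner in prefix:
            return endpoint == outer_src
    return False  # unknown inner source: drop

print(verify("192.0.2.10", "10.1.0.7"))   # True: correct endpoint
print(verify("192.0.2.99", "10.1.0.7"))   # False: spoofed outer source
```

With such a check, a tenant cannot inject traffic claiming another
tenant's inner addresses, which is the "basic security feature" the
conversation assumes.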

From david.black@emc.com  Mon Jan  2 14:34:00 2012
Return-Path: <david.black@emc.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 2B2AC11E80C1 for <dc@ietfa.amsl.com>; Mon,  2 Jan 2012 14:34:00 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.586
X-Spam-Level: 
X-Spam-Status: No, score=-106.586 tagged_above=-999 required=5 tests=[AWL=0.013, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id XYYUYdke1T9k for <dc@ietfa.amsl.com>; Mon,  2 Jan 2012 14:33:59 -0800 (PST)
Received: from mexforward.lss.emc.com (mexforward.lss.emc.com [128.222.32.20]) by ietfa.amsl.com (Postfix) with ESMTP id 46FAE11E8073 for <dc@ietf.org>; Mon,  2 Jan 2012 14:33:58 -0800 (PST)
Received: from hop04-l1d11-si02.isus.emc.com (HOP04-L1D11-SI02.isus.emc.com [10.254.111.55]) by mexforward.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q02MXvPN024300 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Mon, 2 Jan 2012 17:33:57 -0500
Received: from mailhub.lss.emc.com (mailhub.lss.emc.com [10.254.221.145]) by hop04-l1d11-si02.isus.emc.com (RSA Interceptor); Mon, 2 Jan 2012 17:33:46 -0500
Received: from mxhub20.corp.emc.com (mxhub20.corp.emc.com [10.254.93.49]) by mailhub.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q02MXjTE009785; Mon, 2 Jan 2012 17:33:45 -0500
Received: from mx14a.corp.emc.com ([169.254.1.216]) by mxhub20.corp.emc.com ([10.254.93.49]) with mapi; Mon, 2 Jan 2012 17:33:45 -0500
From: <david.black@emc.com>
To: <adalela@cisco.com>, <dc@ietf.org>
Date: Mon, 2 Jan 2012 17:33:43 -0500
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczGRpJvomHShY+MR56d9MdLrJSTVACcc6/AAAOlMaAANRCooA==
Message-ID: <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9BB8@MX14A.corp.emc.com>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-EMM-MHVC: 1
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 02 Jan 2012 22:34:00 -0000

Ashish,

> On a L3 network, VLAN will help you contain broadcasts but not enough
> security. E.g. If a tenant has more than one VLAN then you can route
> across the VLANs. Now, you need IP to identify. This now opens up to
> issues like IP duplication.

That's not correct - IP duplication across tenants is already an issue
with one VLAN per tenant (e.g., multiple tenants use 10.0.0.0/8
addresses), and allowing multiple VLANs per tenant doesn't change that
situation significantly *as long as* the ability to "route across the
VLANs" is scoped by tenant - that routing is simply not set up to allow
forwarding across VLANs for different tenants, ditto for the per-tenant
VPN interfaces (e.g., to connect to tenant data centers for hybrid
cloud).  This approach extends to the virtual networks for NVGRE and
VxLAN.
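
David's point - that overlapping addresses are harmless as long as
route lookups are scoped by tenant - can be made concrete with a small
sketch. This is a hypothetical Python illustration of per-tenant route
tables; the tenant names, prefixes, and VLAN labels are invented for
the example.

```python
# Hypothetical sketch of tenant-scoped routing: each tenant has its own
# route table, so multiple tenants can reuse 10.0.0.0/8 without
# colliding, and inter-VLAN forwarding only ever consults the tables of
# the tenant the packet belongs to. Illustrative only.
import ipaddress

# tenant_id -> list of (prefix, next_hop); lookups never cross tenants.
tenant_routes = {
    "tenant-a": [(ipaddress.ip_network("10.1.0.0/16"), "vlan-101"),
                 (ipaddress.ip_network("10.2.0.0/16"), "vlan-102")],
    "tenant-b": [(ipaddress.ip_network("10.1.0.0/16"), "vlan-201")],
}

def route(tenant_id, dst_ip):
    """Longest-prefix match restricted to one tenant's tables."""
    dst = ipaddress.ip_address(dst_ip)
    candidates = [(net, hop) for net, hop in tenant_routes.get(tenant_id, [])
                  if dst in net]
    if not candidates:
        return None  # no route within this tenant's scope: drop
    return max(candidates, key=lambda c: c[0].prefixlen)[1]

# The same destination address routes differently per tenant:
print(route("tenant-a", "10.1.2.3"))  # vlan-101
print(route("tenant-b", "10.1.2.3"))  # vlan-201
print(route("tenant-b", "10.2.2.3"))  # None: not in tenant-b's scope
```

Because forwarding never crosses from one tenant's tables into
another's, "route across the VLANs" stays scoped by tenant exactly as
described above.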

> The other thing I find interesting is that this conversation happened
> in the context of L3 vs. L2 and you mentioned that TCP/IP can be used
> for storage as a way to avoid L2.

Use of TCP/IP in storage protocols allows L3 routing, but does not
require use of L3 routing.  VLANs for segmentation are also not
incompatible with L3 routing; see above.

> But, now, we need VLAN for segmentation. Now when you do DCI, then
> you need MAC over IP (or EoMPLS or something else), but not IP over
> IP (the entire context of this discussion).

In the context of storage provided by the cloud provider as a service
to tenants, I strongly disagree; neither iSCSI nor NFS requires L2
connectivity between the storage systems and servers (physical or
virtual [e.g., VMs]) that use the storage.

Beyond that, I observe that in current data centers, physical and
virtual servers/machines most commonly consume an L2 networking
service, and hence I'm skeptical of approaches that start by declaring
that only an L3 service will be offered, on the grounds that such
approaches may be disruptive.  That causes me to prefer MAC-over-IP
data-center-interconnect approaches to IP-over-IP - I don't object to
IP-over-IP in principle, rather I'm skeptical that it effectively
solves the entire problem for virtual machine mobility across data
centers.

Thanks,
--David

> -----Original Message-----
> From: Ashish Dalela (adalela) [mailto:adalela@cisco.com]
> Sent: Sunday, January 01, 2012 10:59 PM
> To: Black, David; russw@riw.us; dc@ietf.org
> Subject: RE: [dc] [armd] IP over IP solution for data center interconnect
>
>
> David,
>
> On a L3 network, VLAN will help you contain broadcasts but not enough
> security. E.g. If a tenant has more than one VLAN then you can route
> across the VLANs. Now, you need IP to identify. This now opens up to
> issues like IP duplication.
>
> VLANs will work if one customer equals one VLAN. The other thing I
> find interesting is that this conversation happened in the context of
> L3 vs. L2 and you mentioned that TCP/IP can be used for storage as a
> way to avoid L2. But, now, we need VLAN for segmentation. Now when
> you do DCI, then you need MAC over IP (or EoMPLS or something else),
> but not IP over IP (the entire context of this discussion).
>
> http://tools.ietf.org/html/draft-dalela-dc-approaches-00 analyzes
> many of these issues collectively. I believe the goal of this group
> should be to take the problem set as a whole - and we know there are
> many possible approaches that already exist.
>
> Thanks, Ashish
>
> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> david.black@emc.com
> Sent: Monday, January 02, 2012 12:40 AM
> To: russw@riw.us; dc@ietf.org
> Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
>
> > That means that if you have to protect data, you have to know
> > somehow which host is allowed to see which disks. Doing this
> > end-to-end using TCP/IP is problematic because the NAS controller
> > doesn't know about tenant id, and whether the MAC and IP are being
> > duplicated across customers. To make the storage aware of
> > segmentation, you have to carry some segment (GRE, MPLS, VLAN,
> > etc.) into the storage box. That's not true today.
> >
> > Maybe I'm dense, but wouldn't it be just as easy to add the
> > capability to differentiate based on IP address as it would be to
> > design the entire network and all protocols around what the current
> > software capabilities on the hypervisor are?
>
> As previously noted, Ashish's original statement about what storage
> boxes cannot do was just plain wrong, as there's plenty of VLAN
> support on current storage boxes.  FWIW, differentiation based on IP
> address is not difficult, although one has to think carefully about
> scope of IP address as differentiating on source IP address assumes
> that the source IP address isn't forged (VLANs are easier to work
> with than IP blocks for this concern).
>
> Thanks,
> --David
> ----------------------------------------------------
> David L. Black, Distinguished Engineer
> EMC Corporation, 176 South St., Hopkinton, MA  01748
> +1 (508) 293-7953              FAX: +1 (508) 293-7786
> david.black@emc.com            Mobile: +1 (978) 394-7754
> ----------------------------------------------------
>
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc


From robert@raszuk.net  Mon Jan  2 14:39:48 2012
Return-Path: <robert@raszuk.net>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 47A5C1F0C4D for <dc@ietfa.amsl.com>; Mon,  2 Jan 2012 14:39:48 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.524
X-Spam-Level: 
X-Spam-Status: No, score=-2.524 tagged_above=-999 required=5 tests=[AWL=0.075,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id JOZbAqjft6zO for <dc@ietfa.amsl.com>; Mon,  2 Jan 2012 14:39:47 -0800 (PST)
Received: from mail1310.opentransfer.com (mail1310.opentransfer.com [76.162.254.103]) by ietfa.amsl.com (Postfix) with ESMTP id 885ED1F0C35 for <dc@ietf.org>; Mon,  2 Jan 2012 14:39:47 -0800 (PST)
Received: (qmail 20325 invoked by uid 399); 2 Jan 2012 22:39:46 -0000
Received: from unknown (HELO ?192.168.1.57?) (83.9.70.134) by mail1310.opentransfer.com with ESMTP; 2 Jan 2012 22:39:46 -0000
X-Originating-IP: 83.9.70.134
Message-ID: <4F023231.7010904@raszuk.net>
Date: Mon, 02 Jan 2012 23:39:45 +0100
From: Robert Raszuk <robert@raszuk.net>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:9.0) Gecko/20111222 Thunderbird/9.0.1
MIME-Version: 1.0
To: david.black@emc.com
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9BB8@MX14A.corp.emc.com>
In-Reply-To: <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9BB8@MX14A.corp.emc.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: adalela@cisco.com, dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
Reply-To: robert@raszuk.net
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 02 Jan 2012 22:39:48 -0000

Hi David,

 > I don't object to IP-over-IP in principle, rather I'm skeptical that
 > it effectively solves the entire problem for virtual machine mobility
 > across data centers.

Would you care to elaborate which part of the VM mobility could not be 
solved in your opinion by IP in IP ?

Many thx,
R.


From david.black@emc.com  Mon Jan  2 18:55:22 2012
Return-Path: <david.black@emc.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 05A7321F8587 for <dc@ietfa.amsl.com>; Mon,  2 Jan 2012 18:55:22 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.587
X-Spam-Level: 
X-Spam-Status: No, score=-106.587 tagged_above=-999 required=5 tests=[AWL=0.012, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id XzKHEaRZthLb for <dc@ietfa.amsl.com>; Mon,  2 Jan 2012 18:55:21 -0800 (PST)
Received: from mexforward.lss.emc.com (mexforward.lss.emc.com [128.222.32.20]) by ietfa.amsl.com (Postfix) with ESMTP id 4F9B421F8503 for <dc@ietf.org>; Mon,  2 Jan 2012 18:55:21 -0800 (PST)
Received: from hop04-l1d11-si01.isus.emc.com (HOP04-L1D11-SI01.isus.emc.com [10.254.111.54]) by mexforward.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q032tE5H023016 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Mon, 2 Jan 2012 21:55:19 -0500
Received: from mailhub.lss.emc.com (mailhubhoprd03.lss.emc.com [10.254.221.145]) by hop04-l1d11-si01.isus.emc.com (RSA Interceptor); Mon, 2 Jan 2012 21:55:02 -0500
Received: from mxhub19.corp.emc.com (mxhub19.corp.emc.com [10.254.93.48]) by mailhub.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q032t1Kv009396; Mon, 2 Jan 2012 21:55:01 -0500
Received: from mx14a.corp.emc.com ([169.254.1.216]) by mxhub19.corp.emc.com ([10.254.93.48]) with mapi; Mon, 2 Jan 2012 21:55:01 -0500
From: <david.black@emc.com>
To: <robert@raszuk.net>
Date: Mon, 2 Jan 2012 21:54:59 -0500
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczJn3xEgULkSl+6RHGi7LeX04Mv7AAHcLgg
Message-ID: <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BA14@MX14A.corp.emc.com>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9BB8@MX14A.corp.emc.com> <4F023231.7010904@raszuk.net>
In-Reply-To: <4F023231.7010904@raszuk.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-EMM-MHVC: 1
Cc: david.black@emc.com, dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 02:55:22 -0000

Hi Robert,

> Would you care to elaborate which part of the VM mobility could not be
> solved in your opinion by IP in IP ?

Here's a start:

I'm interested in an IaaS public or hybrid cloud where the tenants bring
their own VMs.

For VMs that are using L2 networking service, changing a VM MAC address as
part of live migration of the VM is problematic, because traffic to the VM
is likely to be dropped until the new MAC propagates (e.g., have to wait
for ARP caches to update).  As part of this discussion, I'm assuming
no ability to modify the IP stacks in the VMs (e.g., the VMs may be based
on standard OS images).  In essence, it takes too long to rebind an IP to
a new MAC, and the result can be disruptive to a live VM migration.  There
are probably examples of VMs that don't care about this sort of disruption
(e.g., where it's ok to expect a user to occasionally hit Reload in the
browser), but there are plenty of examples where that disruption is not
acceptable.
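To make the failure mode concrete, here is a minimal Python sketch of why traffic to a migrated VM is dropped until ARP caches rebind. All names, MACs, and addresses are invented for illustration; this is only a toy model of the behavior described above, not anyone's implementation.

```python
# Toy model: an ARP cache maps IP -> MAC; frames addressed to a MAC that
# no longer exists on the segment are silently dropped until a
# gratuitous ARP updates the sender's cache.
class Segment:
    def __init__(self):
        self.stations = {}          # MAC -> receive handler

    def attach(self, mac, handler):
        self.stations[mac] = handler

    def detach(self, mac):
        self.stations.pop(mac, None)

    def send(self, dst_mac, payload):
        h = self.stations.get(dst_mac)
        if h is None:
            return False            # dropped: nobody owns this MAC anymore
        h(payload)
        return True

class Sender:
    def __init__(self, segment):
        self.segment = segment
        self.arp_cache = {}         # IP -> MAC

    def send_ip(self, ip, payload):
        mac = self.arp_cache.get(ip)
        return mac is not None and self.segment.send(mac, payload)

    def on_gratuitous_arp(self, ip, mac):
        self.arp_cache[ip] = mac    # rebinding happens only now

seg = Segment()
rx = []
seg.attach("mac-old", rx.append)
s = Sender(seg)
s.arp_cache["10.0.0.5"] = "mac-old"
assert s.send_ip("10.0.0.5", "pkt1")          # delivered before migration

# VM migrates and comes up with a new MAC; the old one disappears.
seg.detach("mac-old")
seg.attach("mac-new", rx.append)
assert not s.send_ip("10.0.0.5", "pkt2")      # dropped: stale ARP entry

s.on_gratuitous_arp("10.0.0.5", "mac-new")    # GARP finally propagates
assert s.send_ip("10.0.0.5", "pkt3")          # delivered again
```

The window between the migration and the `on_gratuitous_arp` call is exactly the disruption being discussed: how long it lasts depends on how fast the gratuitous ARP reaches every cache.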

If one is prepared to use only L3 networking service with VMs, I believe
IP in IP is workable. An example of what I mean by an L3 service is that the
subnet consists of one link (i.e., no L2 bridging), which allows reuse of L2
addresses on the link (e.g., the VM's MAC doesn't change even though it's on
a new link, and its ARP cache is still valid).  As already noted on this list,
a significant concern with this sort of approach is the number of host routes
needed at scale.

The idea of larger L2 subnets connected by IP-in-IP across data centers
seems interesting, but one has to somehow propagate the gratuitous ARP that
occurs after a live VM migration.  I haven't thought much about this -
it may be feasible to somehow preserve the MAC across the VM migration,
keep ARP working across the IP-in-IP connection, etc. without reconstructing
a full MAC-in-IP connection.  If anyone has details on something useful
that's been done here, please share.

I don't want to say that people can't or shouldn't use L3 IP-in-IP across
data centers, but I do want to ensure that we can develop solutions that
support live VM migration without breaking (or requiring massive changes to)
the L2 networking service that is commonly used by VMs.

If one is prepared to modify the IP stack, e.g., by inserting some sort of
locator/identifier separation protocol, then the rebinding problem changes
from IP-to-new-MAC to ID-to-new-LOC.  That's plausible if one has complete
control of that level of infrastructure, but I don't think it's a good
starting assumption for the IaaS scenarios that I'm interested in.

Does that help?

Thanks,
--David

> -----Original Message-----
> From: Robert Raszuk [mailto:robert@raszuk.net]
> Sent: Monday, January 02, 2012 5:40 PM
> To: Black, David
> Cc: adalela@cisco.com; dc@ietf.org
> Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
>
> Hi David,
>
>  > I don't object to IP-over-IP in principle, rather I'm skeptical that
>  > it effectively solves the entire problem for virtual machine mobility
>  > across data centers.
>
> Would you care to elaborate which part of the VM mobility could not be
> solved in your opinion by IP in IP ?
>
> Many thx,
> R.


From adalela@cisco.com  Tue Jan  3 03:39:02 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 2FD1321F8566 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 03:39:02 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.29
X-Spam-Level: 
X-Spam-Status: No, score=-2.29 tagged_above=-999 required=5 tests=[AWL=0.309,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id ciWyyDYbA8W0 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 03:39:01 -0800 (PST)
Received: from bgl-iport-2.cisco.com (bgl-iport-2.cisco.com [72.163.197.26]) by ietfa.amsl.com (Postfix) with ESMTP id 2C67221F856B for <dc@ietf.org>; Tue,  3 Jan 2012 03:38:59 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=6830; q=dns/txt; s=iport; t=1325590740; x=1326800340; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=6OkhP1VMfTfMIqBgIPuQFjzDHEOC+E/z31BTP0yvAns=; b=HA36uExihjwLxtvFzl/JulqUfQwEO3NeHN9i2+9DFJmsDgH9gvYjFRqQ ESbTadJS391MV0Fx+jYzxrTSM/QnDAs34AgbSl0atWUetR9c7V3hegOrW lpXwIvzoTnWGkxnmEMBDWBwj5Q9qHpvvPjRs1TtSU6yxsvclLGfnIynkw w=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AjIPAOXnAk9Io8UY/2dsb2JhbAA6CoIFq2CBcgEBAQQBAQEPAR01CQsMBAIBCBEEAQEBCgYFEgEGASAGHwkIAQEECwgIEweHYJcBAZ1gBIhWE4JDYwSIBDGXPIdN
X-IronPort-AV: E=Sophos;i="4.71,449,1320624000";  d="scan'208";a="2678483"
Received: from vla196-nat.cisco.com (HELO bgl-core-1.cisco.com) ([72.163.197.24]) by bgl-iport-2.cisco.com with ESMTP; 03 Jan 2012 11:38:55 +0000
Received: from xbh-bgl-412.cisco.com (xbh-bgl-412.cisco.com [72.163.129.202]) by bgl-core-1.cisco.com (8.14.3/8.14.3) with ESMTP id q03Bct8I028152; Tue, 3 Jan 2012 11:38:55 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-412.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Tue, 3 Jan 2012 17:08:56 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Date: Tue, 3 Jan 2012 17:08:49 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com>
In-Reply-To: <CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczJdbS885F04DPGSDisCnI2uI60dwAk6KRg
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us><7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com><618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com><CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com> <CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Pedro Marques" <pedro.r.marques@gmail.com>
X-OriginalArrivalTime: 03 Jan 2012 11:38:56.0143 (UTC) FILETIME=[46B299F0:01CCCA0C]
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 11:39:02 -0000

Pedro,

I'm not advocating classic L2. If you have seen the "approaches" draft I
forwarded to this alias, I have listed multiple deficiencies of L2,
including map-encap L2. Also discussed is the hierarchical MAC approach,
which scales better than all approaches known so far.

L2 forwarding does not necessarily mean a broadcast domain. E.g. you can
forward packets based on hierarchical MAC without a VLAN and hence never
broadcast. The packets will be forwarded just like static IP routing but
based on a MAC prefix.
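The forwarding model being described - routing on a MAC prefix exactly as an IP FIB routes on an IP prefix - can be sketched as a longest-prefix match over 48-bit addresses. The prefix lengths, MAC layout, and next-hop names below are illustrative assumptions for this sketch, not taken from the draft.

```python
# Sketch: forwarding on a 48-bit hierarchical MAC by longest-prefix
# match, with no broadcast or flooding involved.
def mac_to_int(mac):
    return int(mac.replace(":", ""), 16)

class MacFib:
    def __init__(self):
        self.routes = []            # (prefix_int, prefix_len, next_hop)

    def add(self, mac_prefix, length, next_hop):
        self.routes.append((mac_to_int(mac_prefix) >> (48 - length),
                            length, next_hop))

    def lookup(self, dst_mac):
        dst = mac_to_int(dst_mac)
        best = None
        for prefix, length, nh in self.routes:
            if dst >> (48 - length) == prefix and \
               (best is None or length > best[0]):
                best = (length, nh)   # keep the longest matching prefix
        return best[1] if best else None

fib = MacFib()
# One aggregate route per pod/rack instead of one entry per host:
fib.add("02:aa:01:00:00:00", 24, "pod-1-uplink")
fib.add("02:aa:01:07:00:00", 32, "tor-7")       # more specific wins

print(fib.lookup("02:aa:01:07:00:42"))   # -> tor-7
print(fib.lookup("02:aa:01:03:00:01"))   # -> pod-1-uplink
```

The point of the sketch is that once the MAC carries a topological prefix, the FIB can summarize many hosts under one entry, just like static IP routing.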

As you have correctly mentioned, some people need broadcasts and some
don't. So far L2 = broadcast and L3 = non-broadcast. Hierarchical
MAC doesn't have that assumption. I would like to converge TCP/IP and
non-TCP/IP, broadcast and non-broadcast, traffic into a single
forwarding approach that will scale. If you have a VLAN you get broadcast,
multicast and unicast. If you don't have a VLAN, you just get unicast and
multicast. A shift from one to the other should require only configuration
of a VLAN tag on the port, not a change of the entire infrastructure.
Agree?

The thing to bear in mind is that there will be customers who want
either, and a provider has to use the same infra to support both. If you
lock a provider into exclusively L2 or L3, then they can't use the
same infrastructure for supporting either.

I don't want to split the requirements into multiple use-cases because
then this DC group will be many groups - one doing L2 and another doing
L3. That I think you will agree is not optimal for anyone. So, I'm
looking for an approach that can converge all requirements into a single
forwarding plane, on the same infrastructure. The issues of scale you
mentioned don't exist in Hierarchical MACs, which scale better than
anything we know of.

Thanks, Ashish


-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Pedro Marques
Sent: Monday, January 02, 2012 11:11 PM
To: Ashish Dalela (adalela)
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect

Ashish,
The characteristics that historically made L2 the lowest cost solution
for data-center access and aggregation become strong drawbacks once you
assume that a member of the 'segment' can be anywhere in the
data-center.

One of the requirements that is important to note is that tenants need
multiple network 'segments' (i.e. a logical VLAN) and that these
segments must be able to exchange traffic with a control similar to
today's vlan-to-vlan routed ACLs.

In traditional L2/L3 designs the VLAN interfaces sit at the
aggregation layer, separating downstream L2 vlans. Once the members of
the "VLAN" can be anywhere in the DC, the inter-vlan traffic exchange
capability must now also be supported taking that into account.

Mobile IP-style home agent solutions are not acceptable given that
they imply traffic crossing the DC network twice.

If you want to follow a traditional 3-tier design, you end up with
all the aggregation switches being members of all the VLANs,
creating very large broadcast domains with typically a small
number of end-systems.

VMs are also not managed the way that traditional servers are. The
traditional advantages of a mixed L2/L3 design no longer apply here.
When a VM is instantiated the management system that does instantiate
it knows its network address(es). The traditional L2 "plug-and-play"
simplicity is now irrelevant because at any reasonable scale VMs are
not "plug-and-play"... they are explicitly managed.

There is a need in the IETF to standardize solutions for "larger
broadcast domains". They are applicable to small data-centers and
there is a large number of people that just want to extend the
traditional model since that is what they are familiar with.

But there is also interest in standardizing solutions that do not use
broadcast domains. There are enough people that believe the latter are
cheaper to operate at large scale.

  Pedro.

On Sun, Jan 1, 2012 at 9:25 PM, Ashish Dalela (adalela)
<adalela@cisco.com> wrote:
> Pedro,
>
>>> It would be preferable to assume in the conversation basic security
> features such as the verification of the inner and outer addresses,
> whether this is IP in mac or IP in IP.
>
> That's another very important issue that all network services
> (firewalls, load-balancers, wan optimizer, ..) need to take into account
> the change in the header structure (mentioned in the "approaches"
> draft). Changes to these structures break network services. So, if there
> is no generally agreed upon way of doing this, then there are risks.
> Also, how does a service know how many headers to look at?
>
>>> Why do you need VLAN segmentation ? And exactly what is the scope of
> the VLANs that you need for segmentation ? interface-scope or
> data-center scope ?
>
> I believe this is datacenter scope. Most of the reasons are historical:
> broadcast discovery (including ARP resolution), simpler configuration.
>
> Thanks, Ashish
>
>
> -----Original Message-----
> From: Pedro Marques [mailto:pedro.r.marques@gmail.com]
> Sent: Monday, January 02, 2012 10:26 AM
> To: Ashish Dalela (adalela)
> Cc: dc@ietf.org
> Subject: Re: [dc] [armd] IP over IP solution for data center
> interconnect
>
> Ashish,
> Please see inline.
>
> On Sun, Jan 1, 2012 at 7:58 PM, Ashish Dalela (adalela)
> <adalela@cisco.com> wrote:
>>
>> David,
>>
>> On an L3 network, VLAN will help you contain broadcasts but not enough
> security. E.g. if a tenant has more than one VLAN then you can route
> across the VLANs. Now, you need IP to identify. This now opens up
> issues like IP duplication.
>>
>> VLANs will work if one customer equals one VLAN.
>
> Clearly not a useful assumption. A customer must be able to use
> multiple 'segments'/'security groups' and have traffic be exchanged
> among them.
>
> As for the 'issues like IP duplication'. It would be preferable to
> assume in the conversation basic security features such as the
> verification of the inner and outer addresses, whether this is IP in
> mac or IP in IP.
>
>> The other thing I find interesting is that this conversation happened
> in the context of L3 vs. L2 and you mentioned that TCP/IP can be used
> for storage as a way to avoid L2. But, now, we need VLAN for
> segmentation.
>
> Why do you need VLAN segmentation ? And exactly what is the scope of
> the VLANs that you need for segmentation ? interface-scope or
> data-center scope ?
>
>  Pedro.
_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From robert@raszuk.net  Tue Jan  3 05:35:22 2012
Return-Path: <robert@raszuk.net>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id DB52921F852B for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 05:35:22 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.599
X-Spam-Level: 
X-Spam-Status: No, score=-2.599 tagged_above=-999 required=5 tests=[BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id vBA1afFV2saD for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 05:35:22 -0800 (PST)
Received: from mail1310.opentransfer.com (mail1310.opentransfer.com [76.162.254.103]) by ietfa.amsl.com (Postfix) with ESMTP id 307CA21F850A for <dc@ietf.org>; Tue,  3 Jan 2012 05:35:22 -0800 (PST)
Received: (qmail 11301 invoked by uid 399); 3 Jan 2012 13:35:21 -0000
Received: from unknown (HELO ?192.168.1.91?) (83.31.238.24) by mail1310.opentransfer.com with ESMTP; 3 Jan 2012 13:35:21 -0000
X-Originating-IP: 83.31.238.24
Message-ID: <4F030418.1070202@raszuk.net>
Date: Tue, 03 Jan 2012 14:35:20 +0100
From: Robert Raszuk <robert@raszuk.net>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:9.0) Gecko/20111222 Thunderbird/9.0.1
MIME-Version: 1.0
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us><7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com><618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com><CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com> <CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: Pedro Marques <pedro.r.marques@gmail.com>, dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
Reply-To: robert@raszuk.net
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 13:35:23 -0000

Ashish,

> The issues of scale you mentioned don't exist in Hierarchical MACs,
> which scale better than anything we know of.

So you are advocating a solution which is based on encapsulation - that is
fine.

However, how could you ever arrive at the conclusion that HMACs would
scale better than "anything we know"? Well, I don't know about you, but I
know that the key to scaling is the ability to aggregate. And it is no
huge mystery that MACs aggregate rather poorly while there are quite
well deployed protocols (be it IPv4 or IPv6) which aggregate natively.

For inter-dc this is IMHO a must. A must even if you build it using
traditional routers or OF-enabled switches - it does not matter.

> I don't want to split the requirements into multiple use-cases
> because then this DC group will be many groups - one doing L2 and
> another doing L3. That I think you will agree is not optimal for
> anyone

Why does MAC-in-IP not solve it for everyone ? And there are deployed
solutions already ..

IMHO what this group should accomplish is not to try to reinvent the
world, but perhaps, as an example, to discuss where the right boundary of
encapsulation is, how we should communicate between network and hosts,
and what kind of DC instrumentation should be IETF blessed for easy
integration (i.e. the minimum subset of functionality it should possess, etc.)

R.


From robert@raszuk.net  Tue Jan  3 05:51:52 2012
Return-Path: <robert@raszuk.net>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 7601C21F8552 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 05:51:52 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.599
X-Spam-Level: 
X-Spam-Status: No, score=-2.599 tagged_above=-999 required=5 tests=[AWL=0.000,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id eWeSNCd6-B-o for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 05:51:51 -0800 (PST)
Received: from mail1310.opentransfer.com (mail1310.opentransfer.com [76.162.254.103]) by ietfa.amsl.com (Postfix) with ESMTP id 84DF721F8550 for <dc@ietf.org>; Tue,  3 Jan 2012 05:51:51 -0800 (PST)
Received: (qmail 25356 invoked by uid 399); 3 Jan 2012 13:51:50 -0000
Received: from unknown (HELO ?192.168.1.91?) (83.31.238.24) by mail1310.opentransfer.com with ESMTP; 3 Jan 2012 13:51:50 -0000
X-Originating-IP: 83.31.238.24
Message-ID: <4F0307F6.8020504@raszuk.net>
Date: Tue, 03 Jan 2012 14:51:50 +0100
From: Robert Raszuk <robert@raszuk.net>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:9.0) Gecko/20111222 Thunderbird/9.0.1
MIME-Version: 1.0
To: david.black@emc.com
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9BB8@MX14A.corp.emc.com> <4F023231.7010904@raszuk.net> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BA14@MX14A.corp.emc.com>
In-Reply-To: <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BA14@MX14A.corp.emc.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
Reply-To: robert@raszuk.net
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 13:51:52 -0000

Hello David,

Much thx for the below comments.

IaaS is indeed a very good example where you don't have control over
customer VMs .. you are essentially not able to provision such VMs.
However, I am assuming that VM mobility stays within your network
boundaries, or the boundary interfaces such VMs may connect to.

 > As part of this discussion, I'm assuming no ability to modify the IP
 > stacks in the VMs (e.g., the VMs may be based on standard OS images).

You know that for locator/identifier separation there are both
host-based (ILNP) and network-based (LISP) proposals. Only ILNP requires
changes to the VM OS (which IMHO is not that big of a deal - one could
provide OS images your customers could use in their VMs which support
ILNP). But let's not go too far down that path.

--

So for scalable IaaS at the L2 level, what seems to be missing is
effectively an ARP server which would provide a resolution plane from a
VM MAC to the IP address used for its encapsulation.

Just like DNS resolves names to IP addresses, or LISP ALT resolves IDs
to locators, we now need a way - within your domain (IaaS boundary), or
maybe even one day beyond it - to resolve a MAC address to an IP address.

Do we have such proposal already on the table ?

Disclaimer: This is only for those cases where native L3 or ID/Loc 
schemes could not be used VM to VM.

Thx,
R.



> Hi Robert,
>
>> Would you care to elaborate which part of the VM mobility could not be
>> solved in your opinion by IP in IP ?
>
> Here's a start:
>
> I'm interested in an IaaS public or hybrid cloud where the tenants bring
> their own VMs.
>
> For VMs that are using L2 networking service, changing a VM MAC address as
> part of live migration of the VM is problematic, because traffic to the VM
> is likely to be dropped until the new MAC propagates (e.g., have to wait
> for ARP caches to update).  As part of this discussion, I'm assuming
> no ability to modify the IP stacks in the VMs (e.g., the VMs may be based
> on standard OS images).  In essence, it takes too long to rebind an IP to
> a new MAC, and the result can be disruptive to a live VM migration.  There
> are probably examples of VMs that don't care about this sort of disruption
> (e.g., where it's ok to expect a user to occasionally hit Reload in the
> browser), but there are plenty of examples where that disruption is not
> acceptable.
>
> If one is prepared to use only L3 networking service with VMs, I believe
> IP in IP is workable. An example of what I mean by an L3 service is that the
> subnet consists of one link (i.e., no L2 bridging), which allows reuse of L2
> addresses on the link (e.g., the VM's MAC doesn't change even though it's on
> a new link, and its ARP cache is still valid).  As already noted on this list,
> a significant concern with this sort of approach is the number of host routes
> needed at scale.
>
> The idea of larger L2 subnets connected by IP-in-IP across data centers
> seems interesting, but one has to somehow propagate the gratuitous ARP that
> occurs after a live VM migration.  I haven't thought much about this -
> it may be feasible to somehow preserve the MAC across the VM migration,
> keep ARP working across the IP-in-IP connection, etc. without reconstructing
> a full MAC-in-IP connection.  If anyone has details on something useful
> that's been done here, please share.
>
> I don't want to say that people can't or shouldn't use L3 IP-in-IP across
> data centers, but I do want to ensure that we can develop solutions that
> support live VM migration without breaking (or requiring massive changes to)
> the L2 networking service that is commonly used by VMs.
>
> If one is prepared to modify the IP stack, e.g., by inserting some sort of
> locator/identifier separation protocol, then the rebinding problem changes
> from IP-to-new-MAC to ID-to-new-LOC.  That's plausible if one has complete
> control of that level of infrastructure, but I don't think it's a good
> starting assumption for the IaaS scenarios that I'm interested in.
>
> Does that help?
>
> Thanks,
> --David
>
>> -----Original Message-----
>> From: Robert Raszuk [mailto:robert@raszuk.net]
>> Sent: Monday, January 02, 2012 5:40 PM
>> To: Black, David
>> Cc: adalela@cisco.com; dc@ietf.org
>> Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
>>
>> Hi David,
>>
>>   >  I don't object to IP-over-IP in principle, rather I'm skeptical that
>>   >  it effectively solves the entire problem for virtual machine mobility
>>   >  across data centers.
>>
>> Would you care to elaborate which part of the VM mobility could not be
>> solved in your opinion by IP in IP ?
>>
>> Many thx,
>> R.
>
>
>


From adalela@cisco.com  Tue Jan  3 06:00:15 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id E9C7721F8536 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 06:00:15 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.309
X-Spam-Level: 
X-Spam-Status: No, score=-2.309 tagged_above=-999 required=5 tests=[AWL=0.290,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id LYtEd1E8rKHM for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 06:00:15 -0800 (PST)
Received: from bgl-iport-2.cisco.com (bgl-iport-2.cisco.com [72.163.197.26]) by ietfa.amsl.com (Postfix) with ESMTP id 99F2721F8535 for <dc@ietf.org>; Tue,  3 Jan 2012 06:00:13 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=2721; q=dns/txt; s=iport; t=1325599213; x=1326808813; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=ceHIPdnn4I6/hODYnwlZeSmIxZ9d2WvV6+dd2KRN2pA=; b=jmAihLDIGp2rPYnFEfiXp9L6e7Gjln7+YZN8mYEyelWegoDeyu89s9nM h2pbbeq+NdZ3vNFDRde0KHzAe6MFIJJ2DDc6zmh9v7pS1z3+FQXk7o9Xy R0JTsZcgpvXCuN8XOmPazqarO8hk4DBrDua8r/AfhmhJqNkTbErpGbZa5 I=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AjAPAIkIA09Io8UY/2dsb2JhbABDggWrYIFyAQEBAwESAR0KPwUHBAIBCBEEAQELBhcBBgFFCQgBAQQLCAgTB4dYCJcZAZ1xiHWCN2MEiDWfCQ
X-IronPort-AV: E=Sophos;i="4.71,450,1320624000";  d="scan'208";a="2683496"
Received: from vla196-nat.cisco.com (HELO bgl-core-4.cisco.com) ([72.163.197.24]) by bgl-iport-2.cisco.com with ESMTP; 03 Jan 2012 14:00:12 +0000
Received: from xbh-bgl-412.cisco.com (xbh-bgl-412.cisco.com [72.163.129.202]) by bgl-core-4.cisco.com (8.14.3/8.14.3) with ESMTP id q03E0C3p003196; Tue, 3 Jan 2012 14:00:12 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-412.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Tue, 3 Jan 2012 19:30:12 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Tue, 3 Jan 2012 19:30:10 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com>
In-Reply-To: <4F030418.1070202@raszuk.net>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczKHJbV6OFWAmihRQ6Av61rN7hZmQAAqFNw
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us><7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com><618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com><CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com> <CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com> <4F030418.1070202@raszuk.net>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: <robert@raszuk.net>
X-OriginalArrivalTime: 03 Jan 2012 14:00:12.0461 (UTC) FILETIME=[02FA15D0:01CCCA20]
Cc: Pedro Marques <pedro.r.marques@gmail.com>, dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 14:00:16 -0000

Robert,

>> So you are advocating a solution which is based on encapsulation - that
>> is fine.

No, I'm not. Did you read the draft I mentioned?
Hierarchical MAC is not encapsulation. It is a single 48-bit address.

>> However how could you ever arrive at the conclusion that HMACs would
>> scale better than "anything we know"? Well, I don't know about you, but I
>> know that the key to scaling is the ability to aggregate. And it is not that
>> huge a mystery that MACs aggregate rather poorly, while there are quite
>> well deployed protocols (be it IPv4 or IPv6) which aggregate natively

You've hit the nail on the head. So, read the draft I mentioned.
A Hierarchical MAC uses the higher bits as a "network prefix" and the
lower bits as a "host id".
That makes it summarizable and aggregatable, just like an IP address.
It has 46 usable bits, so the space is larger than the IPv4 Internet.
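The aggregation property being claimed can be sketched in a few lines of Python; the 24/24 prefix/host split and the addresses below are illustrative assumptions for the example, not values taken from the draft:

```python
# Sketch (assumed 24/24 split, not from the draft): treat a 48-bit MAC
# as <network-prefix><host-id>, so MACs summarize like IP prefixes.
PREFIX_BITS = 24
HOST_BITS = 48 - PREFIX_BITS

def parse_mac(mac: str) -> int:
    """Convert aa:bb:cc:dd:ee:ff notation to a 48-bit integer."""
    return int(mac.replace(":", ""), 16)

def prefix_of(mac: str) -> int:
    """Upper PREFIX_BITS bits: the 'network prefix' (switch) part."""
    return parse_mac(mac) >> HOST_BITS

# Three VMs behind the same switch share one prefix, so a remote switch
# needs a single forwarding entry for all of them.
vms = ["02:00:0a:00:00:01", "02:00:0a:00:00:02", "02:00:0a:00:9f:10"]
prefixes = {prefix_of(m) for m in vms}
print(len(prefixes))  # 1 forwarding entry instead of 3 flat MACs
```

In other words, the forwarding table grows with the number of prefixes rather than the number of hosts - the same longest-match behavior IP routing already relies on.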

I won't comment on the rest, because you have made an assumption about
encapsulation.

I refer to this -
http://tools.ietf.org/html/draft-dalela-dc-approaches-00.

Thanks, Ashish


-----Original Message-----
From: Robert Raszuk [mailto:robert@raszuk.net]
Sent: Tuesday, January 03, 2012 7:05 PM
To: Ashish Dalela (adalela)
Cc: Pedro Marques; dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center
interconnect

Ashish,

> The issues of scale you mentioned don't exist in Hierarchical MACs,
> which scale better than anything we know of.

So you are advocating a solution which is based on encapsulation - that is
fine.

However, how could you ever arrive at the conclusion that HMACs would
scale better than "anything we know"? Well, I don't know about you, but I
know that the key to scaling is the ability to aggregate. And it is not that
huge a mystery that MACs aggregate rather poorly, while there are quite
well deployed protocols (be it IPv4 or IPv6) which aggregate natively.

For inter-dc this is IMHO a must. A must even if you build it using
traditional routers or OF-enabled switches - it does not matter.

> I don't want to split the requirements into multiple use-cases
> because then this DC group will be many groups - one doing L2 and
> another doing L3. That I think you will agree is not optimal for
> anyone

Why does MAC-in-IP not solve it for everyone? And there are deployed
solutions already.

IMHO what this group should accomplish is not to try to reinvent the
world, but perhaps, as an example, discuss where the right boundary of
encapsulation lies, how we should communicate between network and hosts,
and what kind of DC instrumentation should be IETF-blessed for easy
integration (i.e., the minimum subset of functionality it should possess,
etc.)

R.


From dave.mcdysan@verizon.com  Tue Jan  3 06:06:17 2012
Return-Path: <dave.mcdysan@verizon.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 616CB21F8539 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 06:06:17 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.284
X-Spam-Level: 
X-Spam-Status: No, score=-3.284 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, RCVD_IN_DNSWL_LOW=-1, SARE_MILLIONSOF=0.315]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id TeZc8lREDqgt for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 06:06:16 -0800 (PST)
Received: from fldsmtpe02.verizon.com (fldsmtpe02.verizon.com [140.108.26.141]) by ietfa.amsl.com (Postfix) with ESMTP id 3AB0321F8536 for <dc@ietf.org>; Tue,  3 Jan 2012 06:06:15 -0800 (PST)
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145]) by fldsmtpe02.verizon.com with ESMTP; 03 Jan 2012 14:03:55 +0000
From: "Mcdysan, David E" <dave.mcdysan@verizon.com>
X-IronPort-AV: E=Sophos;i="4.71,450,1320624000"; d="scan'208";a="200647663"
Received: from fhdp1lumxc7hb01.verizon.com (HELO FHDP1LUMXC7HB01.us.one.verizon.com) ([166.68.59.188]) by fldsmtpi03.verizon.com with ESMTP; 03 Jan 2012 14:03:55 +0000
Received: from fhdp1lumxc7v11.us.one.verizon.com ([169.254.1.117]) by FHDP1LUMXC7HB01.us.one.verizon.com ([166.68.59.188]) with mapi; Tue, 3 Jan 2012 09:03:55 -0500
To: "dc@ietf.org" <dc@ietf.org>, Thomas Narten <narten@us.ibm.com>
Date: Tue, 3 Jan 2012 09:03:53 -0500
Thread-Topic: [dc] Elevator Pitch (was: Scoping the Interim meeting)
Thread-Index: AczKIIeztRkowjg6R3m6o2y3mkNG+w==
Message-ID: <CB287091.304A0%dave.mcdysan@one.verizon.com>
In-Reply-To: <201112240023.pBO0NaP2008085@cichlid.raleigh.ibm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: Microsoft-MacOutlook/14.1.0.101012
acceptlanguage: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Subject: Re: [dc] Elevator Pitch (was: Scoping the Interim meeting)
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 14:06:17 -0000

Hi Tom,

Catching up on Email after vacation. Some (hopefully clarifying) responses
in line below.

Dave

On Friday 12/23/11 7:23 PM, "Thomas Narten" <narten@us.ibm.com> wrote:

>Hi Dave.
>
>Thanks for this. Let me see if I can summarize and further tease this
>out.
>
>"Mcdysan, David E" <dave.mcdysan@verizon.com> writes:
>
>> Pitch on an Elevator in a shorter building. ;)
>
>> Support an infrastructure for multiple tenants, each with computing
>> applications and associated resources ranging from a single VM to a very
>> large tenant with a complex, multi-site computing application. An
>> important case is supporting a very large single tenant computing
>> application.
>
>I think you are stating that a key requirement is support for
>multi-tenancy.  (And I would assume pretty much everyone agrees this
>is a requirement...)
>
>> Ensure that each tenant is securely separated from others and that only
>> tenants who mutually agree to communicate and/or share resources can
>> do so.
>
>This sounds like sub-bullets under the definition of multi-tenancy.

Agreed, but the key word is "security," which may merit a separate
category with sub-bullets for privacy, authentication, access attempts.

>
>> Orchestrate all of the virtual and real appliances (e.g., firewalls,
>> load balancers, security, etc.) needed for a complex computing application
>> distributed across multiple sites.
>
>This sounds like something very different and potentially very big. By
>"orchestration" do you mean the system by which you can place or move VMs
>and all their associated dependent devices (FWs, LBs, etc.) around? How
>you place them, how you move components around?


Yes, and not only placement but reachability and forwarding rules. For
example, all packets to/from a set of machines for a particular matching
tuple must go through a specific function (e.g., a FW, or an LB).
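One way to picture the kind of schema/meta-data model this implies is a rule binding a matching tuple to an ordered chain of functions; the field names below are hypothetical illustrations, not taken from any existing standard:

```python
# Hypothetical sketch of a traffic-steering rule: all packets matching
# the tuple must traverse the listed functions, in order.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MatchTuple:
    src_prefix: str                 # e.g. "10.1.0.0/16"
    dst_prefix: str
    proto: str = "tcp"
    dst_port: Optional[int] = None  # None matches any port

@dataclass
class ServiceChain:
    match: MatchTuple
    functions: List[str] = field(default_factory=list)  # e.g. ["FW", "LB"]

rule = ServiceChain(
    match=MatchTuple("10.1.0.0/16", "10.2.0.0/16", dst_port=443),
    functions=["FW", "LB"],  # firewall first, then load balancer
)
print(rule.functions)  # ['FW', 'LB']
```

The point of standardizing such a structure would be that an orchestrator could hand the same rule to any vendor's enforcement point.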

>
>If so, this seems to me to be a combination of a (possibly)
>centralized controller (of sorts), with a lot of individual components
>distributed around the DC that provide necessary hooks/mechanisms for
>the orchestrator to do its job. I.e., moving a VM (and a FW) requires
>a number of steps. The orchestrator (at a high level) initiates those
>steps, but there are other network components that also play a role
>(the hypervisor, switches, etc.)

This is one form of a solution to the above requirement. Others may be
possible.
>
>Where do you see the standardization gaps in the above? What would be
>some example areas where IETF (or some other SDO) work is needed?

Some form of a schema and/or meta-data model for these functions (VMs,
devices through which all packets matching a tuple must pass (e.g., FW,
LB)) would be one example area.

Another would be a description of a rule set for how these functions and
resources would be composed. This rule set should include what happens in
response to changes in load and failures.
>
>> Provide methods to optimize the placement and interconnection of
>> computing, storage, appliance and networking resources.
>
>What would the standardization angle of this be? (FWIW, I don't see
>one right off, but I probably just don't understand what is
>meant/needed here).

One example being discussed in the alto wg is extending the model to have
multiple cost attributes. These could be used as inputs to a
multi-commodity optimization engine. IMO, defining the inputs and their
semantics/syntax is what should be standardized, and not the algorithms.
>
>> Work across a diverse set of networking technologies and architectures,
>> ranging from dedicated special-purpose networks (e.g., Ethernet/PBB
>>L2/L3
>> VPNs) to access over the Internet.
>
>This is a sub-bullet of the orchestration requirements?

Could be. What I had in mind is a high-level statement of reachability for
VMs and functions (e.g., FW, LB). All major networking environments must
be supported.

>
>> Be scalable on the high end to support hundreds of millions of Virtual
>> Machines -- think one VM per wireless subscriber.
>
>Is this a sub-requirement of orchestration? or multi-tenancy? or both?

Both. Dynamic orchestration is important because different classes of
users have different Time of Day, Day of Week and Day of Year needs for
computing, storage, networking and other resources.

>
>> Be capable of meeting the quality objectives across a range of
>>performance
>> profiles corresponding to classes of tenants.
>
>Could you elaborate on this? Is this about meeting SLAs?

Meeting SLAs is an important part of this. Availability of resources is a
very important SLA objective for which I believe more standards work is
needed. For example, a user who reserves resources in advance should have
a much higher likelihood of his/her needs being served as compared with a
user who requests resources on demand just before the resources are needed.

Many network SLA parameters (e.g., latency, loss, delay variation) are
already mostly described in existing IETF and other standards body
standards.

Standards for things such as computing power, virtual memory, disk
allocation (and access rate), other function throughput and capacities
(e.g., FW, LB) would be needed so that the resource request could be
completely specified. This may be best done in other standards bodies, but
I cannot point to a specific SDO.

>What is the
>standardization angle (i.e., where work is needed, where there is a
>gap today).

Some of the orchestration schema/meta-data standards work mentioned above
could be used as a means to communicate information about the resources
being requested. Additional semantics and syntax would be needed to
communicate and track the status of such resource requests.

>
>Thomas
>


From yakov@juniper.net  Tue Jan  3 06:36:12 2012
Return-Path: <yakov@juniper.net>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 47A4F21F849D for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 06:36:12 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.284
X-Spam-Level: 
X-Spam-Status: No, score=-106.284 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, SARE_MILLIONSOF=0.315, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id TeyY+dVAXf63 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 06:36:11 -0800 (PST)
Received: from exprod7og119.obsmtp.com (exprod7og119.obsmtp.com [64.18.2.16]) by ietfa.amsl.com (Postfix) with ESMTP id 5CE0621F8485 for <dc@ietf.org>; Tue,  3 Jan 2012 06:36:11 -0800 (PST)
Received: from P-EMHUB02-HQ.jnpr.net ([66.129.224.36]) (using TLSv1) by exprod7ob119.postini.com ([64.18.6.12]) with SMTP ID DSNKTwMSTD6UmR6X7XXBIHdj6d6sOjFhk+kJ@postini.com; Tue, 03 Jan 2012 06:36:11 PST
Received: from magenta.juniper.net (172.17.27.123) by P-EMHUB02-HQ.jnpr.net (172.24.192.33) with Microsoft SMTP Server (TLS) id 8.3.213.0; Tue, 3 Jan 2012 06:32:43 -0800
Received: from juniper.net (sapphire.juniper.net [172.17.28.108])	by magenta.juniper.net (8.11.3/8.11.3) with ESMTP id q03EWhS44922; Tue, 3 Jan 2012 06:32:43 -0800 (PST)	(envelope-from yakov@juniper.net)
Message-ID: <201201031432.q03EWhS44922@magenta.juniper.net>
To: Xuxiaohu <xuxiaohu@huawei.com>
In-Reply-To: <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE763ACD@szxeml525-mbs.china.huawei.com>
References: <13205C286662DE4387D9AF3AC30EF456D74E7BD952@EMBX01-WF.jnpr.net> <6665BC1FEA04AB47B1F75FA641C43BC0926D106C@FHDP1LUMXC7V41.us.one.verizon.com> <201112211830.pBLIUiZA017188@cichlid.raleigh.ibm.com> <13205C286662DE4387D9AF3AC30EF456D74E91DBFD@EMBX01-WF.jnpr.net> <618BE8B40039924EB9AED233D4A09C5102A4881E@XMB-BGL-416.cisco.com> <4EF7B019.3030202@riw.us> <201112281700.pBSH0kB2011575@cichlid.raleigh.ibm.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE76387F@szxeml525-mbs.china.huawei.com> <229A8E99-6EB2-49E5-B530-BA0F6C7C40AC@asgaard.org> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE7639B3@szxeml525-mbs.china.huawei.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE763ACD@szxeml525-mbs.china.huawei.com>
X-MH-In-Reply-To: Xuxiaohu <xuxiaohu@huawei.com> message dated "Sat, 31 Dec 2011 02:33:42 +0000."
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <53069.1325601163.1@juniper.net>
Date: Tue, 3 Jan 2012 06:32:43 -0800
From: Yakov Rekhter <yakov@juniper.net>
Cc: "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] Elevator Pitch
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 14:36:12 -0000

Xuxiaohu,

> Hi all,
> 
> Since there are some differences in the problems and requirements
> between data center network (DCN) and data center interconnect
> (DCI), I try to list several problems and requirements for DCN and
> DCI separately as follows. Here the data centers mainly refer to
> those multi-tenant data centers which are operated by public cloud
> providers to deliver cloud service (i.e., IaaS) to their customers
> (i.e., tenants).
> 
> 1. DCN problems and requirements:
> 
> 1) VM mobility across multiple pods -> LAN/subnet extension across pods
> 
> 2) Some cluster applications use non-IP or link-local multicast (optional) ->
>     Layer2 networking
> 
> 3) Multi-tenancy isolation -> VPN/VLAN instance scalability
> 
> 4) Millions of VMs -> MAC/IP forwarding table scalability
> 
> 5) Increasing bandwidth demands for server-to-server connectivity 
>    (i.e., east-west traffic)-> ECMP and shortest path forwarding capabilities
> 
> 6) Network resiliency -> Fast convergence and multi-homing

Do you need fast routing convergence, or fast connectivity restoration ?

> 7) Thousands of network devices -> Simplified provisioning and operation
> 
> 
> 
> 2. DCI problems and requirements:
> 
> 1) VMs mobility across data centers -> LAN/subnet extension across 
>    data centers.
> 
> 2) Multi-tenancy isolation -> VLAN/VPN instance scalability
> 
> 3) Millions of VMs -> MAC/IP forwarding table scalability
> 
> 4) Optimal utilization of WAN bandwidth resource -> Unknown unicast 
>    and ARP broadcast suppression
> 
> 5) Network resiliency -> Fast convergence and multi-homing

Do you need fast routing convergence, or fast connectivity restoration ?

> 6) Load-balancing across data centers -> Active-active DC exits
> 
> 7) Suboptimal path caused by LAN/subnet extension across data center ->
>    Path optimization for both VPN access and Internet access

Yakov.

From yakov@juniper.net  Tue Jan  3 06:52:20 2012
Return-Path: <yakov@juniper.net>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 8465D21F84EC for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 06:52:20 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.442
X-Spam-Level: 
X-Spam-Status: No, score=-106.442 tagged_above=-999 required=5 tests=[AWL=0.158, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id qY7SdDc-1lEW for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 06:52:20 -0800 (PST)
Received: from exprod7og117.obsmtp.com (exprod7og117.obsmtp.com [64.18.2.6]) by ietfa.amsl.com (Postfix) with ESMTP id D28B321F84E0 for <dc@ietf.org>; Tue,  3 Jan 2012 06:52:19 -0800 (PST)
Received: from P-EMHUB02-HQ.jnpr.net ([66.129.224.36]) (using TLSv1) by exprod7ob117.postini.com ([64.18.6.12]) with SMTP ID DSNKTwMWIZ26mV3LG75CH6nvRfxrYqKYqVJa@postini.com; Tue, 03 Jan 2012 06:52:19 PST
Received: from magenta.juniper.net (172.17.27.123) by P-EMHUB02-HQ.jnpr.net (172.24.192.33) with Microsoft SMTP Server (TLS) id 8.3.213.0; Tue, 3 Jan 2012 06:50:26 -0800
Received: from juniper.net (sapphire.juniper.net [172.17.28.108])	by magenta.juniper.net (8.11.3/8.11.3) with ESMTP id q03EoKS53547; Tue, 3 Jan 2012 06:50:20 -0800 (PST)	(envelope-from yakov@juniper.net)
Message-ID: <201201031450.q03EoKS53547@magenta.juniper.net>
To: Russ White <russw@riw.us>
In-Reply-To: <4EFF0DCA.5090707@riw.us> 
References: <13205C286662DE4387D9AF3AC30EF456D74E7BD952@EMBX01-WF.jnpr.net><6665BC1FEA04AB47B1F75FA641C43BC0926D106C@FHDP1LUMXC7V41.us.one.verizon.com><201112211830.pBLIUiZA017188@cichlid.raleigh.ibm.com> <13205C286662DE4387D9AF3AC30EF456D74E91DBFD@EMBX01-WF.jnpr.net> <618BE8B40039924EB9AED233D4A09C5102A4881E@XMB-BGL-416.cisco.com> <4EF7B019.3030202@riw.us> <201112281700.pBSH0kB2011575@cichlid.raleigh.ibm.com> <4EFC826C.80708@riw.us> <682C5C0D-10FD-49D7-BF48-28EB6EFBA72B@asgaard.org> <4EFF0DCA.5090707@riw.us>
X-MH-In-Reply-To: Russ White <russw@riw.us> message dated "Sat, 31 Dec 2011 08:27:38 -0500."
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <53309.1325602220.1@juniper.net>
Date: Tue, 3 Jan 2012 06:50:20 -0800
From: Yakov Rekhter <yakov@juniper.net>
Cc: dc@ietf.org, Christopher LILJENSTOLPE <cdl@asgaard.org>
Subject: Re: [dc] Elevator Pitch
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 14:52:20 -0000

Russ,

[clipped...]
 
> The issue of convergence presents another problem to think about... If
> convergence ends up being slower than the application timing out and
> searching for a new destination IP address, then why is switching IP
> addresses worse? The only way any of this makes sense is if it converges
> faster than a human would notice --and, increasingly, faster than a
> computer would notice.

I think we need to distinguish between routing convergence and
connectivity restoration time, as the two are not always the same.
With this in mind let me paraphrase what you said above as follows:

 The issue of connectivity restoration time presents another problem
 to think about... If this time ends up being longer than the
 application timing out and searching for a new destination IP
 address, then why is switching IP addresses worse? The only way
 any of this makes sense is if this time is faster than a human
 would notice --and, increasingly, faster than a computer would
 notice.

Yakov.

From robert@raszuk.net  Tue Jan  3 06:54:04 2012
Return-Path: <robert@raszuk.net>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 1B89221F84F3 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 06:54:04 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.599
X-Spam-Level: 
X-Spam-Status: No, score=-2.599 tagged_above=-999 required=5 tests=[BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id FCRc5+uZF14r for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 06:54:03 -0800 (PST)
Received: from mail1310.opentransfer.com (mail1310.opentransfer.com [76.162.254.103]) by ietfa.amsl.com (Postfix) with ESMTP id 4069721F84EC for <dc@ietf.org>; Tue,  3 Jan 2012 06:54:03 -0800 (PST)
Received: (qmail 8778 invoked by uid 399); 3 Jan 2012 14:54:02 -0000
Received: from unknown (HELO ?192.168.1.91?) (83.31.238.24) by mail1310.opentransfer.com with ESMTP; 3 Jan 2012 14:54:02 -0000
X-Originating-IP: 83.31.238.24
Message-ID: <4F031689.1050303@raszuk.net>
Date: Tue, 03 Jan 2012 15:54:01 +0100
From: Robert Raszuk <robert@raszuk.net>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:9.0) Gecko/20111222 Thunderbird/9.0.1
MIME-Version: 1.0
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us><7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com><618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com><CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com> <CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com> <4F030418.1070202@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: Pedro Marques <pedro.r.marques@gmail.com>, dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
Reply-To: robert@raszuk.net
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 14:54:04 -0000

Ashish,

OK let's just discuss what is in your draft on Hierarchical Addressing.

1. You have 48 bits: 32 go to the host and the remaining 16 go to the
switches. How do you aggregate at the TOR or AGGR switch boundary ? Are
you assuming a single HOST - SWITCH level with max 65K flat MACs ?

2. Can you deploy this on existing VMs and existing switches ?

3. What new protocol do you envision using to distribute those new MACs ?

4. What is the advantage of using this vs ILNP if we assume that hosts 
should be modified ?

5. The proposal does not support aggregation -- even the draft says so :)

"The total number of hardware entries anywhere in the network equals the 
total number of switches and remains agnostic of VM mobility."

So if I have 100K switches I cannot do any aggregation and need to
"route" 100K MAC addresses.

6. Who provides me the mapping between the switch MAC and the host/VM MAC
behind such a switch ? Do switches proxy-ARP globally within your domain ?
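The arithmetic behind point 5 can be made concrete; the per-switch VM density below is an illustrative assumption, not a figure from the draft:

```python
# With a 16-bit switch field, aggregation stops at the switch level:
# every switch becomes a "route", however many VMs sit behind it.
switches = 100_000
vms_per_switch = 40          # illustrative density, not from the draft

flat_mac_entries = switches * vms_per_switch  # no hierarchy at all
hier_entries = switches                       # one prefix per switch

print(flat_mac_entries, hier_entries)  # 4000000 100000
```

So the hierarchy collapses millions of flat MACs into one entry per switch, but the switch-level entries themselves still cannot be summarized further.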

Thx,
R.


> Robert,
>
>>> So you are advocating a solution which is based on encapsulation - that
> is fine.
>
> No, I'm not. Did you read the draft I mentioned?
> Hierarchical MAC is not encapsulation. It is a single 48-bit address.
>
>>> However how could you ever arrive at the conclusion that HMACs would
>>> scale better than "anything we know"? Well, I don't know about you, but I
>>> know that the key to scaling is the ability to aggregate. And it is not that
>>> huge a mystery that MACs aggregate rather poorly, while there are quite
>>> well deployed protocols (be it IPv4 or IPv6) which aggregate natively
>
> You've hit the nail on the head. So, read the draft I mentioned.
> A Hierarchical MAC uses the higher bits as a "network prefix" and the
> lower bits as a "host id".
> That makes it summarizable and aggregatable, just like an IP address.
> It has 46 usable bits, so the space is larger than the IPv4 Internet.
>
> I won't comment on the rest, because you have made an assumption about
> encapsulation.
>
> I refer to this -
> http://tools.ietf.org/html/draft-dalela-dc-approaches-00.
>
> Thanks, Ashish
>
>
> -----Original Message-----
> From: Robert Raszuk [mailto:robert@raszuk.net]
> Sent: Tuesday, January 03, 2012 7:05 PM
> To: Ashish Dalela (adalela)
> Cc: Pedro Marques; dc@ietf.org
> Subject: Re: [dc] [armd] IP over IP solution for data center
> interconnect
>
> Ashish,
>
>> The issues of scale you mentioned don't exist in Hierarchical MACs,
>> which scale better than anything we know of.
>
> So you are advocating a solution which is based on encapsulation - that is
> fine.
>
> However, how could you ever arrive at the conclusion that HMACs would
> scale better than "anything we know"? Well, I don't know about you, but I
> know that the key to scaling is the ability to aggregate. And it is not that
> huge a mystery that MACs aggregate rather poorly, while there are quite
> well deployed protocols (be it IPv4 or IPv6) which aggregate natively.
>
> For inter-dc this is IMHO a must. A must even if you build it using
> traditional routers or OF enabled switches - does not matter.
>
>> I don't want to split the requirements into multiple use-cases
>> because then this DC group will be many groups - one doing L2 and
>> another doing L3. That I think you will agree is not optimal for
>> anyone
>
> Why does MAC-in-IP not solve it for everyone? And there are deployed
> solutions already.
>
> IMHO what this group should accomplish is not to try to reinvent the
> world, but perhaps, as an example, discuss where the right boundary of
> encapsulation lies, how we should communicate between network and hosts,
> and what kind of DC instrumentation should be IETF-blessed for easy
> integration (i.e., the minimum subset of functionality it should possess,
> etc.)
>
> R.
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc
>
>


From diego@tid.es  Tue Jan  3 07:03:31 2012
Return-Path: <diego@tid.es>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 9F6AD21F850E for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 07:03:31 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.476
X-Spam-Level: 
X-Spam-Status: No, score=-3.476 tagged_above=-999 required=5 tests=[AWL=1.634,  BAYES_05=-1.11, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id Gjvl358Vzxax for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 07:03:30 -0800 (PST)
Received: from correo-bck.tid.es (correo-bck.tid.es [195.235.93.200]) by ietfa.amsl.com (Postfix) with ESMTP id 5428221F84ED for <dc@ietf.org>; Tue,  3 Jan 2012 07:03:29 -0800 (PST)
Received: from sbrightmailg02.hi.inet (Sbrightmailg02.hi.inet [10.95.78.105]) by tid.hi.inet (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LX80007I9TS98@tid.hi.inet> for dc@ietf.org; Tue, 03 Jan 2012 16:03:28 +0100 (MET)
Received: from vanvan (vanvan.hi.inet [10.95.78.49])	by sbrightmailg02.hi.inet (Symantec Messaging Gateway) with SMTP id 0A.28.02643.DB8130F4; Tue, 03 Jan 2012 16:03:26 +0100 (CET)
Received: from correo.tid.es (mailhost.hi.inet [10.95.64.100]) by tid.hi.inet (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTPS id <0LX80007C9TP98@tid.hi.inet> for dc@ietf.org; Tue, 03 Jan 2012 16:03:25 +0100 (MET)
Received: from EXCLU2K7.hi.inet ([10.95.67.65]) by htcasmad2.hi.inet ([192.168.0.2]) with mapi; Tue, 03 Jan 2012 16:03:25 +0100
Date: Tue, 03 Jan 2012 16:03:24 +0100
From: DIEGO LOPEZ GARCIA <diego@tid.es>
In-reply-to: <CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com>
To: Pedro Marques <pedro.r.marques@gmail.com>
Message-id: <9DC3B363-2525-402D-AFE1-727A21A09895@tid.es>
MIME-version: 1.0
Content-type: text/plain; charset=Windows-1252
Content-language: en-US
Content-transfer-encoding: quoted-printable
Accept-Language: en-US
Thread-topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-index: AczKKNdstvFSfklzRmC+llIm4+POAA==
acceptlanguage: en-US
X-AuditID: 0a5f4e69-b7f6b6d000000a53-93-4f0318bd0c18
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
X-Brightmail-Tracker: H4sIAAAAAAAAA+NgFprKKsWRmVeSWpSXmKPExsXCFe9nqHtAgtnfYNp2QYuW83dZHRg9liz5 yRTAGMVlk5Kak1mWWqRvl8CVceDZNeaCk2IVxyZINzDOFOpi5OSQEDCR+LbkGAuELSZx4d56 ti5GLg4hgW2MEvN6ZjNCOF8ZJY7++gWVaWSUWL/1MRNIC4uAqsTa/wfZQWw2AXWJlqPfwEYJ C3hKLOg8CWZzCgRLXHu7kbWLkYNDREBXYvPXQpAws4CfxMsTD1hBbF4BS4kFj5awQdiCEj8m 32OBqNGT+PjnNiOELS7R3HoTKq4t8eTdBbBeRqCrv59aA3aOiICXxM+976FsPYnL1+8yQ9SI StxpX88I8aWAxJI955khbFGJl4//sUL8tYdF4snFn6wTGMVnIbljFpI7ZiG5YxaSOxYwsqxi FCtOKspMzyjJTczMSTcw0svI1MvMSy3ZxAiJo8wdjMt3qhxiFOBgVOLh9XjH6C/EmlhWXJl7 iFGSg0lJlHeBOLO/EF9SfkplRmJxRnxRaU5q8SFGCQ5mJRFeoX1M/kK8KYmVValF+TApGQ4O JQneJSBtgkWp6akVaZk5wGQBk2bi4ARp5wFqnwNSw1tckJhbnJkOkT/FKCklzrsWJCEAksgo zYPrfcUoDnSkMO9tkCwPMK3Bdb0CGsgENHDXHpB7iksSEVJSDYwHBWR6Mmtzsrdp3U+o6jnV pMg0J0Dx7f34et6DqQdY9Y//rpvKNufhn6Rl9p6Fb+aXN1x48TM/beqFN599ospMnDmY+TXr BY2E7ltxRDJxCZvUZEZKGe1+Enoik/NvT32T5xahiBOuDibPskuMpH9L/ylbvp91amh+3M8f TfeUzKWTDgb1KLEUZyQaajEXFScCAAX6dE0oAwAA
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com> <618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com> <4EFC947A.4020007@riw.us> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com> <CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com> <CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com>
Cc: "Ashish Dalela \(adalela\)" <adalela@cisco.com>, "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 15:03:31 -0000

On 2 Jan 2012, at 18:40 , Pedro Marques wrote:
> The characteristics that historically made L2 the lowest cost solution
> for data-center access and aggregation become strong drawbacks once you
> assume that a member of the 'segment' can be anywhere in the
> data-center.
>
> One of the requirements that is important to note is that tenants need
> multiple network 'segments' (i.e. a logical VLAN) and that these
> segments must be able to exchange traffic with a control similar to
> today's vlan-to-vlan routed ACLs.
>
> In traditional L2/L3 designs the VLAN interfaces sit at the
> aggregation layer, separating downstream L2 vlans. Once the members of
> the "VLAN" can be anywhere in the DC, the inter-vlan traffic exchange
> capability must now also be supported taking that into account.

Agreed.

> Mobile IP-style home agent solutions are not acceptable given that
> they imply traffic crossing the DC network twice.

Sure for the steady state, but I guess it could be an acceptable solution in transient situations, when a migration is taking place.

> If you want to follow a traditional 3-tier design, you end up with
> all the aggregation switches being members of all the vlans,
> creating very large broadcast domains with typically a small
> number of end-systems.
>
> VMs are also not managed the way that traditional servers are. The
> traditional advantages of a mixed L2/L3 design no longer apply here.
> When a VM is instantiated, the management system that instantiates
> it knows its network address(es). The traditional L2 "plug-and-play"
> simplicity is now irrelevant because at any reasonable scale VMs are
> not "plug-and-play"... they are explicitly managed.

Agreed. Anyway, you'll agree that "sensible default values" would be useful...
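
The quoted point above -- that at instantiation time the management system already knows the VM's addresses -- amounts to pushing bindings into a directory rather than discovering them by L2 flood-and-learn. A minimal illustrative sketch; `Registry` and `instantiate_vm` are invented names, not any real orchestration API:

```python
# Hypothetical sketch: the orchestrator assigns addresses up front and
# registers the MAC/IP binding in a directory at VM creation time, so
# "plug-and-play" L2 learning is unnecessary. All names are illustrative.

class Registry:
    """Central MAC/IP/location directory, keyed by tenant segment."""
    def __init__(self):
        self.bindings = {}  # (segment, mac) -> (ip, host)

    def register(self, segment, mac, ip, host):
        self.bindings[(segment, mac)] = (ip, host)

    def resolve(self, segment, mac):
        # Returns (ip, host) or None if the binding is unknown.
        return self.bindings.get((segment, mac))

def instantiate_vm(registry, segment, mac, ip, host):
    # The binding is pushed by the management system, not discovered
    # by the network.
    registry.register(segment, mac, ip, host)

reg = Registry()
instantiate_vm(reg, segment="tenant-a-vlan10",
               mac="02:00:00:00:00:01", ip="10.0.0.5", host="hv-17")
assert reg.resolve("tenant-a-vlan10", "02:00:00:00:00:01") == ("10.0.0.5", "hv-17")
```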

> There is a need in the IETF to standardize solutions for "larger
> broadcast domains". They are applicable to small data-centers, and
> there is a large number of people who just want to extend the
> traditional model, since that is what they are familiar with.
>
> But there is also interest in standardizing solutions that do not use
> broadcast domains. There are enough people that believe the latter are
> cheaper to operate at large scale.


You can count me in...

Be goode,

--
"Esta vez no fallaremos, Doctor Infierno"

Dr Diego R. Lopez
Telefonica I+D

e-mail: diego@tid.es
Tel:      +34 913 129 041
Mobile: +34 682 051 091
-----------------------------------------


Este mensaje se dirige exclusivamente a su destinatario. Puede consultar nuestra política de envío y recepción de correo electrónico en el enlace situado más abajo.
This message is intended exclusively for its addressee. We only send and receive email on the basis of the terms set out at.
http://www.tid.es/ES/PAGINAS/disclaimer.aspx

From yakov@juniper.net  Tue Jan  3 07:12:07 2012
Return-Path: <yakov@juniper.net>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id E78BD21F847A for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 07:12:07 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.22
X-Spam-Level: 
X-Spam-Status: No, score=-106.22 tagged_above=-999 required=5 tests=[AWL=-0.221, BAYES_00=-2.599, J_CHICKENPOX_31=0.6, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id auaEqg8sH6+u for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 07:12:07 -0800 (PST)
Received: from exprod7og119.obsmtp.com (exprod7og119.obsmtp.com [64.18.2.16]) by ietfa.amsl.com (Postfix) with ESMTP id 2D8DC21F8472 for <dc@ietf.org>; Tue,  3 Jan 2012 07:12:07 -0800 (PST)
Received: from P-EMHUB02-HQ.jnpr.net ([66.129.224.36]) (using TLSv1) by exprod7ob119.postini.com ([64.18.6.12]) with SMTP ID DSNKTwMaub7laEnPs+JQDRM6y0NF0jMeV088@postini.com; Tue, 03 Jan 2012 07:12:07 PST
Received: from magenta.juniper.net (172.17.27.123) by P-EMHUB02-HQ.jnpr.net (172.24.192.33) with Microsoft SMTP Server (TLS) id 8.3.213.0; Tue, 3 Jan 2012 07:10:12 -0800
Received: from juniper.net (sapphire.juniper.net [172.17.28.108])	by magenta.juniper.net (8.11.3/8.11.3) with ESMTP id q03FABS62810; Tue, 3 Jan 2012 07:10:11 -0800 (PST)	(envelope-from yakov@juniper.net)
Message-ID: <201201031510.q03FABS62810@magenta.juniper.net>
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B25310@XMB-BGL-416.cisco.com> 
References: <618BE8B40039924EB9AED233D4A09C5102B2527A@XMB-BGL-416.cisco.com> <D96F76EF-0011-4F33-A1CF-EC9AD12BA411@gmail.com> <618BE8B40039924EB9AED233D4A09C5102B25310@XMB-BGL-416.cisco.com>
X-MH-In-Reply-To: "Ashish Dalela (adalela)" <adalela@cisco.com> message dated "Sat, 31 Dec 2011 07:24:13 +0530."
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <53648.1325603411.1@juniper.net>
Date: Tue, 3 Jan 2012 07:10:11 -0800
From: Yakov Rekhter <yakov@juniper.net>
Cc: dc@ietf.org, Aldrin Isaac <aldrin.isaac@gmail.com>
Subject: Re: [dc] new drafts
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 15:12:08 -0000

Ashish,

> Hi Aldrin,
> 
> >> How would you address gratuitous ARP when using the hierarchical MAC
> addressing with a registry to store MAC-IP bindings?
> 
> We should ideally have a protocol that maps ARP messages into the
> registry and vice versa. Why new protocol? Greater security and
> reliability plus avoiding ARP broadcasts. Gratuitous ARP doesn't have an
> acknowledge, so if you want to perform some actions based on this, you
> need to get an "acknowledge" for it. Gratuitous ARP can get dropped.
> 
> The other issue is that a user can misuse it to do MAC hijacking. But if
> you have a hierarchical MAC you can't hijack because network knows your
> location. Things like dot1x which solved these issues in the campus
> space are not used in datacenter.
> 
> >> Also, when using MAC prefixes in complex L2VPN topologies how could
> we address the risk of a hub forwarding into the wrong spoke context on
> the leaf switch?  Each egress port on the leaf switch may be in a
> different context (ex: EVI in EVPN).
> 
> We should avoid building any kind of VRF at the control plane, because
> that's just not scalable. 

Would you care to provide detailed technical analysis to support your
claim about VRFs "just not scalable", or should this claim be treated 
as proof by emphatic assertion ?

Yakov.

From diego@tid.es  Tue Jan  3 07:41:31 2012
Return-Path: <diego@tid.es>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id CA54211E8072 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 07:41:31 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -4.765
X-Spam-Level: 
X-Spam-Status: No, score=-4.765 tagged_above=-999 required=5 tests=[AWL=1.834,  BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 18WaFRyUbWfJ for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 07:41:31 -0800 (PST)
Received: from correo-bck.tid.es (correo-bck.tid.es [195.235.93.200]) by ietfa.amsl.com (Postfix) with ESMTP id 5F34711E8071 for <dc@ietf.org>; Tue,  3 Jan 2012 07:41:30 -0800 (PST)
Received: from sbrightmailg02.hi.inet (Sbrightmailg02.hi.inet [10.95.78.105]) by tid.hi.inet (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LX8000MVBL598@tid.hi.inet> for dc@ietf.org; Tue, 03 Jan 2012 16:41:29 +0100 (MET)
Received: from vanvan (vanvan.hi.inet [10.95.78.49])	by sbrightmailg02.hi.inet (Symantec Messaging Gateway) with SMTP id C3.98.02643.9A1230F4; Tue, 03 Jan 2012 16:41:29 +0100 (CET)
Received: from correo.tid.es (mailhost.hi.inet [10.95.64.100]) by tid.hi.inet (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTPS id <0LX8000MQBL498@tid.hi.inet> for dc@ietf.org; Tue, 03 Jan 2012 16:41:29 +0100 (MET)
Received: from EXCLU2K7.hi.inet ([10.95.67.65]) by htcasmad1.hi.inet ([192.168.0.1]) with mapi; Tue, 03 Jan 2012 16:41:28 +0100
Date: Tue, 03 Jan 2012 16:41:27 +0100
From: DIEGO LOPEZ GARCIA <diego@tid.es>
In-reply-to: <4F0307F6.8020504@raszuk.net>
To: "robert@raszuk.net" <robert@raszuk.net>
Message-id: <1C25D028-67DE-4FCE-9A05-56D67BB4C00F@tid.es>
MIME-version: 1.0
Content-type: text/plain; charset=utf-8
Content-language: en-US
Content-transfer-encoding: base64
Accept-Language: en-US
Thread-topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-index: AczKLii6zPIOFhdtQiyPwo26adGjbQ==
acceptlanguage: en-US
X-AuditID: 0a5f4e69-b7f6b6d000000a53-3f-4f0321a928dc
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
X-Brightmail-Tracker: H4sIAAAAAAAAA+NgFprKKsWRmVeSWpSXmKPExsXCFe9nqLtSkdnfoLlR0qLl/F1WB0aPJUt+ MgUwRnHZpKTmZJalFunbJXBlPHnRwlhwirdi2b8vjA2Mc3i7GDk5JARMJObeOcEGYYtJXLi3 Hsjm4hAS2MYoseX6J0YI5yujxLNFM1khnEZGifVfljCBtLAIqEq8OtoNZrMJqEu0HP3GAmIL C3hKLOg8CWZzCmhJbN55jhnEFhHQlti5aDHYOmYBH4mLp1aD2bwClhIXv85mgbAFJX5Mvgdk cwDVqEtMmZILUS4u0dx6kwXCVpSYtqiBEcRmBLr6+6k1TBDjvSR+7n0PZetJbDg+iQ2iRlTi Tvt6RogvBSSW7DnPDGGLSrx8/A/qr/UsEtt/vWObwCg+C8kZsxDOmIXkjFlIzljAyLKKUaw4 qSgzPaMkNzEzJ93ASC8jUy8zL7VkEyMkjjJ3MC7fqXKIUYCDUYmH1+Mdo78Qa2JZcWXuIUZJ DiYlUd7dcsz+QnxJ+SmVGYnFGfFFpTmpxYcYJTiYlUR4hfYx+QvxpiRWVqUW5cOkZDg4lCR4 zyoAtQkWpaanVqRl5gCTBUyaiYMTpJ0HqP0vSA1vcUFibnFmOkT+FKOklDjvPZCEAEgiozQP rvcVozjQkcK802WAsjzAtAbX9QpoIBPQwF17QO4pLklESEk1MCads7VrNtnOGf+jOr6nWIrZ 9alc/O7jJ2PbFNZ8e9Yubv7VSOH4q2UL+GVqF7+ZNv/T0UlNhvWh+7b6B7jpRdWxqy+INTJ/ m1543vxu2ruTLAvjVgv2eJocr81x0p6tnH1NOGJChBVrcOQ7zwq5L1OLebxO/jzo+O/BxFXd 77ew35lhnSRbr8RSnJFoqMVcVJwIAKTDiJEoAwAA
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com> <618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com> <4EFC947A.4020007@riw.us> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9BB8@MX14A.corp.emc.com> <4F023231.7010904@raszuk.net> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BA14@MX14A.corp.emc.com> <4F0307F6.8020504@raszuk.net>
Cc: "david.black@emc.com" <david.black@emc.com>, "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 15:41:31 -0000

Hi,

On 3 Jan 2012, at 14:51 , Robert Raszuk wrote:
> So for scalable IaaS at the L2 level what seem to be missing is
> effectively an ARP servers which would provide VM MAC resolution plane
> to IP address used for it's encapsulation.
>
> Just like DNS resolves char names to IP address or LISP ALT resolves IDs
> to locators we now need a way to within your domain (IaaS boundary) or
> maybe even one day beyond resolve MAC address to IP.
>
> Do we have such proposal already on the table ?

I am probably missing some detail, but could not this resolution be done at the virtual network manager in the hypervisor? That would require to only change host hypervisors and not guest VM OSs.

Be goode,

--
"Esta vez no fallaremos, Doctor Infierno"

Dr Diego R. Lopez
Telefonica I+D

e-mail: diego@tid.es
Tel:      +34 913 129 041
Mobile: +34 682 051 091
-----------------------------------------


Este mensaje se dirige exclusivamente a su destinatario. Puede consultar nuestra política de envío y recepción de correo electrónico en el enlace situado más abajo.
This message is intended exclusively for its addressee. We only send and receive email on the basis of the terms set out at.
http://www.tid.es/ES/PAGINAS/disclaimer.aspx

From adalela@cisco.com  Tue Jan  3 07:44:36 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 162EA21F8543 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 07:44:36 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.025
X-Spam-Level: 
X-Spam-Status: No, score=-2.025 tagged_above=-999 required=5 tests=[AWL=-0.026, BAYES_00=-2.599, J_CHICKENPOX_31=0.6]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id c81uRTlYm6rK for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 07:44:35 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id EB26F21F851F for <dc@ietf.org>; Tue,  3 Jan 2012 07:44:34 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=2100; q=dns/txt; s=iport; t=1325605475; x=1326815075; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=pDdH1PYqJgRkw7Fwjuij5QZpYcyy8/1wB1FO2CLSgD8=; b=BROngtJnG86BMlMaMab6///tc0QmTxvFNkgx+hmuoQYSaFRBC2zMAZ+S 58U0xJtOBwN0dejH2fm1pXFl1Jp9+a6RVf5HIjjqTkBbPdS9mXPKqxp0e eBtfic3iuPhGa146YS9VGDEqdE9SHbwc8BX9PXCdOwjMBGdvcqJgNgwyx w=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AjIPAB4iA09Io8UY/2dsb2JhbABDggWrYIFyAQEBAwEBAQEPAR0KNAsFBwQCAQgRBAEBCwYXAQYBJh8JCAEBBAsICBqHWAiXNgGdZwSLLGMEiDWfCQ
X-IronPort-AV: E=Sophos;i="4.71,450,1320624000";  d="scan'208";a="2691945"
Received: from vla196-nat.cisco.com (HELO bgl-core-1.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 03 Jan 2012 15:44:33 +0000
Received: from xbh-bgl-412.cisco.com (xbh-bgl-412.cisco.com [72.163.129.202]) by bgl-core-1.cisco.com (8.14.3/8.14.3) with ESMTP id q03FiXhW030277; Tue, 3 Jan 2012 15:44:33 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-412.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Tue, 3 Jan 2012 21:14:33 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Tue, 3 Jan 2012 21:14:31 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B2569C@XMB-BGL-416.cisco.com>
In-Reply-To: <201201031510.q03FABS62810@magenta.juniper.net>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] new drafts
Thread-Index: AczKKhVx/FAUiTEXRmGHnComGGo4QwAAjnGA
References: <618BE8B40039924EB9AED233D4A09C5102B2527A@XMB-BGL-416.cisco.com><D96F76EF-0011-4F33-A1CF-EC9AD12BA411@gmail.com><618BE8B40039924EB9AED233D4A09C5102B25310@XMB-BGL-416.cisco.com> <201201031510.q03FABS62810@magenta.juniper.net>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Yakov Rekhter" <yakov@juniper.net>
X-OriginalArrivalTime: 03 Jan 2012 15:44:33.0179 (UTC) FILETIME=[96A7D6B0:01CCCA2E]
Cc: Aldrin Isaac <aldrin.isaac@gmail.com>, dc@ietf.org
Subject: Re: [dc] new drafts
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 15:44:36 -0000

Yakov,

>> Would you care to provide detailed technical analysis to support your
claim about VRFs "just not scalable",

VM mobility requires insertion of host routes. As VMs move and/or are
created/deleted, these host routes need to be propagated everywhere.
That's a control-plane scaling problem, plus a convergence problem.
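
A back-of-envelope calculation makes the churn concrete (all numbers here are illustrative assumptions, not measurements from any real deployment):

```python
# Back-of-envelope for host-route churn when every /32 must be pushed
# to every device; every number is an illustrative assumption.

vms = 100_000            # VMs, each needing a /32 host route
move_rate = 0.01         # fraction of VMs moving or created/deleted per minute
switches = 500           # devices that must receive every update

updates_per_min = vms * move_rate              # route changes per minute
messages_per_min = updates_per_min * switches  # update messages pushed network-wide

print(f"{updates_per_min:.0f} route changes/min -> "
      f"{messages_per_min:.0f} update messages/min network-wide")
# prints "1000 route changes/min -> 500000 update messages/min network-wide"
```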

Thanks, Ashish


-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
Yakov Rekhter
Sent: Tuesday, January 03, 2012 8:40 PM
To: Ashish Dalela (adalela)
Cc: dc@ietf.org; Aldrin Isaac
Subject: Re: [dc] new drafts

Ashish,

> Hi Aldrin,
> 
> >> How would you address gratuitous ARP when using the hierarchical MAC
> addressing with a registry to store MAC-IP bindings?
> 
> We should ideally have a protocol that maps ARP messages into the
> registry and vice versa. Why new protocol? Greater security and
> reliability plus avoiding ARP broadcasts. Gratuitous ARP doesn't have an
> acknowledge, so if you want to perform some actions based on this, you
> need to get an "acknowledge" for it. Gratuitous ARP can get dropped.
> 
> The other issue is that a user can misuse it to do MAC hijacking. But if
> you have a hierarchical MAC you can't hijack because network knows your
> location. Things like dot1x which solved these issues in the campus
> space are not used in datacenter.
> 
> >> Also, when using MAC prefixes in complex L2VPN topologies how could
> we address the risk of a hub forwarding into the wrong spoke context on
> the leaf switch?  Each egress port on the leaf switch may be in a
> different context (ex: EVI in EVPN).
> 
> We should avoid building any kind of VRF at the control plane, because
> that's just not scalable.

Would you care to provide detailed technical analysis to support your
claim about VRFs "just not scalable", or should this claim be treated
as proof by emphatic assertion ?

Yakov.
_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From robert@raszuk.net  Tue Jan  3 07:46:32 2012
Return-Path: <robert@raszuk.net>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 7CC8B21F8543 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 07:46:32 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.599
X-Spam-Level: 
X-Spam-Status: No, score=-2.599 tagged_above=-999 required=5 tests=[AWL=0.000,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id DMSX30Ghd0W3 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 07:46:31 -0800 (PST)
Received: from mail1310.opentransfer.com (mail1310.opentransfer.com [76.162.254.103]) by ietfa.amsl.com (Postfix) with ESMTP id 9153921F851F for <dc@ietf.org>; Tue,  3 Jan 2012 07:46:31 -0800 (PST)
Received: (qmail 23565 invoked by uid 399); 3 Jan 2012 15:46:31 -0000
Received: from unknown (HELO ?192.168.1.91?) (83.31.238.24) by mail1310.opentransfer.com with ESMTP; 3 Jan 2012 15:46:31 -0000
X-Originating-IP: 83.31.238.24
Message-ID: <4F0322D6.7020600@raszuk.net>
Date: Tue, 03 Jan 2012 16:46:30 +0100
From: Robert Raszuk <robert@raszuk.net>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:9.0) Gecko/20111222 Thunderbird/9.0.1
MIME-Version: 1.0
To: DIEGO LOPEZ GARCIA <diego@tid.es>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com> <618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com> <4EFC947A.4020007@riw.us> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9BB8@MX14A.corp.emc.com> <4F023231.7010904@raszuk.net> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BA14@MX14A.corp.emc.com> <4F0307F6.8020504@raszuk.net> <1C25D028-67DE-4FCE-9A05-56D67BB4C00F@tid.es>
In-Reply-To: <1C25D028-67DE-4FCE-9A05-56D67BB4C00F@tid.es>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Cc: "david.black@emc.com" <david.black@emc.com>, "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
Reply-To: robert@raszuk.net
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 15:46:32 -0000

Hi Diego,

The point is to avoid storing such mappings everywhere (read: in every
host's hypervisor). Only those mappings which are necessary for a given
host should be stored locally, if at all.

Analogy: do you store all DNS entries locally on your PC/Mac?
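
The "ARP server" idea discussed above can be sketched as an authoritative directory plus a per-hypervisor demand cache, much like a DNS stub resolver. All class and method names below are hypothetical, chosen only for illustration:

```python
# Sketch of on-demand MAC -> locator resolution: hypervisors query a
# directory on a cache miss and keep only the mappings they actually
# use, instead of every host storing the full table.

from collections import OrderedDict

class MapServer:
    """Authoritative MAC -> locator directory (the 'ARP server')."""
    def __init__(self, mappings):
        self.mappings = dict(mappings)

    def lookup(self, mac):
        return self.mappings.get(mac)

class HypervisorCache:
    """Per-host LRU cache: stores only mappings this host needs."""
    def __init__(self, server, capacity=1024):
        self.server = server
        self.capacity = capacity
        self.cache = OrderedDict()

    def resolve(self, mac):
        if mac in self.cache:
            self.cache.move_to_end(mac)      # LRU refresh on hit
            return self.cache[mac]
        locator = self.server.lookup(mac)    # miss -> ask the directory
        if locator is not None:
            self.cache[mac] = locator
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict least recently used
        return locator

server = MapServer({"02:00:00:00:00:01": "192.0.2.10"})
hv = HypervisorCache(server)
assert hv.resolve("02:00:00:00:00:01") == "192.0.2.10"  # resolved, now cached
```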

Best,
R.

> Hi,
>
> On 3 Jan 2012, at 14:51 , Robert Raszuk wrote:
>> So for scalable IaaS at the L2 level what seem to be missing is
>> effectively an ARP servers which would provide VM MAC resolution plane
>> to IP address used for it's encapsulation.
>>
>> Just like DNS resolves char names to IP address or LISP ALT resolves IDs
>> to locators we now need a way to within your domain (IaaS boundary) or
>> maybe even one day beyond resolve MAC address to IP.
>>
>> Do we have such proposal already on the table ?
>
> I am probably missing some detail, but could not this resolution be done at the virtual network manager in the hypervisor? That would require to only change host hypervisors and not guest VM OSs.
>
> Be goode,
>
> --
> "Esta vez no fallaremos, Doctor Infierno"
>
> Dr Diego R. Lopez
> Telefonica I+D
>
> e-mail: diego@tid.es
> Tel:      +34 913 129 041
> Mobile: +34 682 051 091
> -----------------------------------------
>
>
> Este mensaje se dirige exclusivamente a su destinatario. Puede consultar nuestra política de envío y recepción de correo electrónico en el enlace situado más abajo.
> This message is intended exclusively for its addressee. We only send and receive email on the basis of the terms set out at.
> http://www.tid.es/ES/PAGINAS/disclaimer.aspx


From robert@raszuk.net  Tue Jan  3 07:53:57 2012
Return-Path: <robert@raszuk.net>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 37CBF11E8073 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 07:53:57 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.299
X-Spam-Level: 
X-Spam-Status: No, score=-2.299 tagged_above=-999 required=5 tests=[AWL=-0.300, BAYES_00=-2.599, J_CHICKENPOX_31=0.6]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id ocnU848K+glO for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 07:53:56 -0800 (PST)
Received: from mail1310.opentransfer.com (mail1310.opentransfer.com [76.162.254.103]) by ietfa.amsl.com (Postfix) with ESMTP id 7A8F611E8071 for <dc@ietf.org>; Tue,  3 Jan 2012 07:53:56 -0800 (PST)
Received: (qmail 27061 invoked by uid 399); 3 Jan 2012 15:53:55 -0000
Received: from unknown (HELO ?192.168.1.91?) (83.31.238.24) by mail1310.opentransfer.com with ESMTP; 3 Jan 2012 15:53:55 -0000
X-Originating-IP: 83.31.238.24
Message-ID: <4F032492.4030201@raszuk.net>
Date: Tue, 03 Jan 2012 16:53:54 +0100
From: Robert Raszuk <robert@raszuk.net>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:9.0) Gecko/20111222 Thunderbird/9.0.1
MIME-Version: 1.0
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
References: <618BE8B40039924EB9AED233D4A09C5102B2527A@XMB-BGL-416.cisco.com><D96F76EF-0011-4F33-A1CF-EC9AD12BA411@gmail.com><618BE8B40039924EB9AED233D4A09C5102B25310@XMB-BGL-416.cisco.com> <201201031510.q03FABS62810@magenta.juniper.net> <618BE8B40039924EB9AED233D4A09C5102B2569C@XMB-BGL-416.cisco.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B2569C@XMB-BGL-416.cisco.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: Yakov Rekhter <yakov@juniper.net>, dc@ietf.org, Aldrin Isaac <aldrin.isaac@gmail.com>
Subject: Re: [dc] new drafts
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
Reply-To: robert@raszuk.net
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 15:53:57 -0000

Ashish,

VM mobility based on re-propagating host routes, even within your own
domain, looks broken to me. VM mobility timing requirements are in
milliseconds or tens of ms, while routing distribution or even FIB update
with host routes (assuming we could propagate the host routes ahead of
migration and then just trigger the activation) would take hundreds of ms
or seconds, depending on the scope. I agree with your assertion that this
is not scalable.

However, this has nothing to do with stating that VRFs are "just not
scalable". VRF instantiation and VRF scaling is a rather opaque topic;
one could even claim that control-plane VRFs are among the most
scalable tools we have at our disposal today.

Best,
R.

> Yakov,
>
>>> Would you care to provide detailed technical analysis to support your
> claim about VRFs "just not scalable",
>
> VM mobility requires insertion of host-routes. As VMs move and/or are
> created/deleted, these host routes need to be propagated everywhere.
> That's a control plane scaling problem, plus a problem about
> convergence.
>
> Thanks, Ashish
>
>
> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> Yakov Rekhter
> Sent: Tuesday, January 03, 2012 8:40 PM
> To: Ashish Dalela (adalela)
> Cc: dc@ietf.org; Aldrin Isaac
> Subject: Re: [dc] new drafts
>
> Ashish,
>
>> Hi Aldrin,
>>
>>>> How would you address gratuitous ARP when using the hierarchical MAC
>> addressing with a registry to store MAC-IP bindings?
>>
>> We should ideally have a protocol that maps ARP messages into the
>> registry and vice versa. Why new protocol? Greater security and
>> reliability plus avoiding ARP broadcasts. Gratuitous ARP doesn't have an
>> acknowledge, so if you want to perform some actions based on this, you
>> need to get an "acknowledge" for it. Gratuitous ARP can get dropped.
>>
>> The other issue is that a user can misuse it to do MAC hijacking. But if
>> you have a hierarchical MAC you can't hijack because network knows your
>> location. Things like dot1x which solved these issues in the campus
>> space are not used in datacenter.
>>
>>>> Also, when using MAC prefixes in complex L2VPN topologies how could
>> we address the risk of a hub forwarding into the wrong spoke context on
>> the leaf switch?  Each egress port on the leaf switch may be in a
>> different context (ex: EVI in EVPN).
>>
>> We should avoid building any kind of VRF at the control plane, because
>> that's just not scalable.
>
> Would you care to provide detailed technical analysis to support your
> claim about VRFs "just not scalable", or should this claim be treated
> as proof by emphatic assertion ?
>
> Yakov.


From diego@tid.es  Tue Jan  3 08:12:05 2012
Return-Path: <diego@tid.es>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 59DA921F843E for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 08:12:05 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.224
X-Spam-Level: 
X-Spam-Status: No, score=-3.224 tagged_above=-999 required=5 tests=[AWL=-0.624, BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id AqlatCkOwYnD for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 08:12:04 -0800 (PST)
Received: from tidos.tid.es (tidos.tid.es [195.235.93.44]) by ietfa.amsl.com (Postfix) with ESMTP id 4608521F84BB for <dc@ietf.org>; Tue,  3 Jan 2012 08:12:04 -0800 (PST)
Received: from sbrightmailg01.hi.inet (sbrightmailg01.hi.inet [10.95.64.104]) by tid.hi.inet (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LX800ACED03IE@tid.hi.inet> for dc@ietf.org; Tue, 03 Jan 2012 17:12:03 +0100 (MET)
Received: from tid (tid.hi.inet [10.95.64.10])	by sbrightmailg01.hi.inet (Symantec Messaging Gateway) with SMTP id 8E.23.02893.3D8230F4; Tue, 03 Jan 2012 17:12:03 +0100 (CET)
Received: from correo.tid.es (mailhost.hi.inet [10.95.64.100]) by tid.hi.inet (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTPS id <0LX800AC9D02IE@tid.hi.inet> for dc@ietf.org; Tue, 03 Jan 2012 17:12:03 +0100 (MET)
Received: from EXCLU2K7.hi.inet ([10.95.67.65]) by htcasmad1.hi.inet ([192.168.0.1]) with mapi; Tue, 03 Jan 2012 17:12:02 +0100
Date: Tue, 03 Jan 2012 17:12:01 +0100
From: DIEGO LOPEZ GARCIA <diego@tid.es>
In-reply-to: <4F0322D6.7020600@raszuk.net>
To: "robert@raszuk.net" <robert@raszuk.net>
Message-id: <657E9711-12E3-451C-B6AA-492ABCD7988B@tid.es>
MIME-version: 1.0
Content-type: text/plain; charset=utf-8
Content-language: en-US
Content-transfer-encoding: base64
Accept-Language: en-US
Thread-topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-index: AczKMm1yx3BL1XvaRK2Y4+hTYQ+BeA==
acceptlanguage: en-US
X-AuditID: 0a5f4068-b7f2d6d000000b4d-97-4f0328d36a6c
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
X-Brightmail-Tracker: H4sIAAAAAAAAA+NgFprKKsWRmVeSWpSXmKPExsXCFe/ApXtZg9nf4NAvLouW83dZHRg9liz5 yRTAGMVlk5Kak1mWWqRvl8CVcfzbLJaCXdwV+1ctZm1gnMLdxcjJISFgIrFr1QpGCFtM4sK9 9WxdjFwcQgIbGCV6Xj5hhHC+Mkr8m/2KCcJpBHL+T2YDaWERUJWY03+WCcRmE1CXaDn6jQXE FhbwlFjQeRLM5hTQkvi7ZRkziC0ioC2xc9FisF5mAR+Ji6dWg9m8ApYSk9qnMUHYghI/Jt8D 6uUAqlGXmDIlF6JcXKK59SYLhK0oMW1RA9jVjEBXfz+1hglivJfEz73voWw9ib7uPnaIGlGJ O+3rob4UkFiy5zwzhC0q8fLxP1aIv/6ySDw8u5tpAqP4LCRnzEI4YxaSM2YhOWMBI8sqRrHi pKLM9IyS3MTMnHQDQ72MTL3MvNSSTYyQOMrYwbh8p8ohRgEORiUeXoZfjP5CrIllxZW5hxgl OZiURHmT1Zj9hfiS8lMqMxKLM+KLSnNSiw8xSnAwK4nwLlUFyvGmJFZWpRblw6RkODiUJHjd gDEvJFiUmp5akZaZA0wWMGkmDk6Qdh6g9kdyIO3FBYm5xZnpEPlTjJJS4rxhIM0CIImM0jy4 3leM4kBHCvNOlwHK8gDTGlzXK6CBTEADd+1hAhlYkoiQkmpgNDjHUrD5WopmWr5b1A+dm4cY er55xd2W+1W9QY2xXmyTGN+Bjx94NlvYFgtMbSl/0pfV4LNsd8u/0LPWjKxakx1ml/0Q+tL4 8PkTn6J597ulXAzafnv7F6zhqJ3MaTFlFZ+vwBnuWFnLzS9PSlSu2MhRsmnDAe+GmNOul66e evwtT0iC69lkJZbijERDLeai4kQAa4Ep1igDAAA=
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com> <618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com> <4EFC947A.4020007@riw.us> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9BB8@MX14A.corp.emc.com> <4F023231.7010904@raszuk.net> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BA14@MX14A.corp.emc.com> <4F0307F6.8020504@raszuk.net> <1C25D028-67DE-4FCE-9A05-56D67BB4C00F@tid.es> <4F0322D6.7020600@raszuk.net>
Cc: "david.black@emc.com" <david.black@emc.com>, "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 16:12:05 -0000

Hi Robert,

On 3 Jan 2012, at 16:46 , Robert Raszuk wrote:
> The point is to avoid storing such mapping everywhere (read in every
> host's hypervisor). Only those mappings which are necessary for a given
> host should be stored locally, if at all necessary.
>
> Analogy ... Do you store all DNS entries locally on your pc/mac ?


I cannot agree more. You store data in some common (distributed)
service, but make the hypervisor do the resolution. Much like resolvers
and servers in DNS as well.

Be goode,

--
"This time we will not fail, Doctor Infierno"

Dr Diego R. Lopez
Telefonica I+D

e-mail: diego@tid.es
Tel:      +34 913 129 041
Mobile: +34 682 051 091
----------------------------------------

This message is intended exclusively for its addressee. We only send and
receive email on the basis of the terms set out at
http://www.tid.es/ES/PAGINAS/disclaimer.aspx

From adalela@cisco.com  Tue Jan  3 08:15:15 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 694EC21F859D for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 08:15:15 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.323
X-Spam-Level: 
X-Spam-Status: No, score=-2.323 tagged_above=-999 required=5 tests=[AWL=0.276,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id YLVNwt6pG2Bo for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 08:15:14 -0800 (PST)
Received: from bgl-iport-2.cisco.com (bgl-iport-2.cisco.com [72.163.197.26]) by ietfa.amsl.com (Postfix) with ESMTP id 560E121F8592 for <dc@ietf.org>; Tue,  3 Jan 2012 08:15:13 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=6629; q=dns/txt; s=iport; t=1325607313; x=1326816913; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=Mrkm7CjIlEIK3u4FtD8xUtLn4UqH/Ee+SsPchk/ooLo=; b=l0mUoYasC2l0HnWVNZf2H+Wd21XreFkKYPo36JCdpYLWbK36oieqlC0b k6vTWRwscheJIbl/ua+bQNJDDu83QKenKNmZQ+vJQxp86q2HZyqXSxSuK M8Rd81jLUwMeBQWjCJtMb1KMM/C8FjgVYDwZ/izzUTONpJ1Epuqpf90GI k=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AjIPABUoA09Io8UY/2dsb2JhbAA6CoIFq2CBcgEBAQMBAQEBDwEdCjQLBQcEAgEIEQQBAQsGFwEGASYfCQgBAQQLCAgTB4dYCJdBAZ1viFaCVmMEiDWfCQ
X-IronPort-AV: E=Sophos;i="4.71,450,1320624000";  d="scan'208";a="2687978"
Received: from vla196-nat.cisco.com (HELO bgl-core-3.cisco.com) ([72.163.197.24]) by bgl-iport-2.cisco.com with ESMTP; 03 Jan 2012 16:15:11 +0000
Received: from xbh-bgl-411.cisco.com (xbh-bgl-411.cisco.com [72.163.129.201]) by bgl-core-3.cisco.com (8.14.3/8.14.3) with ESMTP id q03GFBRb014046; Tue, 3 Jan 2012 16:15:11 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-411.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Tue, 3 Jan 2012 21:45:11 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Tue, 3 Jan 2012 21:45:09 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com>
In-Reply-To: <4F031689.1050303@raszuk.net>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczKJ4svx6tIhjxTSoCEVFtst/+CNAAB77UA
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us><7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com><618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com><CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com> <CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com> <4F030418.1070202@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com> <4F031689.1050303@raszuk.net>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: <robert@raszuk.net>
X-OriginalArrivalTime: 03 Jan 2012 16:15:11.0551 (UTC) FILETIME=[DE6924F0:01CCCA32]
Cc: Pedro Marques <pedro.r.marques@gmail.com>, dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 16:15:15 -0000

Robert,

Please see inline.

-----Original Message-----
From: Robert Raszuk [mailto:robert@raszuk.net]
Sent: Tuesday, January 03, 2012 8:24 PM
To: Ashish Dalela (adalela)
Cc: Pedro Marques; dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center
interconnect

Ashish,

OK let's just discuss what is in your draft on Hierarchical Addressing.

1. You have 48 bits: 32 go for the host, the remaining 16 for switches. How
do you aggregate at the ToR or aggregation switch boundary ? Are you assuming a
single HOST - SWITCH mapping with max 65K flat MACs ?

[AD] The higher bits identify a switch - it's a switch-id. The hosts are
dynamically assigned a host-id under that switch. Let's assume 23 bits
are for switch-id and 23 bits for host-id. To forward a packet to the
host, you only have to look at the first 23 bits. That's a MAC prefix to
route against.

[AD] You can have 2^23 switches in a network and 2^23 hosts under each
switch.
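The split described above can be sketched in a few lines. The 23/23 bit layout and all names below are illustrative assumptions taken from the email's example, not from the draft itself:

```python
# Sketch of a hierarchical MAC: a 23-bit switch-id prefix plus a 23-bit
# host-id, filling the 46 modifiable bits of a 48-bit MAC address.

SWITCH_BITS = 23
HOST_BITS = 23

def make_hmac(switch_id: int, host_id: int) -> int:
    """Compose a hierarchical MAC value from a switch-id prefix and a host-id."""
    assert switch_id < (1 << SWITCH_BITS) and host_id < (1 << HOST_BITS)
    return (switch_id << HOST_BITS) | host_id

def switch_prefix(hmac: int) -> int:
    """Core/aggregation switches route on the first 23 bits only."""
    return hmac >> HOST_BITS

hmac = make_hmac(switch_id=0x1F3, host_id=0x42)
assert switch_prefix(hmac) == 0x1F3            # the MAC prefix routed against
assert hmac & ((1 << HOST_BITS) - 1) == 0x42   # only the destination switch uses this
```

The point of the example is that any switch in the fabric needs only the prefix lookup; the low bits matter only at the destination.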

2. Can you deploy this on existing VMs and existing switches ?

[AD] What do you mean by this? Any VM can be configured with any MAC.
Any physical host can be configured with any MAC on any logical
interface. From a configuration standpoint this is possible. From a
forwarding standpoint, that's another question.

3. What new protocol you envision to use to distribute those new MACs ?

[AD] IS-IS extensions. It can be TRILL extensions.

4. What is the advantage of using this vs ILNP if we assume that hosts
should be modified ?

[AD] I'm not familiar with the ILNP work, but I'm assuming you are
talking about Loc-Id separation. If not, correct me. If yes, each Loc-Id
binding can be a host route, with mobility. These host-routes are a
scaling problem. Traditional IP packets have IP as the ID and MAC as the
LOC. We are just extending this LOC to make it actually location-aware
rather than a flat address which is fixed regardless of the location.

5. The proposal does not support aggregation .. even the draft says so
:)

"The total number of hardware entries anywhere in the network equals the
total number of switches and remains agnostic of VM mobility."

[AD] For 1 million VMs, at 25 VMs per host, you need 40,000 hosts. With
48-port access switches, you need 833 switches. That's the routing table
size for any switch in the datacenter - core, aggregation, access.
Contrast this with host-routes: if each VM talks to 100 VMs, then each
access switch needs 48 * 25 * 100 = 120,000 host routes. Just because
the network prefix is 23 bits does not mean we have to store 2^23
prefixes. We have to store only as many switches as there are in the
network. The ratio between VM : switch is 1000 : 1 (today, assuming
48-port access and 20 VMs per port). That means instead of storing
host-routes, which grow proportionally to VM growth, we store
switch-ids, which grow at a 1000 times slower rate. As VM density
increases, this growth rate is further slowed down. There are other
techniques to further reduce the rate of growth. But in any case, 1000
times slower is a lot slower.
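The arithmetic in the paragraph above, written out. All inputs are the numbers assumed in the email, not measurements:

```python
# Table-size arithmetic for the 1-million-VM example above.
vms = 1_000_000
vms_per_host = 25
ports_per_access_switch = 48
peers_per_vm = 100

hosts = vms // vms_per_host                  # 40,000 physical hosts
switches = hosts // ports_per_access_switch  # 833 (the email's figure;
                                             # strictly you would round up to 834)

# Host-route alternative: each access switch serves 48 * 25 local VMs,
# each talking to ~100 remote peers.
host_routes_per_access_switch = (
    ports_per_access_switch * vms_per_host * peers_per_vm
)                                            # 120,000 host routes
```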

So if I have 100K switches I cannot do any aggregation and need to
"route" 100K MAC addresses.

[AD] I don't know how you came to that conclusion. Think of an HMAC as
an IP address. Instead of 32 bits it is 46 bits. You route by prefixes
in L3, and you are routing by the same kind of prefixes here. Just as
you aggregate IP, you aggregate MAC the same way. It's no different.

6. Who provides me the mapping between the switch MAC and the host/VM
MAC behind such a switch ? Do switches proxy-ARP globally within your
domain ?

[AD] Variation of the same question. Above should answer it.

Thx,
R.


> Robert,
>
>>> So you are advocating a solution which is based on encapsulation - that
>>> is fine.
>
> No, I'm not. Did you read the draft I had mentioned?
> Hierarchical MAC is not encapsulation. It is one 48 bit address.
>
>>> However how could you ever arrive at the conclusion that HMACs would
>>> scale better than "anything we know". Well I don't know about you, but I
>>> know that the key to scaling is the ability to aggregate. And it is not that
>>> huge a mystery that MACs aggregate rather poorly while there are quite
>>> well deployed protocols (be it IPv4 or IPv6) which aggregate natively
>
> You are hitting the nail on the head. So, read the draft I mentioned.
> Hierarchical MAC is higher bits "network prefix" and lower bits "host
> id".
> That's summarizable like an IP address, and can be aggregated.
> It has 46 bits to modify, so it is larger than the IPv4 internet.
>
> I won't comment on the rest, because you have made an assumption about
> encapsulation.
>
> I refer to this -
> http://tools.ietf.org/html/draft-dalela-dc-approaches-00.
>
> Thanks, Ashish
>
>
> -----Original Message-----
> From: Robert Raszuk [mailto:robert@raszuk.net]
> Sent: Tuesday, January 03, 2012 7:05 PM
> To: Ashish Dalela (adalela)
> Cc: Pedro Marques; dc@ietf.org
> Subject: Re: [dc] [armd] IP over IP solution for data center
> interconnect
>
> Ashish,
>
>> The issues of scale you mentioned don't exist in Hierarchical MACs,
>> which scale better than anything we know of.
>
> So you are advocating a solution which is based on encapsulation - that is
> fine.
>
> However how could you ever arrive at the conclusion that HMACs would
> scale better than "anything we know". Well I don't know about you, but I
> know that the key to scaling is the ability to aggregate. And it is not that
> huge a mystery that MACs aggregate rather poorly while there are quite
> well deployed protocols (be it IPv4 or IPv6) which aggregate natively.
>
> For inter-dc this is IMHO a must. A must even if you build it using
> traditional routers or OF enabled switches - does not matter.
>
>> I don't want to split the requirements into multiple use-cases
>> because then this DC group will be many groups - one doing L2 and
>> another doing L3. That I think you will agree is not optimal for
>> anyone
>
> Why does MAC-in-IP not solve it for everyone ? And there are deployed
> solutions already ..
>
> IMHO what this group should accomplish is not to try to reinvent the
> world, but perhaps, as an example, discuss where the right boundary of
> encapsulation is, how we should communicate between network and hosts, what
> kind of DC instrumentation should be IETF-blessed for easy integration
> (ie the min subset of functionality it should possess etc .... )
>
> R.
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc
>
>


From robert@raszuk.net  Tue Jan  3 08:37:33 2012
Return-Path: <robert@raszuk.net>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 2243521F84F7 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 08:37:33 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.556
X-Spam-Level: 
X-Spam-Status: No, score=-2.556 tagged_above=-999 required=5 tests=[AWL=0.043,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id Nq7ka4l-WIhO for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 08:37:32 -0800 (PST)
Received: from mail1310.opentransfer.com (mail1310.opentransfer.com [76.162.254.103]) by ietfa.amsl.com (Postfix) with ESMTP id 4C49521F84F0 for <dc@ietf.org>; Tue,  3 Jan 2012 08:37:32 -0800 (PST)
Received: (qmail 3408 invoked by uid 399); 3 Jan 2012 16:37:31 -0000
Received: from unknown (HELO ?192.168.1.91?) (83.31.238.24) by mail1310.opentransfer.com with ESMTP; 3 Jan 2012 16:37:31 -0000
X-Originating-IP: 83.31.238.24
Message-ID: <4F032ECA.9030106@raszuk.net>
Date: Tue, 03 Jan 2012 17:37:30 +0100
From: Robert Raszuk <robert@raszuk.net>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:9.0) Gecko/20111222 Thunderbird/9.0.1
MIME-Version: 1.0
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us><7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com><618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com><CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com> <CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com> <4F030418.1070202@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com> <4F031689.1050303@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: Pedro Marques <pedro.r.marques@gmail.com>, dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
Reply-To: robert@raszuk.net
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 16:37:33 -0000

Ashish,

> 1. You have 48 bits 32 go for host remaining 16 goes for switches. How
> do you aggregate at the TOR or AGGR switch boundary ? Are you assuming
> single HOST - SWITCH with max 65K flat macs ?
>
> [AD] The higher bits identify a switch - it's a switch-id. The hosts are
> dynamically assigned a host-id under that switch. Let's assume 23 bits
> are for switch-id and 23 bits for host-id. To forward a packet to the
> host, you only have to look at the first 23 bits. That's a MAC prefix to
> route against.

I don't think we are at the point where the biggest worry is how to 
optimize forwarding onto as few bits as necessary. Quite a bit of real 
hardware can forward at line rate today even with a 128-bit lookup.

> 2. Can you deploy this on existing VMs and existing switches ?
>
> [AD] What do you mean by this? Any VM can be configured with any MAC.
> Any physical host can be configured with any MAC on any logical
> interface. Configuration standpoint this is possible. Forwarding
> standpoint, that's another question.

I mean the forwarding changes and the ARP (IP to new MAC) resolution changes required.

> 4. What is the advantage of using this vs ILNP if we assume that hosts
> should be modified ?
>
> [AD] I'm not familiar with the ILNP work, but I'm assuming you are
> talking about Loc-Id separation. If not, correct me. If yes, each Loc-Id
> binding can be a host route, with mobility. These host-routes are a
> scaling problem.

Can you provide any substantial evidence that DNS/DNSSEC cannot scale 
with host routes ?

> Traditional IP packets have IP as the ID and MAC as the LOC.

Wrong. A traditional IP address is both locator and ID today. Hint: 
locators and IDs must be resolvable globally for the system to work 
globally.

> We
> are just extending this LOC to make it actually location aware rather
> than a flat address which is fixed regardless of where the location is.

That is what Loc/ID split is all about. Perhaps your proposal could be 
discussed in the RRG, or better, in the LISP WG ....

> [AD] For 1 million VM, and 25 VM per host, you need 40,000 hosts. With
> 48 port access switches, you need 833 switches. That's the routing table
> size for any switch in the datacenter - core, aggregation, access.

So if I move a VM from one switch to the other I need to change its MAC 
address. Here I already have an issue ... whether I change just the 
switch portion or the entire address .. how are my TCP sessions going to 
stay up ? I am assuming the IP address (v4 or v6) did not change - correct ?

Could you walk step by step through the sequence of events for VM 
mobility in your solution ?

> Contrast this with host-routes, if each VM talks to 100 VMs, then each
> access switch needs 48 * 25 * 100 = 120,000 host routes. Just because
> the network prefix is 23 bits does not mean we have to store 2^23
> prefixes.

You are persistently stuck thinking flat. Think DNS or any other 
mapping plane. There is zero requirement that any entity should store 
all entries. All it needs to store is sufficient information to know how 
to get information where needed. That is a significant difference.
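The pull-based model Robert describes can be sketched as a toy in-memory stand-in for DNS or any other mapping plane. All class and entry names below are made up for illustration:

```python
# A shared mapping service holds the full table; each node resolves and
# caches only the mappings it actually uses, like a DNS stub resolver.

class MappingService:
    """Common (distributed) service holding all VM-to-location entries."""
    def __init__(self):
        self._table = {}          # the full table lives only here

    def register(self, vm, location):
        self._table[vm] = location

    def resolve(self, vm):
        return self._table[vm]

class HypervisorResolver:
    """A hypervisor caches only the mappings for its active conversations."""
    def __init__(self, service):
        self._service = service
        self._cache = {}

    def lookup(self, vm):
        if vm not in self._cache:
            self._cache[vm] = self._service.resolve(vm)
        return self._cache[vm]

svc = MappingService()
svc.register("vm-a", "switch-17")
hv = HypervisorResolver(svc)
assert hv.lookup("vm-a") == "switch-17"
assert len(hv._cache) == 1        # one cached entry, not the whole table
```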

> We have to store only as many switches as there are in the
> network. Ratio between VM : switch is 1000 : 1 (today, assuming 48 port
> access and 20 VM per port). That means instead of storing host-routes
> which will grow proportional to VM growth, we store switch-id, which
> will grow at 1000 times slower rate. As VM density increases, this
> growth rate is further slowed down. There are other techniques to
> further reduce the rate of growth. But in any case, 1000 times slower is
> a lot slow.

Nope. I am not storing host routes. In fact in the previous email I 
explicitly commented that flooding and storing host routes flat is not 
scalable.

R.

From adalela@cisco.com  Tue Jan  3 08:39:53 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 0541A21F8539 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 08:39:53 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -1.88
X-Spam-Level: 
X-Spam-Status: No, score=-1.88 tagged_above=-999 required=5 tests=[AWL=-0.196,  BAYES_00=-2.599, J_CHICKENPOX_31=0.6, SARE_MILLIONSOF=0.315]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id M2DvFEfmnMHm for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 08:39:52 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id 4D40E21F84FA for <dc@ietf.org>; Tue,  3 Jan 2012 08:39:51 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=6007; q=dns/txt; s=iport; t=1325608791; x=1326818391; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=5MLT8iL6k1p6RYl27VqQruHQ5GBokrfaNXo9QpjlyF0=; b=kj6mY6zyX/C49892aMCtXk4SaoYQUcQeeqdwgFFUz6H78nYZhQkqycKP qQRQQCvcGMsxke+oVj7vHdVcxFZle77lsL2HK4pXwqaHEcJlBJw71jUSZ SuPMVK4Kso1sMnd2XNvdzBkZa4s1Han0eZY8QxKM8xfLSy04Re3pwPuWm I=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AjIPAN0uA09Io8UY/2dsb2JhbABEggWrYIFyAQEBAwEBAQEPAR0KMwELBQcEAgEIEQQBAQsGFwEGASYfCQgBAQQLCAgah1gIl0EBnW0EiyxjBIg1nwk
X-IronPort-AV: E=Sophos;i="4.71,450,1320624000";  d="scan'208";a="2693920"
Received: from vla196-nat.cisco.com (HELO bgl-core-4.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 03 Jan 2012 16:39:49 +0000
Received: from xbh-bgl-411.cisco.com (xbh-bgl-411.cisco.com [72.163.129.201]) by bgl-core-4.cisco.com (8.14.3/8.14.3) with ESMTP id q03GdnxW025369; Tue, 3 Jan 2012 16:39:49 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-411.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Tue, 3 Jan 2012 22:09:49 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
x-cr-hashedpuzzle: Caft GEdk JhMU LRe6 MLeg MNLw Mr3F Qu99 TzKX bfhb ou8W pq98 qdx3 v+pW wPxL yTnm; 4; YQBsAGQAcgBpAG4ALgBpAHMAYQBhAGMAQABnAG0AYQBpAGwALgBjAG8AbQA7AGQAYwBAAGkAZQB0AGYALgBvAHIAZwA7AHIAbwBiAGUAcgB0AEAAcgBhAHMAegB1AGsALgBuAGUAdAA7AHkAYQBrAG8AdgBAAGoAdQBuAGkAcABlAHIALgBuAGUAdAA=; Sosha1_v1; 7; {AFA117C3-5254-4088-96F7-C763B0A1DC7C}; YQBkAGEAbABlAGwAYQBAAGMAaQBzAGMAbwAuAGMAbwBtAA==; Tue, 03 Jan 2012 16:39:17 GMT; UgBFADoAIABbAGQAYwBdACAAbgBlAHcAIABkAHIAYQBmAHQAcwA=
x-cr-puzzleid: {AFA117C3-5254-4088-96F7-C763B0A1DC7C}
Content-class: urn:content-classes:message
Date: Tue, 3 Jan 2012 22:09:17 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B256AC@XMB-BGL-416.cisco.com>
In-Reply-To: <4F032492.4030201@raszuk.net>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] new drafts
Thread-Index: AczKL+soQRCPiUDfTLiOU9/7ylh43gAA8hvA
References: <618BE8B40039924EB9AED233D4A09C5102B2527A@XMB-BGL-416.cisco.com><D96F76EF-0011-4F33-A1CF-EC9AD12BA411@gmail.com><618BE8B40039924EB9AED233D4A09C5102B25310@XMB-BGL-416.cisco.com> <201201031510.q03FABS62810@magenta.juniper.net> <618BE8B40039924EB9AED233D4A09C5102B2569C@XMB-BGL-416.cisco.com> <4F032492.4030201@raszuk.net>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: <robert@raszuk.net>
X-OriginalArrivalTime: 03 Jan 2012 16:39:49.0273 (UTC) FILETIME=[4F33A090:01CCCA36]
Cc: Yakov Rekhter <yakov@juniper.net>, dc@ietf.org, Aldrin Isaac <aldrin.isaac@gmail.com>
Subject: Re: [dc] new drafts
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 16:39:53 -0000

Robert,

Here are some things to evaluate scalability against.

Assume a simple case where under a switch there are 250 VMs, split amongst
10 customers. Each customer has a unique VRF. Normally, we would have
advertised a /24 route for that switch. In this case your routes to a
single switch are segmented across the 10 VRFs, and you will very
likely have 250 route table entries in total, segmented by VRF-id. That's
routing table bloat from 1 entry to 250 entries. This happens everywhere.
I have assumed public IP addressing, but the same thing will happen
with private addressing as well.
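The bloat arithmetic above in a few lines, with the numbers straight from the email:

```python
# Route-table bloat from per-customer VRF segmentation.
vms_under_switch = 250
customer_vrfs = 10   # one VRF per customer

entries_without_vrfs = 1              # a single /24 aggregate covers the switch
entries_with_vrfs = vms_under_switch  # the /24 fragments across 10 VRFs, so in
                                      # the worst case ~1 host route per VM
bloat_factor = entries_with_vrfs // entries_without_vrfs   # 250x, at every switch
```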

Then, the number of VRFs you can typically support on a router is about
4K. This number of VRFs has to be supported at the access, so you have to
assume this is the limit from the access viewpoint. 4K is nothing - we
have 4K VLANs to segment with today, and that's nothing. Every
segmentation technique being talked about speaks of a million-plus
segments. Take that to VRFs and you need a million VRFs in the control
plane at the access switch. Another problem with a VRF is that it will
receive and store a route for a host even when no local host is talking
to it. With dynamic learning, or learning based on packet arrival, you
avoid these host routes and limit them to active conversations only.
That's a huge saving, because not every host talks to every host.

Then, at massive scale, the failure rates are also massive. At 5-nines
reliability, one hardware entity out of 100,000 will be failing every
5.25 minutes. Access switches don't have high availability. Software
fails even faster - an OS is generally 4 nines, which means one out of
10,000 fails every 5.25 minutes. With millions of instances of such
entities, failures happen rapidly. You only have to look at the massive
datacenters run by Web 2.0 companies today, and they all echo this view.
They basically form clusters of the same application. Software moves the
workload from one cluster to another. The whole cluster can fail over.
That's not what you do in a consumer cloud, where you have to recover.
At massive failure rates and rapid recovery rates, you are moving
things around and injecting host routes for reachability. It's a
convergence problem, especially with link-state algorithms.
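A rough check of the five-nines arithmetic above. Restating the claim as expected downtime per unit per year is my reading of it, not the email's wording:

```python
# N nines of availability -> expected downtime per unit per year.
MIN_PER_YEAR = 365.25 * 24 * 60

def downtime_min_per_year(nines: int) -> float:
    unavailability = 10 ** -nines
    return unavailability * MIN_PER_YEAR

# Five nines: ~5.26 minutes of downtime per unit per year. Across 100,000
# such units, some unit is therefore down for roughly that long at any
# given time, which is the email's point. Four nines is ten times worse.
print(round(downtime_min_per_year(5), 2))   # ~5.26
print(round(downtime_min_per_year(4), 1))   # ~52.6
```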

If the VM can be moved, then all you need to do is install a temporary
redirect of packets to the new location. Each host will refresh the MAC
after 15-30 seconds. If the packets are redirected from the old to the
new location for these 30 seconds, the redirect can be aged out
automatically. This happens all the time in mobile networks, in what is
called a "fast handoff", where you redirect the packets until the
handoff is completed.
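The aged-redirect idea above can be sketched as a small table at the old location. The 30-second TTL comes from the email's 15-30 second refresh window; the class and method names are illustrative:

```python
# A short-lived redirect entry covers the window while peers refresh
# their cached MAC; after that the entry ages out on its own.
import time

class RedirectTable:
    TTL = 30.0                    # seconds, per the email's refresh window

    def __init__(self):
        self._redirects = {}      # vm -> (new_location, expiry_time)

    def install(self, vm, new_location, now=None):
        now = time.monotonic() if now is None else now
        self._redirects[vm] = (new_location, now + self.TTL)

    def forward(self, vm, now=None):
        """Return the redirect target, or None once the entry has aged out."""
        now = time.monotonic() if now is None else now
        entry = self._redirects.get(vm)
        if entry is None or now > entry[1]:
            self._redirects.pop(vm, None)   # aged out automatically
            return None
        return entry[0]

t = RedirectTable()
t.install("vm-a", "switch-99", now=0.0)
assert t.forward("vm-a", now=10.0) == "switch-99"   # within the window
assert t.forward("vm-a", now=31.0) is None          # redirect aged out
```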

Thanks, Ashish


-----Original Message-----
From: Robert Raszuk [mailto:robert@raszuk.net]
Sent: Tuesday, January 03, 2012 9:24 PM
To: Ashish Dalela (adalela)
Cc: Yakov Rekhter; Aldrin Isaac; dc@ietf.org
Subject: Re: [dc] new drafts

Ashish,

VM mobility based on re-propagating host routes even within your own
domain looks broken to me .. VM mobility timing requirements are in
milliseconds or 10s of ms .. routing distribution or even FIB update
with host routes (assuming that we could propagate host routes ahead of
migration and then just trigger the activation) would be in 100s of ms
or seconds depending on the scope. I agree with your assertion that this
is not scalable.

However this has nothing to do with stating that VRFs are "just not
scalable". VRF instantiation or VRF scaling is a very opaque topic. In
particular, one could claim that VRFs in the control plane are among the
most scalable tools we have at our disposal today.

Best,
R.

> Yakov,
>
>>> Would you care to provide detailed technical analysis to support your
>>> claim about VRFs "just not scalable",
>
> VM mobility requires insertion of host-routes. As VMs move and/or are
> created/deleted, these host routes need to be propagated everywhere.
> That's a control plane scaling problem, plus a problem about
> convergence.
>
> Thanks, Ashish
>
>
> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> Yakov Rekhter
> Sent: Tuesday, January 03, 2012 8:40 PM
> To: Ashish Dalela (adalela)
> Cc: dc@ietf.org; Aldrin Isaac
> Subject: Re: [dc] new drafts
>
> Ashish,
>
>> Hi Aldrin,
>>
>>>> How would you address gratuitous ARP when using the hierarchical MAC
>>>> addressing with a registry to store MAC-IP bindings?
>>
>> We should ideally have a protocol that maps ARP messages into the
>> registry and vice versa. Why a new protocol? Greater security and
>> reliability, plus avoiding ARP broadcasts. Gratuitous ARP doesn't have an
>> acknowledgement, so if you want to perform some actions based on it, you
>> need to get an "acknowledge" for it. Gratuitous ARP can get dropped.
>>
>> The other issue is that a user can misuse it to do MAC hijacking. But if
>> you have a hierarchical MAC you can't hijack, because the network knows your
>> location. Things like dot1x, which solved these issues in the campus
>> space, are not used in the datacenter.
>>
>>>> Also, when using MAC prefixes in complex L2VPN topologies how could
>>>> we address the risk of a hub forwarding into the wrong spoke context on
>>>> the leaf switch?  Each egress port on the leaf switch may be in a
>>>> different context (ex: EVI in EVPN).
>>
>> We should avoid building any kind of VRF at the control plane, because
>> that's just not scalable.
>
> Would you care to provide detailed technical analysis to support your
> claim about VRFs "just not scalable", or should this claim be treated
> as proof by emphatic assertion ?
>
> Yakov.
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc
>
>


From david.black@emc.com  Tue Jan  3 08:41:58 2012
Return-Path: <david.black@emc.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 4E84021F8504 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 08:41:58 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.588
X-Spam-Level: 
X-Spam-Status: No, score=-106.588 tagged_above=-999 required=5 tests=[AWL=0.011, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id hX60vYWqMAub for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 08:41:57 -0800 (PST)
Received: from mexforward.lss.emc.com (mexforward.lss.emc.com [128.222.32.20]) by ietfa.amsl.com (Postfix) with ESMTP id 1A69121F84FB for <dc@ietf.org>; Tue,  3 Jan 2012 08:41:56 -0800 (PST)
Received: from hop04-l1d11-si01.isus.emc.com (HOP04-L1D11-SI01.isus.emc.com [10.254.111.54]) by mexforward.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q03Gfrox018829 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Tue, 3 Jan 2012 11:41:55 -0500
Received: from mailhub.lss.emc.com (mailhub.lss.emc.com [10.254.222.130]) by hop04-l1d11-si01.isus.emc.com (RSA Interceptor); Tue, 3 Jan 2012 11:41:39 -0500
Received: from mxhub11.corp.emc.com (mxhub11.corp.emc.com [10.254.92.106]) by mailhub.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q03Gfcc9014264; Tue, 3 Jan 2012 11:41:39 -0500
Received: from mx14a.corp.emc.com ([169.254.1.216]) by mxhub11.corp.emc.com ([10.254.92.106]) with mapi; Tue, 3 Jan 2012 11:41:38 -0500
From: <david.black@emc.com>
To: <adalela@cisco.com>
Date: Tue, 3 Jan 2012 11:41:36 -0500
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczKJ4svx6tIhjxTSoCEVFtst/+CNAAB77UAAAHJgoA=
Message-ID: <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBA5@MX14A.corp.emc.com>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us><7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com><618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com><CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com> <CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com> <4F030418.1070202@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com> <4F031689.1050303@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-EMM-MHVC: 1
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 16:41:58 -0000

Ashish,

> > [AD] The higher bits identify a switch - it's a switch-id.

That breaks VM migration across switches by forcing a MAC change.

Thanks,
--David

> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Ashish Dalela (adalela)
> Sent: Tuesday, January 03, 2012 11:15 AM
> To: robert@raszuk.net
> Cc: Pedro Marques; dc@ietf.org
> Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
>
> Robert,
>
> Please see inline.
>
> -----Original Message-----
> From: Robert Raszuk [mailto:robert@raszuk.net]
> Sent: Tuesday, January 03, 2012 8:24 PM
> To: Ashish Dalela (adalela)
> Cc: Pedro Marques; dc@ietf.org
> Subject: Re: [dc] [armd] IP over IP solution for data center
> interconnect
>
> Ashish,
>
> OK let's just discuss what is in your draft on Hierarchical Addressing.
>
> 1. You have 48 bits 32 go for host remaining 16 goes for switches. How
> do you aggregate at the TOR or AGGR switch boundary ? Are you assuming
> single HOST - SWITCH with max 65K flat macs ?
>
> [AD] The higher bits identify a switch - it's a switch-id. The hosts are
> dynamically assigned a host-id under that switch. Let's assume 23 bits
> are for switch-id and 23 bits for host-id. To forward a packet to the
> host, you only have to look at the first 23 bits. That's a MAC prefix to
> route against.
>
> [AD] You can have 2^23 switches in a network and 2^23 hosts under each
> switch.
>
> 2. Can you deploy this on existing VMs and existing switches ?
>
> [AD] What do you mean by this? Any VM can be configured with any MAC.
> Any physical host can be configured with any MAC on any logical
> interface. Configuration standpoint this is possible. Forwarding
> standpoint, that's another question.
>
> 3. What new protocol you envision to use to distribute those new MACs ?
>
> [AD] IS-IS extensions. It can be TRILL extensions.
>
> 4. What is the advantage of using this vs ILNP if we assume that hosts
> should be modified ?
>
> [AD] I'm not familiar with the ILNP work, but I'm assuming you are
> talking about Loc-Id separation. If not, correct me. If yes, each Loc-Id
> binding can be a host route, with mobility. These host-routes are a
> scaling problem. Traditional IP packet have IP as ID and MAC as LOC. We
> are just extending this LOC to make it actually location aware rather
> than a flat address which is fixed regardless of where the location is.
>
> 5. The proposal does not support aggregation .. even the draft says it
> :)
>
> "The total number of hardware entries anywhere in the network equals the
> total number of switches and remains agnostic of VM mobility."
>
> [AD] For 1 million VM, and 25 VM per host, you need 40,000 hosts. With
> 48 port access switches, you need 833 switches. That's the routing table
> size for any switch in the datacenter - core, aggregation, access.
> Contrast this with host-routes, if each VM talks to 100 VMs, then each
> access switch needs 48 * 25 * 100 = 120,000 host routes. Just because
> the network prefix is 23 bits does not mean we have to store 10^23
> prefixes. We have to store only as many switches as there are in the
> network. Ratio between VM : switch is 1000 : 1 (today, assuming 48 port
> access and 20 VM per port). That means instead of storing host-routes
> which will grow proportional to VM growth, we store switch-id, which
> will grow at 1000 times slower rate. As VM density increases, this
> growth rate is further slowed down. There are other techniques to
> further reduce the rate of growth. But in any case, 1000 times slower is
> a lot slow.
>
> So if I have 100K switches I can not do any aggregation and need to
> "route" 100K MAC addresses.
>
> [AD] I don't know how you came to that conclusion. Think of HMAC as an
> IP address. Instead of 32 bits it is 46 bits. You route by prefixes in
> L3, and you are routing by the same prefixes here. Just as you aggregate
> IP, same way you aggregate MAC. It's not different.
>
> 6. Who provides me the mapping between switch mac and host/vm mac behind
> such switch ? Do switches proxy arp globally within your domain ?
>
> [AD] Variation of the same question. Above should answer it.
>
> Thx,
> R.
>
>
> > Robert,
> >
> >>> So you are advocating solution which is based on encapsulation - that
> >>> is fine.
> >
> > No, I'm not. Did you read the draft I had mentioned?
> > Hierarchical MAC is not encapsulation. It is one 48 bit address.
> >
> >>> However how could you ever arrive at the conclusion that HMACs would
> >>> scale better then "anything we know". Well I don't know about you, but I
> >>> know that the key to scaling is ability to aggregate. And it is not that
> >>> huge mystery that MACs aggregate rather poorly while there are quite
> >>> well deployed protocols (be it IPv4 or IPv6) which aggregate natively
> >
> > You are hitting the issue on the nail. So, read the draft I mentioned.
> > Hierarchical MAC is higher bits "network prefix" and lower bits "host
> > id".
> > That's summarizable like IP address and aggregated.
> > It has 46 bits to modify so larger than IPv4 internet.
> >
> > I won't comment on the rest, because you have made an assumption about
> > encapsulation.
> >
> > I refer to this -
> > http://tools.ietf.org/html/draft-dalela-dc-approaches-00.
> >
> > Thanks, Ashish
> >
> >
> > -----Original Message-----
> > From: Robert Raszuk [mailto:robert@raszuk.net]
> > Sent: Tuesday, January 03, 2012 7:05 PM
> > To: Ashish Dalela (adalela)
> > Cc: Pedro Marques; dc@ietf.org
> > Subject: Re: [dc] [armd] IP over IP solution for data center
> > interconnect
> >
> > Ashish,
> >
> >> The issues of scale you mentioned don't exist in Hierarchical MACs,
> >> which scale better than anything we know of.
> >
> > So you are advocating solution which is based on encapsulation - that
> > is fine.
> >
> > However how could you ever arrive at the conclusion that HMACs would
> > scale better then "anything we know". Well I don't know about you, but I
> > know that the key to scaling is ability to aggregate. And it is not that
> > huge mystery that MACs aggregate rather poorly while there are quite
> > well deployed protocols (be it IPv4 or IPv6) which aggregate natively.
> >
> > For inter-dc this is IMHO a must. A must even if you build it using
> > traditional routers or OF enabled switches - does not matter.
> >
> >> I don't want to split the requirements into multiple use-cases
> >> because then this DC group will be many groups - one doing L2 and
> >> another doing L3. That I think you will agree is not optimal for
> >> anyone
> >
> > Why MAC-in-IP does not solve it for everyone ? And there are deployed
> > solutions already ..
> >
> > IMHO what this group should accomplish is not to try to reinvent the
> > world, but perhaps as example discuss where is the right boundary of
> > encapsulation, how should we communicate between network and hosts, what
> > kind of DC instrumentation should be IETF blessed for easy integration
> > (ie min subset of functionality it should possess etc .... )
> >
> > R.
> >
> > _______________________________________________
> > dc mailing list
> > dc@ietf.org
> > https://www.ietf.org/mailman/listinfo/dc
> >
> >
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc


From robert@raszuk.net  Tue Jan  3 08:56:01 2012
Return-Path: <robert@raszuk.net>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 2EC815E800B for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 08:56:01 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.404
X-Spam-Level: 
X-Spam-Status: No, score=-2.404 tagged_above=-999 required=5 tests=[AWL=-0.120, BAYES_00=-2.599, SARE_MILLIONSOF=0.315]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 41Nr5StLl4pm for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 08:56:00 -0800 (PST)
Received: from mail1310.opentransfer.com (mail1310.opentransfer.com [76.162.254.103]) by ietfa.amsl.com (Postfix) with ESMTP id 70B315E800A for <dc@ietf.org>; Tue,  3 Jan 2012 08:56:00 -0800 (PST)
Received: (qmail 18548 invoked by uid 399); 3 Jan 2012 16:55:59 -0000
Received: from unknown (HELO ?192.168.1.91?) (83.31.238.24) by mail1310.opentransfer.com with ESMTP; 3 Jan 2012 16:55:59 -0000
X-Originating-IP: 83.31.238.24
Message-ID: <4F03331E.9020104@raszuk.net>
Date: Tue, 03 Jan 2012 17:55:58 +0100
From: Robert Raszuk <robert@raszuk.net>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:9.0) Gecko/20111222 Thunderbird/9.0.1
MIME-Version: 1.0
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
References: <618BE8B40039924EB9AED233D4A09C5102B2527A@XMB-BGL-416.cisco.com><D96F76EF-0011-4F33-A1CF-EC9AD12BA411@gmail.com><618BE8B40039924EB9AED233D4A09C5102B25310@XMB-BGL-416.cisco.com> <201201031510.q03FABS62810@magenta.juniper.net> <618BE8B40039924EB9AED233D4A09C5102B2569C@XMB-BGL-416.cisco.com> <4F032492.4030201@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B256AC@XMB-BGL-416.cisco.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B256AC@XMB-BGL-416.cisco.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: Yakov Rekhter <yakov@juniper.net>, Aldrin Isaac <aldrin.isaac@gmail.com>, dc@ietf.org
Subject: Re: [dc] new drafts
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
Reply-To: robert@raszuk.net
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 16:56:01 -0000

Ashish,

> Assume a simple case that under a switch there are 250 VM, split amongst
> 10 customers. Each customer has a unique VRF. Normally, we would have
> advertised a /24 route for that switch. In this case your routes to a
> single switch are segmented and there are 10 VRFs, and you will very
> likely have 250 route table entries total segmented by VRF-ids. That's a
> routing table bloat from 1 entry to 250 entries. This happens everywhere.
> I have assumed a public IP addressing, but the same thing will happen
> for the private addressing as well.

Normally in this case I would have 10 routes + 1 route for the switch 
loopback, not 250. However, when those VMs start moving between PEs, 
you are right that in the worst case one could end up with 250 
non-aggregatable routes per VRF.

First, I don't think this is a problem scaling-wise today, as I would 
not assume that everyone will be moving.

Second, we know today how to handle millions of routes in BGP.

Third, I am not saying that this model should be used.

I am advocating that a hierarchical IP-in-IP model should be used. VRFs 
on those PEs could be used for isolation purposes. And any VM move needs 
to be reflected only in the mapping plane, not in the routing 
infrastructure of the network.
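The map-and-encap model described in this message can be sketched in a few lines; the names below (the `mapping` dict, the string-tuple "packet") are purely illustrative and not part of any draft:

```python
# Sketch of map-and-encap: a VM move updates only the mapping plane,
# not the routing tables of the core. Illustrative names only.

mapping = {"vm-a": "pe1", "vm-b": "pe2"}   # inner address -> PE locator

def encap(dst_vm: str):
    """Resolve the VM to its current PE and wrap the packet (IP in IP)."""
    pe = mapping[dst_vm]
    return ("outer:" + pe, "inner:" + dst_vm)

assert encap("vm-a") == ("outer:pe1", "inner:vm-a")

# VM move: touch the mapping entry only; no route churn in the core.
mapping["vm-a"] = "pe3"
assert encap("vm-a") == ("outer:pe3", "inner:vm-a")
```

The point of the sketch is that the move touches one mapping entry, while every core router keeps forwarding on stable PE locators.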


> Then, typically the number of VRFs you can support on a router is about
> 4K. These # of VRFs have to be supported at the access, so you have to
> assume this is the limit from the access viewpoint.

Nope ... control-plane VRFs have no inherent limits; the 4K figure comes 
from platform limitations.

Hint: Think about control plane and data plane separation. Pedro's draft 
already provides an example on how such separation can be accomplished.

> Then, at massive scale, the failure rates are also massive. At 5 nines
> reliability, a hardware entity out of 100,000 will fail every 5.25
> minutes. Access switches don't have high availability. Software fails
> even faster - OS is generally 4 9's, which means one out of 10,000 fails
> every 5.25 minutes. At millions of instances of such entities, there are
> rapid failures happening. You have to only look at massive datacenters
> today run by Web 2.0 companies, and they all echo this view. They
> basically form clusters of the same application. Software moves the
> workload from one cluster to another. The whole cluster can fail over.
> That's not what you do in a consumer cloud, where you have to recover.
> At massive failure rates, and rapid recovery rates, you are moving
> things around and injecting host routes for reachability. It's a
> convergence problem, especially with link-state algorithms.

Not applicable to what I am advocating.


> If the VM can be moved, then all you need to do is install a temporary
> redirect of packets to the new location.

What is this redirection ? Where do you install it ? In all switching 
elements of the network ? Redirection works when you encapsulate. 
Without any encapsulation how do you redirect by just touching a single 
network element ?

> Each host will refresh the MAC
> after 15-30 seconds. If the packets are redirected from old to new
> location for these 30 seconds, the redirect can be aged automatically.
> This happens all the time in mobile networks in what is called a "fast
> handoff" where you redirect the packets until handoff is completed.

Hmmmm interesting. We even have a draft which can be used for such 
redirection today ... draft-rekhter-l3vpn-virtual-hub

Cheers,
R.

From adalela@cisco.com  Tue Jan  3 09:02:04 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 46E1811E8072 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 09:02:04 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.328
X-Spam-Level: 
X-Spam-Status: No, score=-2.328 tagged_above=-999 required=5 tests=[AWL=0.271,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id YJaq8kdjFh2Y for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 09:02:03 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id 4541E11E8086 for <dc@ietf.org>; Tue,  3 Jan 2012 09:02:02 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=7015; q=dns/txt; s=iport; t=1325610122; x=1326819722; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=nbaIjBfKTNbRK72KznER9fQ/Nz1MGOacRfwmgva6wI4=; b=d43ybRYJdFS/NGetqk/6awDcnlTTgN0RRhU8ssb2juSBBmS8RrKZqWK5 cHofJ1hWBhMgPPJz3zZahMACLzv8KPutdNVKz6G2nd2gUDm0a2d6rLc6w P9/8ufDpZjlxFrbhXOJnPMPWhNXbVLLzqzex5u9Qpj/OksLLVazC1//5C Q=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AjEPAMczA09Io8UY/2dsb2JhbAA5CoIFq2CBcgEBAQMBEgEdCj8FBwQCAQgRBAEBCwYXAQYBRQkIAQEECwgIEweHWJdPAZ1ziFaCVmMEiDWfCQ
X-IronPort-AV: E=Sophos;i="4.71,450,1320624000";  d="scan'208";a="2694490"
Received: from vla196-nat.cisco.com (HELO bgl-core-4.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 03 Jan 2012 17:02:00 +0000
Received: from xbh-bgl-411.cisco.com (xbh-bgl-411.cisco.com [72.163.129.201]) by bgl-core-4.cisco.com (8.14.3/8.14.3) with ESMTP id q03H20Qi028181; Tue, 3 Jan 2012 17:02:00 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-411.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Tue, 3 Jan 2012 22:32:00 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Tue, 3 Jan 2012 22:31:58 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B256B6@XMB-BGL-416.cisco.com>
In-Reply-To: <4F032ECA.9030106@raszuk.net>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczKNgDnTfy7/5tbQmGOlxt9zlAqtQAALayQ
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us><7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com><618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com><CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com> <CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com> <4F030418.1070202@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com> <4F031689.1050303@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com> <4F032ECA.9030106@raszuk.net>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: <robert@raszuk.net>
X-OriginalArrivalTime: 03 Jan 2012 17:02:00.0415 (UTC) FILETIME=[689FC6F0:01CCCA39]
Cc: Pedro Marques <pedro.r.marques@gmail.com>, dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 17:02:04 -0000

Robert,

>> I don't think we are at the point where the biggest worry is how to
>> optimize forwarding to be on as little bits as necessary. I think quite
>> a few real hardware can forward today at line rate even by 128 bit
>> lookup.

What's the point of saying this? I'm just describing what a HMAC is. It
will be at most 48 bits.

>> I mean forwarding and ARP IP to new MAC resolution changes required.

NO changes to ARP. Changes to forwarding yes. Instead of looking up 48
bits of a MAC, we look up fewer bits. CAM vs. TCAM.
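As an illustration of the lookup being described, here is a minimal sketch of forwarding on a hierarchical MAC prefix. The 23-bit switch-id / 23-bit host-id split is the example used earlier in this thread, not a fixed part of any spec, and real switches would do this in CAM/TCAM hardware rather than Python:

```python
# Sketch: prefix lookup on a hierarchical MAC (HMAC).
# Assumes the 23-bit switch-id / 23-bit host-id split discussed
# in this thread (46 usable bits of locally administered space).

SWITCH_BITS = 23
HOST_BITS = 23

def split_hmac(mac46: int):
    """Split a 46-bit hierarchical MAC into (switch_id, host_id)."""
    return mac46 >> HOST_BITS, mac46 & ((1 << HOST_BITS) - 1)

def forward(mac46: int, switch_table: dict):
    """Forward on the switch-id prefix only: one table entry per switch,
    regardless of how many hosts sit under it."""
    switch_id, _host_id = split_hmac(mac46)
    return switch_table.get(switch_id)  # next-hop port, or None

# Usage: many hosts, but the table holds only one entry per switch.
table = {0x000001: "port1", 0x000002: "port2"}
mac = (0x000002 << HOST_BITS) | 0x00002A   # host 0x2A under switch 2
assert forward(mac, table) == "port2"
```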

>> Can you provide any substantial evidence that DNS/DNSSec can not scale
>> with host routes ?

What does this have to do with DNS? You have an ID and you need a LOC, 
so you need a binding. That binding is a host-route. I'm talking about 
hardware entries.

>> Wrong. Traditional IP address is both locator and ID today. Hint:
>> Locators and IDs must be able to be resolved globally for system to work
>> globally.

You are probably a routing guy :-) Think of switching. You unplug a host
from one port and plug into another. The switch learns the new location
based on the MAC. There is no separate locator assigned in switching.
MAC is the locator.

>> So if I move a VM from one switch to the other I need to change its MAC
>> address. Here I already have an issue ... if I just change the switch
>> portion or the entire address .. how are my TCP sessions going to stay
>> up ? I am assuming the IP address, v4 or v6, did not change - correct ?

Today on VM mobility the MAC can change (it is implementation dependent).
The host sends a gratuitous ARP announcing its new IP-MAC binding. The IP
never changes.

>> Could you walk step by step through the sequence of events for VM
>> mobility in your solution ?

1. VM is tied to switch S1, and has a host id H1. Its MAC is S1:H1
2. VM is moved to switch S2, and gets a host id H2. Its MAC is S2:H2.
3. VM sends a Grat ARP for S2:H2.
4. Network installs a temporary redirect at S1 to redirect all packets
sent to S1:H1 to S2:H2.
5. In 15-30 seconds every host ARPs for the moved VM, and gets S2:H2.
6. Every host starts sending packet to S2:H2 in 15-30 seconds.
7. Temporary redirect installed at step 4 ages in 60 seconds.
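The redirect-and-age sequence in steps 1-7 can be modeled in a few lines. The `Switch` class below is a hypothetical in-memory sketch of the described behavior, not an implementation; the 60-second redirect lifetime is the figure from step 7:

```python
import time

class Switch:
    """Sketch of the old switch (S1) during a VM move (steps 4-7)."""

    def __init__(self, name):
        self.name = name
        self.redirects = {}  # old MAC -> (new MAC, expiry time)

    def install_redirect(self, old_mac, new_mac, ttl=60, now=None):
        # Step 4: temporarily redirect traffic for the old MAC.
        now = time.time() if now is None else now
        self.redirects[old_mac] = (new_mac, now + ttl)

    def deliver(self, dst_mac, now=None):
        """Return the MAC a packet is actually delivered to."""
        now = time.time() if now is None else now
        entry = self.redirects.get(dst_mac)
        if entry and now < entry[1]:
            return entry[0]                 # redirected to new location
        self.redirects.pop(dst_mac, None)   # step 7: redirect ages out
        return dst_mac

s1 = Switch("S1")
s1.install_redirect("S1:H1", "S2:H2", ttl=60, now=0)
assert s1.deliver("S1:H1", now=10) == "S2:H2"   # within handoff window
assert s1.deliver("S1:H1", now=61) == "S1:H1"   # redirect aged out
```

By step 6 the senders have re-ARPed and address S2:H2 directly, so the redirect at S1 only has to carry traffic for the 15-30 second refresh window before aging out.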

>> You are persistently stuck with thinking flat. Think DNS or any other
>> mapping plane. There is zero requirement that any entity should store
>> all entries. All it needs to store is sufficient information to know how
>> to get information where needed. That is a significant difference.

DNS is control plane. I'm talking about host-route entries installed in
hardware. This is used for forwarding.

We've been through this discussion any number of times in the past. Look
at any map-and-encap scheme for mobility and it needs to store host
routes. You might want to look at the older emails.

Thanks, Ashish


-----Original Message-----
From: Robert Raszuk [mailto:robert@raszuk.net]
Sent: Tuesday, January 03, 2012 10:08 PM
To: Ashish Dalela (adalela)
Cc: Pedro Marques; dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center
interconnect

Ashish,

> 1. You have 48 bits 32 go for host remaining 16 goes for switches. How
> do you aggregate at the TOR or AGGR switch boundary ? Are you assuming
> single HOST - SWITCH with max 65K flat macs ?
>
> [AD] The higher bits identify a switch - it's a switch-id. The hosts are
> dynamically assigned a host-id under that switch. Let's assume 23 bits
> are for switch-id and 23 bits for host-id. To forward a packet to the
> host, you only have to look at the first 23 bits. That's a MAC prefix to
> route against.

I don't think we are at the point where the biggest worry is how to 
optimize forwarding to be on as little bits as necessary. I think quite 
a few real hardware can forward today at line rate even by 128 bit
lookup.

> 2. Can you deploy this on existing VMs and existing switches ?
>
> [AD] What do you mean by this? Any VM can be configured with any MAC.
> Any physical host can be configured with any MAC on any logical
> interface. Configuration standpoint this is possible. Forwarding
> standpoint, that's another question.

I mean forwarding and ARP IP to new MAC resolution changes required.

> 4. What is the advantage of using this vs ILNP if we assume that hosts
> should be modified ?
>
> [AD] I'm not familiar with the ILNP work, but I'm assuming you are
> talking about Loc-Id separation. If not, correct me. If yes, each Loc-Id
> binding can be a host route, with mobility. These host-routes are a
> scaling problem.

Can you provide any substantial evidence that DNS/DNSSec can not scale 
with host routes ?

> Traditional IP packet have IP as ID and MAC as LOC.

Wrong. Traditional IP address is both locator and ID today. Hint: 
Locators and IDs must be able to be resolved globally for system to work
globally.

> We
> are just extending this LOC to make it actually location aware rather
> than a flat address which is fixed regardless of where the location is.

That is what Loc/ID split is all about. Perhaps your proposal could be 
discussed in RRG or better in LISP WGs ....

> [AD] For 1 million VM, and 25 VM per host, you need 40,000 hosts. With
> 48 port access switches, you need 833 switches. That's the routing table
> size for any switch in the datacenter - core, aggregation, access.

So if I move a VM from one switch to the other I need to change its MAC
address. Here I already have an issue ... if I just change the switch 
portion or the entire address .. how are my TCP sessions going to stay 
up ? I am assuming the IP address, v4 or v6, did not change - correct ?

Could you walk step by step through the sequence of events for VM 
mobility in your solution ?

> Contrast this with host-routes, if each VM talks to 100 VMs, then each
> access switch needs 48 * 25 * 100 = 120,000 host routes. Just because
> the network prefix is 23 bits does not mean we have to store 10^23
> prefixes.

You are persistently stuck with thinking flat. Think DNS or any other 
mapping plane. There is zero requirement that any entity should store 
all entries. All it needs to store is sufficient information to know how
to get information where needed. That is a significant difference.

> We have to store only as many switches as there are in the
> network. Ratio between VM : switch is 1000 : 1 (today, assuming 48 port
> access and 20 VM per port). That means instead of storing host-routes
> which will grow proportional to VM growth, we store switch-id, which
> will grow at 1000 times slower rate. As VM density increases, this
> growth rate is further slowed down. There are other techniques to
> further reduce the rate of growth. But in any case, 1000 times slower is
> a lot slow.

Nope. I am not storing host routes. In fact in the previous email I 
explicitly commented that flooding and storing host routes flat is not 
scalable.

R.

From david.black@emc.com  Tue Jan  3 09:03:01 2012
Return-Path: <david.black@emc.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 8954A21F85AF for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 09:03:01 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.588
X-Spam-Level: 
X-Spam-Status: No, score=-106.588 tagged_above=-999 required=5 tests=[AWL=0.011, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id tjFlubsM0lp5 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 09:03:00 -0800 (PST)
Received: from mexforward.lss.emc.com (mexforward.lss.emc.com [128.222.32.20]) by ietfa.amsl.com (Postfix) with ESMTP id 658F921F8462 for <dc@ietf.org>; Tue,  3 Jan 2012 09:03:00 -0800 (PST)
Received: from hop04-l1d11-si02.isus.emc.com (HOP04-L1D11-SI02.isus.emc.com [10.254.111.55]) by mexforward.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q03H2wPR011747 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Tue, 3 Jan 2012 12:02:58 -0500
Received: from mailhub.lss.emc.com (mailhubhoprd04.lss.emc.com [10.254.222.226]) by hop04-l1d11-si02.isus.emc.com (RSA Interceptor); Tue, 3 Jan 2012 12:02:45 -0500
Received: from mxhub12.corp.emc.com (mxhub12.corp.emc.com [10.254.92.107]) by mailhub.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q03H2i3C025013; Tue, 3 Jan 2012 12:02:44 -0500
Received: from mx14a.corp.emc.com ([169.254.1.216]) by mxhub12.corp.emc.com ([10.254.92.107]) with mapi; Tue, 3 Jan 2012 12:02:44 -0500
From: <david.black@emc.com>
To: <adalela@cisco.com>
Date: Tue, 3 Jan 2012 12:02:37 -0500
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczKJ4svx6tIhjxTSoCEVFtst/+CNAAB77UAAAHJgoAAABTxIA==
Message-ID: <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBC5@MX14A.corp.emc.com>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us><7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com><618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com><CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com> <CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com> <4F030418.1070202@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com> <4F031689.1050303@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBA5@MX14A.corp.emc.com>
In-Reply-To: <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBA5@MX14A.corp.emc.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-cr-puzzleid: {792B1E50-00FB-46CD-95BA-E991C13E936C}
x-cr-hashedpuzzle: DxwQ Ffzd FjsZ Gkcp HMfz Lmg1 NCtl NDQp OwY6 Q6gB SAUU Saie VcAH YQ+p YyFj Y5od; 2; YQBkAGEAbABlAGwAYQBAAGMAaQBzAGMAbwAuAGMAbwBtADsAZABjAEAAaQBlAHQAZgAuAG8AcgBnAA==; Sosha1_v1; 7; {792B1E50-00FB-46CD-95BA-E991C13E936C}; ZABhAHYAaQBkAC4AYgBsAGEAYwBrAEAAZQBtAGMALgBjAG8AbQA=; Tue, 03 Jan 2012 17:02:37 GMT; UgBFADoAIABbAGQAYwBdACAAWwBhAHIAbQBkAF0AIABJAFAAIABvAHYAZQByACAASQBQACAAcwBvAGwAdQB0AGkAbwBuACAAZgBvAHIAIABkAGEAdABhACAAYwBlAG4AdABlAHIAIABpAG4AdABlAHIAYwBvAG4AbgBlAGMAdAA=
acceptlanguage: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-EMM-MHVC: 1
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 17:03:01 -0000

Just saw this:

> If the VM can be moved, then all you need to do is install a temporary
> redirect of packets to the new location. Each host will refresh the MAC
> after 15-30 seconds. If the packets are redirected from old to new
> location for these 30 seconds, the redirect can be aged automatically.
> This happens all the time in mobile networks in what is called a "fast
> handoff" where you redirect the packets until handoff is completed.

"all you need to do" should incur a $5 fine for the network version of
"a simple matter of programming" :-).

I see a bunch of problems here:

Mobile networks aren't running VMs.  ARP caches matter - a little bit of
Googling turns up multiple-minute timeouts for some older OS versions and 4
hours for some routers.

In addition, some VMs need static MACs (e.g., software license key or
encryption key may be bound to the MAC).  There will be other reasons why
a VM's MAC cannot be changed when it is live migrated.

This also doesn't cover physical host usage of the burned-in hardware MAC
on the NIC.  I assume that those need to continue to work as-is.  With only
46 bits to work with, I'm guessing that the Hierarchical MAC approach takes
all the locally administered MACs.  If any of those MACs are already in use,
that'll be a problem.

It also looks like it's seriously disruptive to existing L2 network
management.

I see no mention of TRILL, SPB or PBB in the draft that advocates
Hierarchical MACs - those are three existing approaches to this problem
that can aggregate without forcing MAC changes on live VM migration.  At
the very least, a solid comparison to existing technology is in order.

Finally, this sort of MAC hierarchy work probably belongs in IEEE, not
IETF.

Thanks,
--David


> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of david.black@emc.com
> Sent: Tuesday, January 03, 2012 11:42 AM
> To: adalela@cisco.com
> Cc: dc@ietf.org
> Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
>
> Ashish,
>
> > > [AD] The higher bits identify a switch - it's a switch-id.
>
> That breaks VM migration across switches by forcing a MAC change.
>
> Thanks,
> --David
>
> > -----Original Message-----
> > From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Ashish Dalela (adalela)
> > Sent: Tuesday, January 03, 2012 11:15 AM
> > To: robert@raszuk.net
> > Cc: Pedro Marques; dc@ietf.org
> > Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
> >
> > Robert,
> >
> > Please see inline.
> >
> > -----Original Message-----
> > From: Robert Raszuk [mailto:robert@raszuk.net]
> > Sent: Tuesday, January 03, 2012 8:24 PM
> > To: Ashish Dalela (adalela)
> > Cc: Pedro Marques; dc@ietf.org
> > Subject: Re: [dc] [armd] IP over IP solution for data center
> > interconnect
> >
> > Ashish,
> >
> > OK let's just discuss what is in your draft on Hierarchical Addressing.
> >
> > 1. You have 48 bits 32 go for host remaining 16 goes for switches. How
> > do you aggregate at the TOR or AGGR switch boundary ? Are you assuming
> > single HOST - SWITCH with max 65K flat macs ?
> >
> > [AD] The higher bits identify a switch - it's a switch-id. The hosts are
> > dynamically assigned a host-id under that switch. Let's assume 23 bits
> > are for switch-id and 23 bits for host-id. To forward a packet to the
> > host, you only have to look at the first 23 bits. That's a MAC prefix to
> > route against.
> >
> > [AD] You can have 2^23 switches in a network and 2^23 hosts under each
> > switch.
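A minimal sketch of the prefix lookup described above, assuming the 46 usable bits split into a 23-bit switch-id and a 23-bit host-id (the encoding is an assumption for illustration, not something the draft fixes):

```python
# Sketch only: assumed layout with the low 46 bits of the MAC split into a
# 23-bit switch-id (high) and a 23-bit host-id (low). Real MACs reserve the
# I/G and U/L bits, which this toy model simply masks off.

HOST_BITS = 23
HOST_MASK = (1 << HOST_BITS) - 1
USABLE_MASK = (1 << 46) - 1

def parse_hmac(mac: str) -> tuple[int, int]:
    """Split a hierarchical MAC string into (switch_id, host_id)."""
    value = int(mac.replace(":", ""), 16) & USABLE_MASK
    return value >> HOST_BITS, value & HOST_MASK

def next_hop(mac: str, switch_routes: dict[int, str]) -> str:
    """Forward on the switch-id prefix alone, like an IP route lookup."""
    switch_id, _ = parse_hmac(mac)
    return switch_routes[switch_id]

# hypothetical table: one entry per switch, regardless of how many VMs sit behind it
routes = {1: "port-7"}
print(next_hop("00:00:00:80:00:01", routes))  # -> port-7
```

The point being debated is exactly this property: the lookup table is keyed by switch-id, so its size tracks the number of switches, not the number of VMs.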
> >
> > 2. Can you deploy this on existing VMs and existing switches ?
> >
> > [AD] What do you mean by this? Any VM can be configured with any MAC.
> > Any physical host can be configured with any MAC on any logical
> > interface. Configuration standpoint this is possible. Forwarding
> > standpoint, that's another question.
> >
> > 3. What new protocol you envision to use to distribute those new MACs ?
> >
> > [AD] IS-IS extensions. It can be TRILL extensions.
> >
> > 4. What is the advantage of using this vs ILNP if we assume that hosts
> > should be modified ?
> >
> > [AD] I'm not familiar with the ILNP work, but I'm assuming you are
> > talking about Loc-Id separation. If not, correct me. If yes, each Loc-Id
> > binding can be a host route, with mobility. These host-routes are a
> > scaling problem. Traditional IP packet have IP as ID and MAC as LOC. We
> > are just extending this LOC to make it actually location aware rather
> > than a flat address which is fixed regardless of where the location is.
> >
> > 5. The proposal does not support aggregation .. even the draft says it
> > :)
> >
> > "The total number of hardware entries anywhere in the network equals the
> >
> > total number of switches and remains agnostic of VM mobility."
> >
> > [AD] For 1 million VM, and 25 VM per host, you need 40,000 hosts. With
> > 48 port access switches, you need 833 switches. That's the routing table
> > size for any switch in the datacenter - core, aggregation, access.
> > Contrast this with host-routes, if each VM talks to 100 VMs, then each
> > access switch needs 48 * 25 * 100 = 120,000 host routes. Just because
> > the network prefix is 23 bits does not mean we have to store 10^23
> > prefixes. We have to store only as many switches as there are in the
> > network. Ratio between VM : switch is 1000 : 1 (today, assuming 48 port
> > access and 20 VM per port). That means instead of storing host-routes
> > which will grow proportional to VM growth, we store switch-id, which
> > will grow at 1000 times slower rate. As VM density increases, this
> > growth rate is further slowed down. There are other techniques to
> > further reduce the rate of growth. But in any case, 1000 times slower is
> > a lot slow.
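The arithmetic in the paragraph above can be re-derived directly (all ratios are the email's own assumptions; note 40,000 / 48 rounds down to 833):

```python
# Re-deriving the numbers quoted above; every ratio is the email's assumption.
vms = 1_000_000
vms_per_host = 25
ports_per_switch = 48

hosts = vms // vms_per_host            # 40,000 physical hosts
switches = hosts // ports_per_switch   # 833 access switches (834 with ceiling)

# Hierarchical MACs: each switch stores one route per switch in the network.
hmac_routes_per_switch = switches      # 833

# Flat host routes: 48 ports * 25 VMs * 100 peers per VM.
host_routes_per_access_switch = ports_per_switch * vms_per_host * 100  # 120,000

print(hosts, switches, host_routes_per_access_switch)
```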
> >
> > So if I have 100K switches I can not do any aggregation and need to
> > "route" 100K MAC addresses.
> >
> > [AD] I don't know how you came to that conclusion. Think of HMAC as an
> > IP address. Instead of 32 bits it is 46 bits. You route by prefixes in
> > L3, and you are routing by the same prefixes here. Just as you aggregate
> > IP, same way you aggregate MAC. It's not different.
> >
> > 6. Who provides me the mapping between switch mac and host/vm mac behind
> >
> > such switch ? Do switches proxy arp globally within your domain ?
> >
> > [AD] Variation of the same question. Above should answer it.
> >
> > Thx,
> > R.
> >
> >
> > > Robert,
> > >
> > >>> So you are advocating solution which is based on encapsulation -
> > that
> > > is fine.
> > >
> > > No, I'm not. Did you read the draft I had mentioned?
> > > Hierarchical MAC is not encapsulation. It is one 48 bit address.
> > >
> > >>> However how could you ever arrive at the conclusion that HMACs would
> > >>> scale better then "anything we know". Well I don't know about you,
> > > but I
> > >>> know that the key to scaling is ability to aggregate. And it is not
> > > that
> > >>> huge mystery that MACs aggregate rather poorly while there are quite
> > >>> well deployed protocols (be it IPv4 or IPv6) which aggregate
> > natively
> > >
> > > You are hitting the issue on the nail. So, read the draft I mentioned.
> > > Hierarchical MAC is higher bits "network prefix" and lower bits "host
> > > id".
> > > That's summarizable like IP address and aggregated.
> > > It has 46 bits to modify so larger than IPv4 internet.
> > >
> > > I won't comment on the rest, because you have made an assumption about
> > > encapsulation.
> > >
> > > I refer to this -
> > > http://tools.ietf.org/html/draft-dalela-dc-approaches-00.
> > >
> > > Thanks, Ashish
> > >
> > >
> > > -----Original Message-----
> > > From: Robert Raszuk [mailto:robert@raszuk.net]
> > > Sent: Tuesday, January 03, 2012 7:05 PM
> > > To: Ashish Dalela (adalela)
> > > Cc: Pedro Marques; dc@ietf.org
> > > Subject: Re: [dc] [armd] IP over IP solution for data center
> > > interconnect
> > >
> > > Ashish,
> > >
> > >> The issues of scale you mentioned don't exist in Hierarchical MACs,
> > >> which scale better than anything we know of.
> > >
> > > So you are advocating solution which is based on encapsulation - that
> > is
> > > fine.
> > >
> > > However how could you ever arrive at the conclusion that HMACs would
> > > scale better then "anything we know". Well I don't know about you, but
> > I
> > > know that the key to scaling is ability to aggregate. And it is not
> > that
> > >
> > > huge mystery that MACs aggregate rather poorly while there are quite
> > > well deployed protocols (be it IPv4 or IPv6) which aggregate natively.
> > >
> > > For inter-dc this is IMHO a must. A must even if you build it using
> > > traditional routers or OF enabled switches - does not matter.
> > >
> > >> I don't want to split the requirements into multiple use-cases
> > >> because then this DC group will be many groups - one doing L2 and
> > >> another doing L3. That I think you will agree is not optimal for
> > >> anyone
> > >
> > > Why MAC-in-IP does not solve it for everyone ? And there are deployed
> > > solutions already ..
> > >
> > > IMHO what this group should accomplish is not to try to reinvent the
> > > world, but perhaps as example discuss where is the right boundary of
> > > encapsulation, how should we communicate between network and hosts,
> > what
> > >
> > > kind of DC instrumentation should be IETF blessed for easy integration
> > > (ie min subset of functionality it should possess etc .... )
> > >
> > > R.
> > >
> > > _______________________________________________
> > > dc mailing list
> > > dc@ietf.org
> > > https://www.ietf.org/mailman/listinfo/dc
> > >
> > >
> >
> > _______________________________________________
> > dc mailing list
> > dc@ietf.org
> > https://www.ietf.org/mailman/listinfo/dc
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc


From adalela@cisco.com  Tue Jan  3 09:10:07 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 9AE4C21F853A for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 09:10:07 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.34
X-Spam-Level: 
X-Spam-Status: No, score=-2.34 tagged_above=-999 required=5 tests=[AWL=0.259,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id G4gsetkadra2 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 09:10:06 -0800 (PST)
Received: from bgl-iport-2.cisco.com (bgl-iport-2.cisco.com [72.163.197.26]) by ietfa.amsl.com (Postfix) with ESMTP id 82B1A21F85C6 for <dc@ietf.org>; Tue,  3 Jan 2012 09:10:05 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=8288; q=dns/txt; s=iport; t=1325610605; x=1326820205; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=cDHl/5/Zz/DxOwaBhRKfH15VFyNE1pFqonvSc2iDdiA=; b=M6FR/nyZ6FiglOz6qsYaG4uVQACTyThOs7aDlL+mA0PAQuCPcAJwChs7 pS/aKES85o1mB593Gqn7G42TY94S8dbYyQTx3JqiEdwYPASqBHLe6RYAT TfYeenIQWgDX2MHI1M1LV06FGHXxnIFxHRtHShlCMi+nlIR7WlKGX++F5 o=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AjIPANg0A09Io8UY/2dsb2JhbAA5CoIFq2CBcgEBAQQBAQEPAR0KNAsMBAIBCBEEAQELBhcBBgEmHwkIAQEECwgIEweHYJdPAZ10iFaCVmMEiDWfCQ
X-IronPort-AV: E=Sophos;i="4.71,450,1320624000";  d="scan'208";a="2689420"
Received: from vla196-nat.cisco.com (HELO bgl-core-1.cisco.com) ([72.163.197.24]) by bgl-iport-2.cisco.com with ESMTP; 03 Jan 2012 17:10:03 +0000
Received: from xbh-bgl-412.cisco.com (xbh-bgl-412.cisco.com [72.163.129.202]) by bgl-core-1.cisco.com (8.14.3/8.14.3) with ESMTP id q03HA3t1009522; Tue, 3 Jan 2012 17:10:03 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-412.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Tue, 3 Jan 2012 22:40:03 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Tue, 3 Jan 2012 22:40:01 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B256BA@XMB-BGL-416.cisco.com>
In-Reply-To: <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBA5@MX14A.corp.emc.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczKJ4svx6tIhjxTSoCEVFtst/+CNAAB77UAAAHJgoAAAMUAQA==
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us><7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com><618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com><CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com><CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com><4F030418.1070202@raszuk.net><618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com><4F031689.1050303@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBA5@MX14A.corp.emc.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: <david.black@emc.com>
X-OriginalArrivalTime: 03 Jan 2012 17:10:03.0109 (UTC) FILETIME=[88550950:01CCCA3A]
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 17:10:07 -0000

David,

It doesn't have to. A VM will send a gratuitous ARP, announcing its
MAC-to-IP binding. If you shut the interface before the move, unshut it
after the move, and use the unshut to get a new MAC, it can be done.
It's the implementation that we have to talk about. We are inventing so
many host-based solutions - can't we discuss this change?
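For reference, this is what a gratuitous ARP announcement actually carries on the wire (a self-contained sketch following RFC 826; the MAC and IP values are made up for illustration):

```python
# Illustration only: the bytes of a gratuitous ARP announcement, the frame a
# migrated VM would broadcast so peers update their MAC-to-IP bindings.
import struct

def gratuitous_arp(mac: bytes, ip: bytes) -> bytes:
    """Build a broadcast Ethernet frame carrying a gratuitous ARP for (mac, ip)."""
    eth = b"\xff" * 6 + mac + struct.pack("!H", 0x0806)  # dst bcast, src, EtherType ARP
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)      # Ethernet/IPv4, ARP request
    arp += mac + ip                                      # sender MAC and IP
    arp += b"\x00" * 6 + ip                              # target: zero MAC, own IP
    return eth + arp

frame = gratuitous_arp(b"\x02\x00\x00\x00\x00\x01", bytes([10, 0, 0, 5]))
```

The target IP equals the sender IP, which is what makes the ARP "gratuitous": it asks about the sender's own address purely to populate peers' caches.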

Thanks, Ashish


-----Original Message-----
From: david.black@emc.com [mailto:david.black@emc.com]
Sent: Tuesday, January 03, 2012 10:12 PM
To: Ashish Dalela (adalela)
Cc: dc@ietf.org
Subject: RE: [dc] [armd] IP over IP solution for data center
interconnect

Ashish,

> > [AD] The higher bits identify a switch - it's a switch-id.

That breaks VM migration across switches by forcing a MAC change.

Thanks,
--David

> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Ashish Dalela (adalela)
> Sent: Tuesday, January 03, 2012 11:15 AM
> To: robert@raszuk.net
> Cc: Pedro Marques; dc@ietf.org
> Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
>
> Robert,
>
> Please see inline.
>
> -----Original Message-----
> From: Robert Raszuk [mailto:robert@raszuk.net]
> Sent: Tuesday, January 03, 2012 8:24 PM
> To: Ashish Dalela (adalela)
> Cc: Pedro Marques; dc@ietf.org
> Subject: Re: [dc] [armd] IP over IP solution for data center
> interconnect
>
> Ashish,
>
> OK let's just discuss what is in your draft on Hierarchical Addressing.
>
> 1. You have 48 bits 32 go for host remaining 16 goes for switches. How
> do you aggregate at the TOR or AGGR switch boundary ? Are you assuming
> single HOST - SWITCH with max 65K flat macs ?
>
> [AD] The higher bits identify a switch - it's a switch-id. The hosts are
> dynamically assigned a host-id under that switch. Let's assume 23 bits
> are for switch-id and 23 bits for host-id. To forward a packet to the
> host, you only have to look at the first 23 bits. That's a MAC prefix to
> route against.
>
> [AD] You can have 2^23 switches in a network and 2^23 hosts under each
> switch.
>
> 2. Can you deploy this on existing VMs and existing switches ?
>
> [AD] What do you mean by this? Any VM can be configured with any MAC.
> Any physical host can be configured with any MAC on any logical
> interface. Configuration standpoint this is possible. Forwarding
> standpoint, that's another question.
>
> 3. What new protocol you envision to use to distribute those new MACs ?
>
> [AD] IS-IS extensions. It can be TRILL extensions.
>
> 4. What is the advantage of using this vs ILNP if we assume that hosts
> should be modified ?
>
> [AD] I'm not familiar with the ILNP work, but I'm assuming you are
> talking about Loc-Id separation. If not, correct me. If yes, each Loc-Id
> binding can be a host route, with mobility. These host-routes are a
> scaling problem. Traditional IP packet have IP as ID and MAC as LOC. We
> are just extending this LOC to make it actually location aware rather
> than a flat address which is fixed regardless of where the location is.
>
> 5. The proposal does not support aggregation .. even the draft says it
> :)
>
> "The total number of hardware entries anywhere in the network equals the
>
> total number of switches and remains agnostic of VM mobility."
>
> [AD] For 1 million VM, and 25 VM per host, you need 40,000 hosts. With
> 48 port access switches, you need 833 switches. That's the routing table
> size for any switch in the datacenter - core, aggregation, access.
> Contrast this with host-routes, if each VM talks to 100 VMs, then each
> access switch needs 48 * 25 * 100 = 120,000 host routes. Just because
> the network prefix is 23 bits does not mean we have to store 10^23
> prefixes. We have to store only as many switches as there are in the
> network. Ratio between VM : switch is 1000 : 1 (today, assuming 48 port
> access and 20 VM per port). That means instead of storing host-routes
> which will grow proportional to VM growth, we store switch-id, which
> will grow at 1000 times slower rate. As VM density increases, this
> growth rate is further slowed down. There are other techniques to
> further reduce the rate of growth. But in any case, 1000 times slower is
> a lot slow.
>
> So if I have 100K switches I can not do any aggregation and need to
> "route" 100K MAC addresses.
>
> [AD] I don't know how you came to that conclusion. Think of HMAC as an
> IP address. Instead of 32 bits it is 46 bits. You route by prefixes in
> L3, and you are routing by the same prefixes here. Just as you aggregate
> IP, same way you aggregate MAC. It's not different.
>
> 6. Who provides me the mapping between switch mac and host/vm mac behind
>
> such switch ? Do switches proxy arp globally within your domain ?
>
> [AD] Variation of the same question. Above should answer it.
>
> Thx,
> R.
>
>
> > Robert,
> >
> >>> So you are advocating solution which is based on encapsulation - that
> > is fine.
> >
> > No, I'm not. Did you read the draft I had mentioned?
> > Hierarchical MAC is not encapsulation. It is one 48 bit address.
> >
> >>> However how could you ever arrive at the conclusion that HMACs would
> >>> scale better then "anything we know". Well I don't know about you,
> > but I
> >>> know that the key to scaling is ability to aggregate. And it is not
> > that
> >>> huge mystery that MACs aggregate rather poorly while there are quite
> >>> well deployed protocols (be it IPv4 or IPv6) which aggregate natively
> >
> > You are hitting the issue on the nail. So, read the draft I mentioned.
> > Hierarchical MAC is higher bits "network prefix" and lower bits "host
> > id".
> > That's summarizable like IP address and aggregated.
> > It has 46 bits to modify so larger than IPv4 internet.
> >
> > I won't comment on the rest, because you have made an assumption about
> > encapsulation.
> >
> > I refer to this -
> > http://tools.ietf.org/html/draft-dalela-dc-approaches-00.
> >
> > Thanks, Ashish
> >
> >
> > -----Original Message-----
> > From: Robert Raszuk [mailto:robert@raszuk.net]
> > Sent: Tuesday, January 03, 2012 7:05 PM
> > To: Ashish Dalela (adalela)
> > Cc: Pedro Marques; dc@ietf.org
> > Subject: Re: [dc] [armd] IP over IP solution for data center
> > interconnect
> >
> > Ashish,
> >
> >> The issues of scale you mentioned don't exist in Hierarchical MACs,
> >> which scale better than anything we know of.
> >
> > So you are advocating solution which is based on encapsulation - that is
> > fine.
> >
> > However how could you ever arrive at the conclusion that HMACs would
> > scale better then "anything we know". Well I don't know about you, but I
> > know that the key to scaling is ability to aggregate. And it is not that
> >
> > huge mystery that MACs aggregate rather poorly while there are quite
> > well deployed protocols (be it IPv4 or IPv6) which aggregate natively.
> >
> > For inter-dc this is IMHO a must. A must even if you build it using
> > traditional routers or OF enabled switches - does not matter.
> >
> >> I don't want to split the requirements into multiple use-cases
> >> because then this DC group will be many groups - one doing L2 and
> >> another doing L3. That I think you will agree is not optimal for
> >> anyone
> >
> > Why MAC-in-IP does not solve it for everyone ? And there are deployed
> > solutions already ..
> >
> > IMHO what this group should accomplish is not to try to reinvent the
> > world, but perhaps as example discuss where is the right boundary of
> > encapsulation, how should we communicate between network and hosts, what
> >
> > kind of DC instrumentation should be IETF blessed for easy integration
> > (ie min subset of functionality it should possess etc .... )
> >
> > R.
> >
> > _______________________________________________
> > dc mailing list
> > dc@ietf.org
> > https://www.ietf.org/mailman/listinfo/dc
> >
> >
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc


From pedro.r.marques@gmail.com  Tue Jan  3 09:12:34 2012
Return-Path: <pedro.r.marques@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 974DB21F84F7 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 09:12:34 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.134
X-Spam-Level: 
X-Spam-Status: No, score=-3.134 tagged_above=-999 required=5 tests=[AWL=0.465,  BAYES_00=-2.599, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 5uxJSAYvMXuv for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 09:12:34 -0800 (PST)
Received: from mail-gx0-f172.google.com (mail-gx0-f172.google.com [209.85.161.172]) by ietfa.amsl.com (Postfix) with ESMTP id E18C421F8449 for <dc@ietf.org>; Tue,  3 Jan 2012 09:12:33 -0800 (PST)
Received: by ggnk5 with SMTP id k5so11225848ggn.31 for <dc@ietf.org>; Tue, 03 Jan 2012 09:12:33 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; bh=Dp4/jplJDJS5pTSIk8otVS/0o5hwGARec1rO0fnMJrI=; b=fAVL+CcFj02o3YG4Td7hn4cIV4tfKceigs/Ezm0lSqC/BO5xR4QL/paO0aX6di3mjc WtTXQZKxeiDuwixBjoIboDaBjOPm/jtDMW/9yUk3QLIhf0+YA7lK8ZL8PagS0OsYIM59 rgUPEo9vB/InjowUr8WBmxgHz4Z7Jm7N1mKtM=
MIME-Version: 1.0
Received: by 10.50.15.161 with SMTP id y1mr74567600igc.4.1325610753360; Tue, 03 Jan 2012 09:12:33 -0800 (PST)
Received: by 10.231.60.193 with HTTP; Tue, 3 Jan 2012 09:12:33 -0800 (PST)
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com> <618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com> <4EFC947A.4020007@riw.us> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com> <CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com> <CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com>
Date: Tue, 3 Jan 2012 09:12:33 -0800
Message-ID: <CAMXVrt7j8sgFg8pWO7QVr9WQKNATSnnyVJDP-c-ryykD3sEmTA@mail.gmail.com>
From: Pedro Marques <pedro.r.marques@gmail.com>
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 17:12:34 -0000

Ashish,

On Tue, Jan 3, 2012 at 3:38 AM, Ashish Dalela (adalela)
<adalela@cisco.com> wrote:
>
> Pedro,
>
> I'm not advocating classic L2.

> L2 forwarding does not necessarily mean a broadcast domain.

The forwarding algorithm is perhaps an implementation detail. It would
be great to understand what service model you propose. An IEEE 802
service model does imply a broadcast domain.

> E.g. you can forward packets based on hierarchical MAC without a VLAN and
> hence never broadcast. The packets will be forwarded just like static IP
> routing but based on a MAC prefix.

If your service model only provides IP transport, why do you care about
the MAC address ? It seems totally irrelevant to the service being
provided. On the other hand, to implement an IEEE 802 compatible
service, MAC addresses are required.

> So far L2 = broadcast and L3 = non-broadcast.

It would be best to speak in terms of the service functionality to
applications.

> Hierarchical MAC doesn't have that assumption. I would like to converge
> TCP/IP and non-TCP/IP, broadcast and non-broadcast, traffic into a
> single forwarding approach that will scale.

As long as that is free it sounds good... it is however important to
keep in mind that there is a class of deployments for which non-IP and
broadcast traffic is irrelevant and for which cost is the single most
important factor.

> If you have VLAN you get broadcast, multicast and unicast. If you don't
> have VLAN, you just get unicast and multicast. Shift from one to another
> should require only a configuration of a VLAN tag on the port, not a
> change of the entire infrastructure. Agree?
>

I'm sorry but I'm not able to understand what you mean.

> The thing to bear in mind is that there will be customers who want
> either, and a provider has to use the same infra to support both. If you
> lock a provider into an exclusively L2 or L3 approach, then they can't
> use the same infrastructure for supporting either.
>

That is very confusing to me. Why would multiple infrastructures be
necessary to support different services ? For instance in the carrier
space, L3 overlays and L2 overlays are supported on top of the same
infrastructure every day.

> I don't want to split the requirements into multiple use-cases because
> then this DC group will be many groups - one doing L2 and another doing L3.

If it would split into many groups, each solving a specific focused
problem, then progress could actually be made. For instance both an L2
and an L3 group could actually converge on requirements.

That sounds like a much better option than to search for the holy grail
of a solution that provides an IEEE 802 service at no extra cost.

> That I think you will agree is not optimal for anyone.

It is not clear what you are trying to optimize.

> So, I'm looking for an approach that can converge all requirements into
> a single forwarding plane, on the same infrastructure.

The infrastructure and the service the overlay network provides to VMs
are different layers. Conflating the two is not useful.

  Pedro.

From david.black@emc.com  Tue Jan  3 09:16:03 2012
Return-Path: <david.black@emc.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 59E8D21F85AD for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 09:16:03 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.589
X-Spam-Level: 
X-Spam-Status: No, score=-106.589 tagged_above=-999 required=5 tests=[AWL=0.010, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id W6bAMk2vy9gH for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 09:16:02 -0800 (PST)
Received: from mexforward.lss.emc.com (mexforward.lss.emc.com [128.222.32.20]) by ietfa.amsl.com (Postfix) with ESMTP id 1A4F021F8449 for <dc@ietf.org>; Tue,  3 Jan 2012 09:16:01 -0800 (PST)
Received: from hop04-l1d11-si02.isus.emc.com (HOP04-L1D11-SI02.isus.emc.com [10.254.111.55]) by mexforward.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q03HFrfj026583 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Tue, 3 Jan 2012 12:15:55 -0500
Received: from mailhub.lss.emc.com (mailhub.lss.emc.com [10.254.222.226]) by hop04-l1d11-si02.isus.emc.com (RSA Interceptor); Tue, 3 Jan 2012 12:15:35 -0500
Received: from mxhub09.corp.emc.com (mxhub09.corp.emc.com [10.254.92.104]) by mailhub.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q03HDVrQ005744; Tue, 3 Jan 2012 12:15:33 -0500
Received: from mx14a.corp.emc.com ([169.254.1.216]) by mxhub09.corp.emc.com ([10.254.92.104]) with mapi; Tue, 3 Jan 2012 12:14:39 -0500
From: <david.black@emc.com>
To: <adalela@cisco.com>
Date: Tue, 3 Jan 2012 12:14:37 -0500
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczKJ4svx6tIhjxTSoCEVFtst/+CNAAB77UAAAHJgoAAAMUAQAAATmQA
Message-ID: <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBD6@MX14A.corp.emc.com>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us><7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com><618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com><CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com><CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com><4F030418.1070202@raszuk.net><618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com><4F031689.1050303@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBA5@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B256BA@XMB-BGL-416.cisco.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B256BA@XMB-BGL-416.cisco.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-EMM-MHVC: 1
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 17:16:03 -0000

It's not compatible with widely deployed VM live migration mechanisms that
don't change the MAC.  As noted in my other follow-up email, things will
break, and I'd prefer to look at solutions that don't break things that
currently work.

IMHO, this thread is also well off-track, as you're advocating a solution
as opposed to coming up with a concise problem statement.

Thanks,
--David


> -----Original Message-----
> From: Ashish Dalela (adalela) [mailto:adalela@cisco.com]
> Sent: Tuesday, January 03, 2012 12:10 PM
> To: Black, David
> Cc: dc@ietf.org
> Subject: RE: [dc] [armd] IP over IP solution for data center interconnect
>
> David,
>
> It doesn't have to. A VM will send a Grat ARP. It's informing about its
> MAC address to IP binding. If you shut the interface before move and
> unshut it after move, and use the unshut to get a new MAC it can be
> done. It's implementation that we have to talk about. We are inventing
> so many host based solutions - can't we discuss this change?
>
> Thanks, Ashish
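For a concrete picture of the gratuitous ARP mentioned above: it is just a broadcast ARP frame announcing the sender's own MAC-to-IP binding. A minimal sketch of the frame layout (field layout per RFC 826; the MAC and IP used here are hypothetical, not from the thread):

```python
import socket
import struct

def grat_arp(mac: bytes, ip: str) -> bytes:
    """Build a gratuitous ARP frame announcing a (MAC, IP) binding.
    Sender and target protocol address are the same IP, which is what
    makes the ARP 'gratuitous'."""
    spa = socket.inet_aton(ip)                            # the announced IPv4 address
    eth = b"\xff" * 6 + mac + struct.pack("!H", 0x0806)   # broadcast dst, ARP ethertype
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 2)       # Ethernet/IPv4, hlen=6, plen=4, reply
    arp += mac + spa                                      # sender hardware/protocol address
    arp += b"\x00" * 6 + spa                              # target: unknown MAC, same IP
    return eth + arp

frame = grat_arp(b"\x02\x00\x00\x00\x00\x2a", "10.0.0.42")
assert len(frame) == 42   # 14-byte Ethernet header + 28-byte ARP payload
```

Switches that snoop such frames update their MAC-to-IP and location state, which is why hypervisors emit one on VM migration.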
>
>
> -----Original Message-----
> From: david.black@emc.com [mailto:david.black@emc.com]
> Sent: Tuesday, January 03, 2012 10:12 PM
> To: Ashish Dalela (adalela)
> Cc: dc@ietf.org
> Subject: RE: [dc] [armd] IP over IP solution for data center
> interconnect
>
> Ashish,
>
> > > [AD] The higher bits identify a switch - it's a switch-id.
>
> That breaks VM migration across switches by forcing a MAC change.
>
> Thanks,
> --David
>
> > -----Original Message-----
> > From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> Ashish Dalela (adalela)
> > Sent: Tuesday, January 03, 2012 11:15 AM
> > To: robert@raszuk.net
> > Cc: Pedro Marques; dc@ietf.org
> > Subject: Re: [dc] [armd] IP over IP solution for data center
> interconnect
> >
> > Robert,
> >
> > Please see inline.
> >
> > -----Original Message-----
> > From: Robert Raszuk [mailto:robert@raszuk.net]
> > Sent: Tuesday, January 03, 2012 8:24 PM
> > To: Ashish Dalela (adalela)
> > Cc: Pedro Marques; dc@ietf.org
> > Subject: Re: [dc] [armd] IP over IP solution for data center
> > interconnect
> >
> > Ashish,
> >
> > OK let's just discuss what is in your draft on Hierarchical Addressing.
> >
> > 1. You have 48 bits 32 go for host remaining 16 goes for switches. How
> > do you aggregate at the TOR or AGGR switch boundary ? Are you assuming
> > single HOST - SWITCH with max 65K flat macs ?
> >
> > [AD] The higher bits identify a switch - it's a switch-id. The hosts are
> > dynamically assigned a host-id under that switch. Let's assume 23 bits
> > are for switch-id and 23 bits for host-id. To forward a packet to the
> > host, you only have to look at the first 23 bits. That's a MAC prefix to
> > route against.
> >
> > [AD] You can have 2^23 switches in a network and 2^23 hosts under each
> > switch.
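The switch-id/host-id split described above can be sketched as follows (the 23/23 bit split is taken from the text; the table contents and port names are hypothetical):

```python
HOST_BITS = 23        # low-order bits: host-id, assigned under a switch
SWITCH_BITS = 23      # high-order bits: switch-id (46 routable bits total)

def make_hmac(sw: int, host: int) -> int:
    """Compose a hierarchical MAC from a switch-id and a host-id."""
    return (sw << HOST_BITS) | host

def switch_id(hmac: int) -> int:
    """Extract the switch-id prefix a forwarder routes against."""
    return hmac >> HOST_BITS

# Forwarding table keyed only by switch-id: one entry per switch,
# regardless of how many hosts are dynamically assigned under it.
fib = {0x1A2B: "port-7"}

assert fib[switch_id(make_hmac(0x1A2B, 42))] == "port-7"
assert fib[switch_id(make_hmac(0x1A2B, 99))] == "port-7"   # same entry, different host
```

The point of the sketch: the lookup never touches the host-id, so table size tracks the number of switches, not the number of hosts or VMs.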
> >
> > 2. Can you deploy this on existing VMs and existing switches ?
> >
> > [AD] What do you mean by this? Any VM can be configured with any MAC.
> > Any physical host can be configured with any MAC on any logical
> > interface. Configuration standpoint this is possible. Forwarding
> > standpoint, that's another question.
> >
> > 3. What new protocol you envision to use to distribute those new MACs ?
> >
> > [AD] IS-IS extensions. It can be TRILL extensions.
> >
> > 4. What is the advantage of using this vs ILNP if we assume that hosts
> > should be modified ?
> >
> > [AD] I'm not familiar with the ILNP work, but I'm assuming you are
> > talking about Loc-Id separation. If not, correct me. If yes, each Loc-Id
> > binding can be a host route, with mobility. These host-routes are a
> > scaling problem. Traditional IP packet have IP as ID and MAC as LOC. We
> > are just extending this LOC to make it actually location aware rather
> > than a flat address which is fixed regardless of where the location is.
> >
> > 5. The proposal does not support aggregation .. even the draft says it
> > :)
> >
> > "The total number of hardware entries anywhere in the network equals the
> >
> > total number of switches and remains agnostic of VM mobility."
> >
> > [AD] For 1 million VM, and 25 VM per host, you need 40,000 hosts. With
> > 48 port access switches, you need 833 switches. That's the routing table
> > size for any switch in the datacenter - core, aggregation, access.
> > Contrast this with host-routes, if each VM talks to 100 VMs, then each
> > access switch needs 48 * 25 * 100 = 120,000 host routes. Just because
> > the network prefix is 23 bits does not mean we have to store 10^23
> > prefixes. We have to store only as many switches as there are in the
> > network. Ratio between VM : switch is 1000 : 1 (today, assuming 48 port
> > access and 20 VM per port). That means instead of storing host-routes
> > which will grow proportional to VM growth, we store switch-id, which
> > will grow at 1000 times slower rate. As VM density increases, this
> > growth rate is further slowed down. There are other techniques to
> > further reduce the rate of growth. But in any case, 1000 times slower is
> > a lot slow.
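The arithmetic in the scaling argument above can be checked directly (all numbers exactly as given in the thread; the 100-peer assumption is the one Ashish states):

```python
vms = 1_000_000
vms_per_host = 25
ports_per_switch = 48

hosts = vms // vms_per_host            # 40,000 physical hosts
switches = hosts // ports_per_switch   # 833 access switches = table size everywhere

# Host-route alternative: each access switch holds routes for the
# ~100 peers of every VM attached below it.
host_routes = ports_per_switch * vms_per_host * 100

assert hosts == 40_000
assert switches == 833
assert host_routes == 120_000
```

So under these assumptions the switch-id table (833 entries) is about 144x smaller than the per-switch host-route table (120,000 entries).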
> >
> > So if I have 100K switches I can not do any aggregation and need to
> > "route" 100K MAC addresses.
> >
> > [AD] I don't know how you came to that conclusion. Think of HMAC as an
> > IP address. Instead of 32 bits it is 46 bits. You route by prefixes in
> > L3, and you are routing by the same prefixes here. Just as you aggregate
> > IP, same way you aggregate MAC. It's not different.
> >
> > 6. Who provides me the mapping between switch mac and host/vm mac behind
> >
> > such switch ? Do switches proxy arp globally within your domain ?
> >
> > [AD] Variation of the same question. Above should answer it.
> >
> > Thx,
> > R.
> >
> >
> > > Robert,
> > >
> > >>> So you are advocating solution which is based on encapsulation - that is fine.
> > >
> > > No, I'm not. Did you read the draft I had mentioned?
> > > Hierarchical MAC is not encapsulation. It is one 48 bit address.
> > >
> > >>> However how could you ever arrive at the conclusion that HMACs would
> > >>> scale better then "anything we know". Well I don't know about you, but I
> > >>> know that the key to scaling is ability to aggregate. And it is not that
> > >>> huge mystery that MACs aggregate rather poorly while there are quite
> > >>> well deployed protocols (be it IPv4 or IPv6) which aggregate natively
> > >
> > > You are hitting the issue on the nail. So, read the draft I mentioned.
> > > Hierarchical MAC is higher bits "network prefix" and lower bits "host id".
> > > That's summarizable like IP address and aggregated.
> > > It has 46 bits to modify so larger than IPv4 internet.
> > >
> > > I won't comment on the rest, because you have made an assumption about
> > > encapsulation.
> > >
> > > I refer to this -
> > > http://tools.ietf.org/html/draft-dalela-dc-approaches-00.
> > >
> > > Thanks, Ashish
> > >
> > >
> > > -----Original Message-----
> > > From: Robert Raszuk [mailto:robert@raszuk.net]
> > > Sent: Tuesday, January 03, 2012 7:05 PM
> > > To: Ashish Dalela (adalela)
> > > Cc: Pedro Marques; dc@ietf.org
> > > Subject: Re: [dc] [armd] IP over IP solution for data center
> > > interconnect
> > >
> > > Ashish,
> > >
> > >> The issues of scale you mentioned don't exist in Hierarchical MACs,
> > >> which scale better than anything we know of.
> > >
> > > So you are advocating solution which is based on encapsulation - that is fine.
> > >
> > > However how could you ever arrive at the conclusion that HMACs would
> > > scale better then "anything we know". Well I don't know about you, but I
> > > know that the key to scaling is ability to aggregate. And it is not that
> > >
> > > huge mystery that MACs aggregate rather poorly while there are quite
> > > well deployed protocols (be it IPv4 or IPv6) which aggregate natively.
> > >
> > > For inter-dc this is IMHO a must. A must even if you build it using
> > > traditional routers or OF enabled switches - does not matter.
> > >
> > >> I don't want to split the requirements into multiple use-cases
> > >> because then this DC group will be many groups - one doing L2 and
> > >> another doing L3. That I think you will agree is not optimal for
> > >> anyone
> > >
> > > Why MAC-in-IP does not solve it for everyone ? And there are deployed
> > > solutions already ..
> > >
> > > IMHO what this group should accomplish is not to try to reinvent the
> > > world, but perhaps as example discuss where is the right boundary of
> > > encapsulation, how should we communicate between network and hosts, what
> > >
> > > kind of DC instrumentation should be IETF blessed for easy integration
> > > (ie min subset of functionality it should possess etc .... )
> > >
> > > R.
> > >
> > > _______________________________________________
> > > dc mailing list
> > > dc@ietf.org
> > > https://www.ietf.org/mailman/listinfo/dc
> > >
> > >
> >
> > _______________________________________________
> > dc mailing list
> > dc@ietf.org
> > https://www.ietf.org/mailman/listinfo/dc
>


From pedro.r.marques@gmail.com  Tue Jan  3 09:16:27 2012
Return-Path: <pedro.r.marques@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id A070B21F85BD for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 09:16:27 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.25
X-Spam-Level: 
X-Spam-Status: No, score=-3.25 tagged_above=-999 required=5 tests=[AWL=0.349,  BAYES_00=-2.599, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id ObIywXB4Ja8y for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 09:16:26 -0800 (PST)
Received: from mail-iy0-f172.google.com (mail-iy0-f172.google.com [209.85.210.172]) by ietfa.amsl.com (Postfix) with ESMTP id 8BEF921F8449 for <dc@ietf.org>; Tue,  3 Jan 2012 09:16:26 -0800 (PST)
Received: by iabz21 with SMTP id z21so9991265iab.31 for <dc@ietf.org>; Tue, 03 Jan 2012 09:16:23 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=UpY1QYLdp/8sVMBauVNt+g01sRmyDMb1FCn3Z8x/v3M=; b=FvWYE3rq+7eOINg9ur/AZer+SeU0k07XmJdohwSfQMLzaTfji8jz7Wst0Hdnjk1ya2 AYPRZmGSpzkuBlRoFiGPvfz5n6bSRk7VLjMzDvqiD/lk0L3AUU1ryKhe/CpG0P/9hEU4 21ByUG/wkbMDSBm5O47OIyTHZE3a1e+lSVxPw=
MIME-Version: 1.0
Received: by 10.50.15.161 with SMTP id y1mr74584019igc.4.1325610982972; Tue, 03 Jan 2012 09:16:22 -0800 (PST)
Received: by 10.231.60.193 with HTTP; Tue, 3 Jan 2012 09:16:22 -0800 (PST)
In-Reply-To: <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBA5@MX14A.corp.emc.com>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com> <618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com> <4EFC947A.4020007@riw.us> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com> <CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com> <CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com> <4F030418.1070202@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com> <4F031689.1050303@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBA5@MX14A.corp.emc.com>
Date: Tue, 3 Jan 2012 09:16:22 -0800
Message-ID: <CAMXVrt5g-2WCBvm0kcpFx27KG9kdPSBzeRJ20F-degvzaEhYoQ@mail.gmail.com>
From: Pedro Marques <pedro.r.marques@gmail.com>
To: david.black@emc.com
Content-Type: text/plain; charset=ISO-8859-1
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 17:16:27 -0000

That assumes that the MAC has relevance in the network. It is possible
to build solutions such that packets are forwarded based on their IP
addresses rather than their MACs.

  Pedro.

On Tue, Jan 3, 2012 at 8:41 AM,  <david.black@emc.com> wrote:
> Ashish,
>
>> > [AD] The higher bits identify a switch - it's a switch-id.
>
> That breaks VM migration across switches by forcing a MAC change.
>
> Thanks,
> --David
>
>> -----Original Message-----
>> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Ashish Dalela (adalela)
>> Sent: Tuesday, January 03, 2012 11:15 AM
>> To: robert@raszuk.net
>> Cc: Pedro Marques; dc@ietf.org
>> Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
>>
>> Robert,
>>
>> Please see inline.
>>
>> -----Original Message-----
>> From: Robert Raszuk [mailto:robert@raszuk.net]
>> Sent: Tuesday, January 03, 2012 8:24 PM
>> To: Ashish Dalela (adalela)
>> Cc: Pedro Marques; dc@ietf.org
>> Subject: Re: [dc] [armd] IP over IP solution for data center
>> interconnect
>>
>> Ashish,
>>
>> OK let's just discuss what is in your draft on Hierarchical Addressing.
>>
>> 1. You have 48 bits 32 go for host remaining 16 goes for switches. How
>> do you aggregate at the TOR or AGGR switch boundary ? Are you assuming
>> single HOST - SWITCH with max 65K flat macs ?
>>
>> [AD] The higher bits identify a switch - it's a switch-id. The hosts are
>> dynamically assigned a host-id under that switch. Let's assume 23 bits
>> are for switch-id and 23 bits for host-id. To forward a packet to the
>> host, you only have to look at the first 23 bits. That's a MAC prefix to
>> route against.
>>
>> [AD] You can have 2^23 switches in a network and 2^23 hosts under each
>> switch.
>>
>> 2. Can you deploy this on existing VMs and existing switches ?
>>
>> [AD] What do you mean by this? Any VM can be configured with any MAC.
>> Any physical host can be configured with any MAC on any logical
>> interface. Configuration standpoint this is possible. Forwarding
>> standpoint, that's another question.
>>
>> 3. What new protocol you envision to use to distribute those new MACs ?
>>
>> [AD] IS-IS extensions. It can be TRILL extensions.
>>
>> 4. What is the advantage of using this vs ILNP if we assume that hosts
>> should be modified ?
>>
>> [AD] I'm not familiar with the ILNP work, but I'm assuming you are
>> talking about Loc-Id separation. If not, correct me. If yes, each Loc-Id
>> binding can be a host route, with mobility. These host-routes are a
>> scaling problem. Traditional IP packet have IP as ID and MAC as LOC. We
>> are just extending this LOC to make it actually location aware rather
>> than a flat address which is fixed regardless of where the location is.
>>
>> 5. The proposal does not support aggregation .. even the draft says it
>> :)
>>
>> "The total number of hardware entries anywhere in the network equals the
>>
>> total number of switches and remains agnostic of VM mobility."
>>
>> [AD] For 1 million VM, and 25 VM per host, you need 40,000 hosts. With
>> 48 port access switches, you need 833 switches. That's the routing table
>> size for any switch in the datacenter - core, aggregation, access.
>> Contrast this with host-routes, if each VM talks to 100 VMs, then each
>> access switch needs 48 * 25 * 100 = 120,000 host routes. Just because
>> the network prefix is 23 bits does not mean we have to store 10^23
>> prefixes. We have to store only as many switches as there are in the
>> network. Ratio between VM : switch is 1000 : 1 (today, assuming 48 port
>> access and 20 VM per port). That means instead of storing host-routes
>> which will grow proportional to VM growth, we store switch-id, which
>> will grow at 1000 times slower rate. As VM density increases, this
>> growth rate is further slowed down. There are other techniques to
>> further reduce the rate of growth. But in any case, 1000 times slower is
>> a lot slow.
>>
>> So if I have 100K switches I can not do any aggregation and need to
>> "route" 100K MAC addresses.
>>
>> [AD] I don't know how you came to that conclusion. Think of HMAC as an
>> IP address. Instead of 32 bits it is 46 bits. You route by prefixes in
>> L3, and you are routing by the same prefixes here. Just as you aggregate
>> IP, same way you aggregate MAC. It's not different.
>>
>> 6. Who provides me the mapping between switch mac and host/vm mac behind
>>
>> such switch ? Do switches proxy arp globally within your domain ?
>>
>> [AD] Variation of the same question. Above should answer it.
>>
>> Thx,
>> R.
>>
>>
>> > Robert,
>> >
>> >>> So you are advocating solution which is based on encapsulation -
>> that
>> > is fine.
>> >
>> > No, I'm not. Did you read the draft I had mentioned?
>> > Hierarchical MAC is not encapsulation. It is one 48 bit address.
>> >
>> >>> However how could you ever arrive at the conclusion that HMACs would
>> >>> scale better then "anything we know". Well I don't know about you,
>> > but I
>> >>> know that the key to scaling is ability to aggregate. And it is not
>> > that
>> >>> huge mystery that MACs aggregate rather poorly while there are quite
>> >>> well deployed protocols (be it IPv4 or IPv6) which aggregate
>> natively
>> >
>> > You are hitting the issue on the nail. So, read the draft I mentioned.
>> > Hierarchical MAC is higher bits "network prefix" and lower bits "host
>> > id".
>> > That's summarizable like IP address and aggregated.
>> > It has 46 bits to modify so larger than IPv4 internet.
>> >
>> > I won't comment on the rest, because you have made an assumption about
>> > encapsulation.
>> >
>> > I refer to this -
>> > http://tools.ietf.org/html/draft-dalela-dc-approaches-00.
>> >
>> > Thanks, Ashish
>> >
>> >
>> > -----Original Message-----
>> > From: Robert Raszuk [mailto:robert@raszuk.net]
>> > Sent: Tuesday, January 03, 2012 7:05 PM
>> > To: Ashish Dalela (adalela)
>> > Cc: Pedro Marques; dc@ietf.org
>> > Subject: Re: [dc] [armd] IP over IP solution for data center
>> > interconnect
>> >
>> > Ashish,
>> >
>> >> The issues of scale you mentioned don't exist in Hierarchical MACs,
>> >> which scale better than anything we know of.
>> >
>> > So you are advocating solution which is based on encapsulation - that
>> is
>> > fine.
>> >
>> > However how could you ever arrive at the conclusion that HMACs would
>> > scale better then "anything we know". Well I don't know about you, but
>> I
>> > know that the key to scaling is ability to aggregate. And it is not
>> that
>> >
>> > huge mystery that MACs aggregate rather poorly while there are quite
>> > well deployed protocols (be it IPv4 or IPv6) which aggregate natively.
>> >
>> > For inter-dc this is IMHO a must. A must even if you build it using
>> > traditional routers or OF enabled switches - does not matter.
>> >
>> >> I don't want to split the requirements into multiple use-cases
>> >> because then this DC group will be many groups - one doing L2 and
>> >> another doing L3. That I think you will agree is not optimal for
>> >> anyone
>> >
>> > Why MAC-in-IP does not solve it for everyone ? And there are deployed
>> > solutions already ..
>> >
>> > IMHO what this group should accomplish is not to try to reinvent the
>> > world, but perhaps as example discuss where is the right boundary of
>> > encapsulation, how should we communicate between network and hosts,
>> what
>> >
>> > kind of DC instrumentation should be IETF blessed for easy integration
>> > (ie min subset of functionality it should possess etc .... )
>> >
>> > R.
>> >
>> > _______________________________________________
>> > dc mailing list
>> > dc@ietf.org
>> > https://www.ietf.org/mailman/listinfo/dc
>> >
>> >
>>
>> _______________________________________________
>> dc mailing list
>> dc@ietf.org
>> https://www.ietf.org/mailman/listinfo/dc
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc

From adalela@cisco.com  Tue Jan  3 09:19:37 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 4294411E808A for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 09:19:37 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.194
X-Spam-Level: 
X-Spam-Status: No, score=-2.194 tagged_above=-999 required=5 tests=[AWL=0.090,  BAYES_00=-2.599, SARE_MILLIONSOF=0.315]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id rtiSgQr--vXs for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 09:19:36 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id AFA4911E8072 for <dc@ietf.org>; Tue,  3 Jan 2012 09:19:35 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=4134; q=dns/txt; s=iport; t=1325611176; x=1326820776; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=sx/MoIzL6OxZSb7J1/SY2ZS2XD4DUDNth773uRNqd28=; b=Nt8ST7i9CgC9ustAcUUNdd/BgrnBHhQF0Z+/wr9/SucJxldPHbXwroMn Qq01ZP00JmKnQDE/MTgdlhsBn1BdWBQiVR6rV97MX+6scXcNXRtgXV9Gc tImcL4GnRwWcVehZ9drQXTcQ0ZMmin9QioTHq9Jbj3slLH38BAMw1iak5 c=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AjAPACs4A09Io8UY/2dsb2JhbABDggWrXoFyAQEBAwESAR0KPwUHBAIBCBEEAQELBhcBBgFFCQgBAQQLCAgah1iXWQGddossYwSINZ8J
X-IronPort-AV: E=Sophos;i="4.71,450,1320624000";  d="scan'208";a="2694877"
Received: from vla196-nat.cisco.com (HELO bgl-core-2.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 03 Jan 2012 17:19:34 +0000
Received: from xbh-bgl-412.cisco.com (xbh-bgl-412.cisco.com [72.163.129.202]) by bgl-core-2.cisco.com (8.14.3/8.14.3) with ESMTP id q03HJYaf017814; Tue, 3 Jan 2012 17:19:34 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-412.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Tue, 3 Jan 2012 22:49:34 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Tue, 3 Jan 2012 22:49:32 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B256C1@XMB-BGL-416.cisco.com>
In-Reply-To: <4F03331E.9020104@raszuk.net>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] new drafts
Thread-Index: AczKOJawTyvyh9HISFahPpQrRgN+mQAAmtTw
References: <618BE8B40039924EB9AED233D4A09C5102B2527A@XMB-BGL-416.cisco.com><D96F76EF-0011-4F33-A1CF-EC9AD12BA411@gmail.com><618BE8B40039924EB9AED233D4A09C5102B25310@XMB-BGL-416.cisco.com> <201201031510.q03FABS62810@magenta.juniper.net> <618BE8B40039924EB9AED233D4A09C5102B2569C@XMB-BGL-416.cisco.com> <4F032492.4030201@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B256AC@XMB-BGL-416.cisco.com> <4F03331E.9020104@raszuk.net>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: <robert@raszuk.net>
X-OriginalArrivalTime: 03 Jan 2012 17:19:34.0096 (UTC) FILETIME=[DCAABD00:01CCCA3B]
Cc: Yakov Rekhter <yakov@juniper.net>, Aldrin Isaac <aldrin.isaac@gmail.com>, dc@ietf.org
Subject: Re: [dc] new drafts
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 17:19:37 -0000

Robert,

I think you and I are talking past each other.

I'm talking about datacenter switching and not BGP, PE, etc. The context
is scaling of access switches, also called Top of Rack or ToR.
So, your comment about millions of routes and BGP assumes a core router,
not an access switch.

Reset context.

Thanks, Ashish


-----Original Message-----
From: Robert Raszuk [mailto:robert@raszuk.net]
Sent: Tuesday, January 03, 2012 10:26 PM
To: Ashish Dalela (adalela)
Cc: Yakov Rekhter; dc@ietf.org; Aldrin Isaac
Subject: Re: [dc] new drafts

Ashish,

> Assume a simple case that under a switch there are 250 VM, split amongst
> 10 customers. Each customer has a unique VRF. Normally, we would have
> advertized a /24 route for that switch. In this case your routes to a
> single switch are segmented and there are 10 VRFs, and you will very
> likely have 250 route table entries total segmented by VRF-ids. That's a
> routing table bloat from 1 entry to 250 entry. This happens everywhere.
> I have assumed a public IP addressing, but the same thing will happen
> for the private addressing as well.

Normally in this case I would have 10 routes + 1 route for switch
loopback, not 250. However when those VMs start moving between PEs
you are right that the worst case one could end up with would be 250
non-aggregatable routes per VRF.

First I don't think this is a problem scaling-wise today as I would not
assume that everyone will be moving.

Second we know today how to handle millions of routes in BGP.

Third I am not saying that this model should be used.

I am advocating that a hierarchical IP in IP model should be used. VRFs
on those PEs could be used for isolation purposes. And any VM move needs
to be only reflected in mapping plane and not in routing infrastructure
of the network.
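A toy sketch of that mapping-plane idea (all names and addresses hypothetical): a VM move touches exactly one mapping entry, while the routing table, which carries only stable PE loopbacks, never changes.

```python
# Routing infrastructure: one stable route per PE loopback.
routing = {"pe1": "10.0.0.1", "pe2": "10.0.0.2"}

# Mapping plane: which PE currently hosts each VM.
mapping = {"vm-a": "pe1"}

def locate(vm: str) -> str:
    """Resolve a VM to the tunnel endpoint (PE loopback) currently hosting it."""
    return routing[mapping[vm]]

assert locate("vm-a") == "10.0.0.1"
mapping["vm-a"] = "pe2"        # VM migrates: only the mapping entry changes
assert locate("vm-a") == "10.0.0.2"
assert routing == {"pe1": "10.0.0.1", "pe2": "10.0.0.2"}  # routing untouched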


> Then, typically the number of VRFs you can support on a router is about
> 4K. These # of VRFs have to be supported at the access, so you have to
> assume this is the limit from the access viewpoint.

Nope ... Control plane VRFs have no bounds for limits. 4K comes from
platform limitations.

Hint: Think about control plane and data plane separation. Pedro's draft
already provides an example on how such separation can be accomplished.

> Then, at massive scale, the failure rates are also massive. At 5 nines
> reliability, a hardware entity out of 100,000 will fail every 5.25
> minutes. Access switches don't have high availability. Software fails
> even faster - OS is generally 4 9's, which means one out of 10,000 fails
> every 5.25 minutes. At millions of instances of such entities, there are
> rapid failures happening. You have to only look at massive datacenters
> today run by Web 2.0 companies, and they all echo this view. They
> basically form clusters of the same application. Software moves the
> workload from one cluster to another. The whole cluster can fail over.
> That's not what you do in a consumer cloud, where you have to recover.
> At massive failure rates, and rapid recovery rates, you are moving
> things around and injecting host routes for reachability. It's a
> convergence problem, especially with link-state algorithms.

Not applicable to what I am advocating.


> If the VM can be moved, then all you need to do is install a temporary
> redirect of packets to the new location.

What is this redirection ? Where do you install it ? In all switching
elements of the network ? Redirection works when you encapsulate.
Without any encapsulation how do you redirect by just touching a single
network element ?

> Each host will refresh the MAC
> after 15-30 seconds. If the packets are redirected from old to new
> location for these 30 seconds, the redirect can be aged automatically.
> This happens all the time in mobile networks in what is called a "fast
> handoff" where you redirect the packets until handoff is completed.

Hmmmm interesting. We even have a draft which can be used for such
redirection today ... draft-rekhter-l3vpn-virtual-hub

Cheers,
R.
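The aged "fast handoff" redirect discussed above can be sketched as follows (a sketch under the thread's assumptions only: hosts refresh the MAC within ~15-30 seconds, so a redirect installed at the old location can simply expire; names and API are hypothetical):

```python
import time

REDIRECT_TTL = 30.0                 # handoff window, per the 15-30 s refresh
redirects = {}                      # old location -> (new location, expiry)

def install_redirect(old, new, now=None):
    """Install a temporary redirect when a VM moves from `old` to `new`."""
    now = time.monotonic() if now is None else now
    redirects[old] = (new, now + REDIRECT_TTL)

def forward(dest, now=None):
    """Return where traffic for `dest` should actually be sent."""
    now = time.monotonic() if now is None else now
    entry = redirects.get(dest)
    if entry and now < entry[1]:    # still within the handoff window
        return entry[0]
    redirects.pop(dest, None)       # aged out: entry removes itself
    return dest

install_redirect("tor-1", "tor-9", now=0.0)
assert forward("tor-1", now=10.0) == "tor-9"   # redirected during handoff
assert forward("tor-1", now=40.0) == "tor-1"   # entry aged out automatically
```

Only the single network element at the old location holds state, and that state is self-cleaning, which is the property being claimed for the mobile-network fast handoff.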

From adalela@cisco.com  Tue Jan  3 09:31:25 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 62A8621F859D for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 09:31:25 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.355
X-Spam-Level: 
X-Spam-Status: No, score=-2.355 tagged_above=-999 required=5 tests=[AWL=0.244,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id UeyNPMd23+Nb for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 09:31:24 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id F09A621F856B for <dc@ietf.org>; Tue,  3 Jan 2012 09:31:22 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=10217; q=dns/txt; s=iport; t=1325611883; x=1326821483; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=7JtqdjlNPI0kiyb+6DXJIJXGPwyMMUWjIjHfi+1hzkc=; b=T4bJL4QJoLnR0PHxehbSBIMb8G/5RFTfW5bTIV6wSI2Ess9f/mZSVrPj 2WziqYDvNlKbECMRtDLr9Ebf4yZ39a9WgeYz/x44tgLzqkEZX+9KsWIa+ JmfomG71nRlH9phCvGbAhUvPoZTtZ+MxkUY4QrHWWLalj5RSI07YaF5Zo k=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AjIPALs6A09Io8UY/2dsb2JhbAA5CoIFq16BcgEBAQMBAQEBDwEdCjQLDAQCAQgRBAEBCwYXAQYBJh8JCAEBBAsICAESB4dYCJdRAZ14iFaCVmMEiDWfCQ
X-IronPort-AV: E=Sophos;i="4.71,450,1320624000";  d="scan'208";a="2695125"
Received: from vla196-nat.cisco.com (HELO bgl-core-1.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 03 Jan 2012 17:31:21 +0000
Received: from xbh-bgl-412.cisco.com (xbh-bgl-412.cisco.com [72.163.129.202]) by bgl-core-1.cisco.com (8.14.3/8.14.3) with ESMTP id q03HVLCC012413; Tue, 3 Jan 2012 17:31:21 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-412.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Tue, 3 Jan 2012 23:01:20 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Tue, 3 Jan 2012 23:01:20 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B256CE@XMB-BGL-416.cisco.com>
In-Reply-To: <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBD6@MX14A.corp.emc.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczKJ4svx6tIhjxTSoCEVFtst/+CNAAB77UAAAHJgoAAAMUAQAAATmQAAACD1aA=
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us><7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com><618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com><CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com><CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com><4F030418.1070202@raszuk.net><618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com><4F031689.1050303@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBA5@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B256BA@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBD6@MX14A.corp.emc.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: <david.black@emc.com>
X-OriginalArrivalTime: 03 Jan 2012 17:31:20.0916 (UTC) FILETIME=[81F6E940:01CCCA3D]
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 17:31:25 -0000

>> IMHO, this thread is also well off-track, as you're advocating a
>> solution as opposed to coming up with a concise problem statement.

I had sent a problem statement earlier, but did not see any comments:
http://www.ietf.org/id/draft-dalela-dc-requirements-00.txt

Let's see which ones are solved, and which aren't.

>> As noted in my other follow-up email, things will break, and I'd
>> prefer to look at solutions that don't break things that currently
>> work.

Scale will break the network while it preserves the VM functionality.

Thanks, Ashish

-----Original Message-----
From: david.black@emc.com [mailto:david.black@emc.com]
Sent: Tuesday, January 03, 2012 10:45 PM
To: Ashish Dalela (adalela)
Cc: dc@ietf.org
Subject: RE: [dc] [armd] IP over IP solution for data center
interconnect

It's not compatible with widely deployed VM live migration mechanisms
that don't
change the MAC.  As noted in my other follow-up email, things will
break, and I'd
prefer to look at solutions that don't break things that currently work.

IMHO, this thread is also well off-track, as you're advocating a
solution as
opposed to coming up with a concise problem statement.

Thanks,
--David


> -----Original Message-----
> From: Ashish Dalela (adalela) [mailto:adalela@cisco.com]
> Sent: Tuesday, January 03, 2012 12:10 PM
> To: Black, David
> Cc: dc@ietf.org
> Subject: RE: [dc] [armd] IP over IP solution for data center
interconnect
>
> David,
>
> It doesn't have to. A VM will send a Grat ARP. It's informing the network
> of its MAC-address-to-IP binding. If you shut the interface before the
> move and unshut it after the move, and use the unshut to get a new MAC,
> it can be done. It's the implementation that we have to talk about. We
> are inventing so many host-based solutions - can't we discuss this
> change?
>
> Thanks, Ashish
>
>
> -----Original Message-----
> From: david.black@emc.com [mailto:david.black@emc.com]
> Sent: Tuesday, January 03, 2012 10:12 PM
> To: Ashish Dalela (adalela)
> Cc: dc@ietf.org
> Subject: RE: [dc] [armd] IP over IP solution for data center
> interconnect
>
> Ashish,
>
> > > [AD] The higher bits identify a switch - it's a switch-id.
>
> That breaks VM migration across switches by forcing a MAC change.
>
> Thanks,
> --David
>
> > -----Original Message-----
> > From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> Ashish Dalela (adalela)
> > Sent: Tuesday, January 03, 2012 11:15 AM
> > To: robert@raszuk.net
> > Cc: Pedro Marques; dc@ietf.org
> > Subject: Re: [dc] [armd] IP over IP solution for data center
> interconnect
> >
> > Robert,
> >
> > Please see inline.
> >
> > -----Original Message-----
> > From: Robert Raszuk [mailto:robert@raszuk.net]
> > Sent: Tuesday, January 03, 2012 8:24 PM
> > To: Ashish Dalela (adalela)
> > Cc: Pedro Marques; dc@ietf.org
> > Subject: Re: [dc] [armd] IP over IP solution for data center
> > interconnect
> >
> > Ashish,
> >
> > OK let's just discuss what is in your draft on Hierarchical
> > Addressing.
> >
> > 1. You have 48 bits, 32 go for hosts and the remaining 16 go for
> > switches. How do you aggregate at the TOR or AGGR switch boundary ?
> > Are you assuming a single HOST - SWITCH with max 65K flat MACs ?
> >
> > [AD] The higher bits identify a switch - it's a switch-id. The hosts
> > are dynamically assigned a host-id under that switch. Let's assume 23
> > bits are for switch-id and 23 bits for host-id. To forward a packet to
> > the host, you only have to look at the first 23 bits. That's a MAC
> > prefix to route against.
> >
> > [AD] You can have 2^23 switches in a network and 2^23 hosts under each
> > switch.
> >
> > 2. Can you deploy this on existing VMs and existing switches ?
> >
> > [AD] What do you mean by this? Any VM can be configured with any MAC.
> > Any physical host can be configured with any MAC on any logical
> > interface. From a configuration standpoint this is possible. From a
> > forwarding standpoint, that's another question.
> >
> > 3. What new protocol do you envision to use to distribute those new
> > MACs ?
> >
> > [AD] IS-IS extensions. It can be TRILL extensions.
> >
> > 4. What is the advantage of using this vs ILNP if we assume that
> > hosts should be modified ?
> >
> > [AD] I'm not familiar with the ILNP work, but I'm assuming you are
> > talking about Loc-Id separation. If not, correct me. If yes, each
> > Loc-Id binding can be a host route, with mobility. These host routes
> > are a scaling problem. Traditional IP packets have IP as the ID and
> > MAC as the LOC. We are just extending this LOC to make it actually
> > location aware rather than a flat address which is fixed regardless of
> > where the location is.
> >
> > 5. The proposal does not support aggregation .. even the draft says
> > it :)
> >
> > "The total number of hardware entries anywhere in the network equals
> > the total number of switches and remains agnostic of VM mobility."
> >
> > [AD] For 1 million VMs, and 25 VMs per host, you need 40,000 hosts.
> > With 48-port access switches, you need 833 switches. That's the
> > routing table size for any switch in the datacenter - core,
> > aggregation, access. Contrast this with host routes: if each VM talks
> > to 100 VMs, then each access switch needs 48 * 25 * 100 = 120,000 host
> > routes. Just because the network prefix is 23 bits does not mean we
> > have to store 2^23 prefixes. We have to store only as many switches as
> > there are in the network. The ratio between VM : switch is 1000 : 1
> > (today, assuming 48-port access and 20 VMs per port). That means
> > instead of storing host routes, which will grow proportionally to VM
> > growth, we store switch-ids, which will grow at a 1000 times slower
> > rate. As VM density increases, this growth rate is further slowed
> > down. There are other techniques to further reduce the rate of
> > growth. But in any case, 1000 times slower is a lot slower.
> >
> > So if I have 100K switches I can not do any aggregation and need to
> > "route" 100K MAC addresses.
> >
> > [AD] I don't know how you came to that conclusion. Think of HMAC as
> > an IP address. Instead of 32 bits it is 46 bits. You route by prefixes
> > in L3, and you are routing by the same prefixes here. Just as you
> > aggregate IP, the same way you aggregate MAC. It's not different.
> >
> > 6. Who provides me the mapping between switch mac and host/vm mac
> > behind such switch ? Do switches proxy arp globally within your
> > domain ?
> >
> > [AD] Variation of the same question. Above should answer it.
> >
> > Thx,
> > R.
> >
> >
> > > Robert,
> > >
> > >>> So you are advocating solution which is based on encapsulation -
> > >>> that is fine.
> > >
> > > No, I'm not. Did you read the draft I had mentioned?
> > > Hierarchical MAC is not encapsulation. It is one 48-bit address.
> > >
> > >>> However how could you ever arrive at the conclusion that HMACs
> > >>> would scale better than "anything we know". Well I don't know
> > >>> about you, but I know that the key to scaling is the ability to
> > >>> aggregate. And it is not that huge a mystery that MACs aggregate
> > >>> rather poorly while there are quite well deployed protocols (be
> > >>> it IPv4 or IPv6) which aggregate natively
> > >
> > > You are hitting the nail on the head. So, read the draft I
> > > mentioned. Hierarchical MAC is higher bits "network prefix" and
> > > lower bits "host id". That's summarizable like an IP address and
> > > can be aggregated. It has 46 bits to modify, so it is larger than
> > > the IPv4 internet.
> > >
> > > I won't comment on the rest, because you have made an assumption
> > > about encapsulation.
> > >
> > > I refer to this -
> > > http://tools.ietf.org/html/draft-dalela-dc-approaches-00.
> > >
> > > Thanks, Ashish
> > >
> > >
> > > -----Original Message-----
> > > From: Robert Raszuk [mailto:robert@raszuk.net]
> > > Sent: Tuesday, January 03, 2012 7:05 PM
> > > To: Ashish Dalela (adalela)
> > > Cc: Pedro Marques; dc@ietf.org
> > > Subject: Re: [dc] [armd] IP over IP solution for data center
> > > interconnect
> > >
> > > Ashish,
> > >
> > >> The issues of scale you mentioned don't exist in Hierarchical
> > >> MACs, which scale better than anything we know of.
> > >
> > > So you are advocating solution which is based on encapsulation -
> > > that is fine.
> > >
> > > However how could you ever arrive at the conclusion that HMACs
> > > would scale better than "anything we know". Well I don't know about
> > > you, but I know that the key to scaling is the ability to
> > > aggregate. And it is not that huge a mystery that MACs aggregate
> > > rather poorly while there are quite well deployed protocols (be it
> > > IPv4 or IPv6) which aggregate natively.
> > >
> > > For inter-dc this is IMHO a must. A must even if you build it using
> > > traditional routers or OF-enabled switches - does not matter.
> > >
> > >> I don't want to split the requirements into multiple use-cases
> > >> because then this DC group will be many groups - one doing L2 and
> > >> another doing L3. That I think you will agree is not optimal for
> > >> anyone
> > >
> > > Why does MAC-in-IP not solve it for everyone ? And there are
> > > deployed solutions already ..
> > >
> > > IMHO what this group should accomplish is not to try to reinvent
> > > the world, but perhaps as an example discuss where the right
> > > boundary of encapsulation is, how we should communicate between
> > > network and hosts, and what kind of DC instrumentation should be
> > > IETF blessed for easy integration (ie a min subset of functionality
> > > it should possess etc .... )
> > >
> > > R.
> > >
> > > _______________________________________________
> > > dc mailing list
> > > dc@ietf.org
> > > https://www.ietf.org/mailman/listinfo/dc
> > >
> > >
> >
> > _______________________________________________
> > dc mailing list
> > dc@ietf.org
> > https://www.ietf.org/mailman/listinfo/dc
>
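[Editorial note: the 23-bit switch-id / 23-bit host-id forwarding scheme debated in the quoted exchange above can be sketched in a few lines. This is an illustration of the idea only; the bit split, table contents, and function names are assumptions for the example, not text from any draft.]

```python
# Sketch of forwarding on a hierarchical MAC: the upper 23 bits name the
# switch, the lower 23 bits name a host under that switch. A core switch
# stores one entry per edge switch, never one per host/VM.

SWITCH_BITS = 23
HOST_BITS = 23

def switch_id(hmac: int) -> int:
    """Extract the 23-bit switch-id prefix from a 46-bit hierarchical MAC."""
    return hmac >> HOST_BITS

def host_id(hmac: int) -> int:
    """Extract the 23-bit host-id from a 46-bit hierarchical MAC."""
    return hmac & ((1 << HOST_BITS) - 1)

# A core forwarding table maps switch-id -> egress port; its size is
# bounded by the number of switches, not the number of VMs.
core_table = {0x000001: "port1", 0x000002: "port2"}

def forward(hmac: int) -> str:
    # Forwarding only examines the switch-id prefix, so VM moves under a
    # given switch never change this table.
    return core_table[switch_id(hmac)]

dst = (0x000002 << HOST_BITS) | 0x00001F   # host 0x1F under switch 2
assert forward(dst) == "port2"
assert host_id(dst) == 0x1F
```

The point of the sketch is only that prefix-based lookup over the switch-id behaves like L3 aggregation applied to L2 addresses, as argued in the thread.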


From pedro.r.marques@gmail.com  Tue Jan  3 09:31:50 2012
Return-Path: <pedro.r.marques@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 2619811E8072 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 09:31:50 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.32
X-Spam-Level: 
X-Spam-Status: No, score=-3.32 tagged_above=-999 required=5 tests=[AWL=0.279,  BAYES_00=-2.599, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id SH4HcrBcZ6Ds for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 09:31:49 -0800 (PST)
Received: from mail-iy0-f172.google.com (mail-iy0-f172.google.com [209.85.210.172]) by ietfa.amsl.com (Postfix) with ESMTP id 942BB21F85AB for <dc@ietf.org>; Tue,  3 Jan 2012 09:31:49 -0800 (PST)
Received: by iabz21 with SMTP id z21so10009970iab.31 for <dc@ietf.org>; Tue, 03 Jan 2012 09:31:48 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; bh=Nm7Rc4wagg5P0ph7M8BhJ27zg9yu9bttHBk99tC3Oos=; b=GzVWvZ0vg6VNNFc8PGvQ5M03IeJvSR2vZ6H62MCROhKjfGBD0qZRkdjivQ0dXp14Mk llVU7LfJtM13SVs5oahrNHQlU6NDNUuADpUYejYAV+Q93ACVFxynP3utKMYWnWCXpNmT xTeSZ4hB85THHXMI6b7Ytj7tNa/RB/HXEaVqU=
MIME-Version: 1.0
Received: by 10.42.131.136 with SMTP id z8mr37122591ics.5.1325611908807; Tue, 03 Jan 2012 09:31:48 -0800 (PST)
Received: by 10.231.60.193 with HTTP; Tue, 3 Jan 2012 09:31:48 -0800 (PST)
In-Reply-To: <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBD6@MX14A.corp.emc.com>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com> <618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com> <4EFC947A.4020007@riw.us> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com> <CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com> <CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com> <4F030418.1070202@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com> <4F031689.1050303@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBA5@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B256BA@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBD6@MX14A.corp.emc.com>
Date: Tue, 3 Jan 2012 09:31:48 -0800
Message-ID: <CAMXVrt4QnsbZODLy2b9FsOmfwK5h4vhyA4kqfw48DH+Qie9SoQ@mail.gmail.com>
From: Pedro Marques <pedro.r.marques@gmail.com>
To: david.black@emc.com
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 17:31:50 -0000

David,

On Tue, Jan 3, 2012 at 9:14 AM,  <david.black@emc.com> wrote:
> It's not compatible with widely deployed VM live migration mechanisms
> that don't change the MAC.  As noted in my other follow-up email,
> things will break, and I'd prefer to look at solutions that don't
> break things that currently work.

Whether the VMs change the MAC or not is not necessarily relevant
either. What is the service model to the VM? Is it an IEEE-compatible
LAN or an Ethernet (point-to-point) interface with the ability to
carry IP traffic? In the latter case the MAC is not relevant.

  Pedro.

From david.black@emc.com  Tue Jan  3 09:35:35 2012
Return-Path: <david.black@emc.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id DC04C5E8003 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 09:35:34 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.589
X-Spam-Level: 
X-Spam-Status: No, score=-106.589 tagged_above=-999 required=5 tests=[AWL=0.010, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id a4yn+FqTrm6J for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 09:35:34 -0800 (PST)
Received: from mexforward.lss.emc.com (mexforward.lss.emc.com [128.222.32.20]) by ietfa.amsl.com (Postfix) with ESMTP id 1F17E5E8002 for <dc@ietf.org>; Tue,  3 Jan 2012 09:35:33 -0800 (PST)
Received: from hop04-l1d11-si03.isus.emc.com (HOP04-L1D11-SI03.isus.emc.com [10.254.111.23]) by mexforward.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q03HZVxp013251 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Tue, 3 Jan 2012 12:35:32 -0500
Received: from mailhub.lss.emc.com (mailhub.lss.emc.com [10.254.221.253]) by hop04-l1d11-si03.isus.emc.com (RSA Interceptor); Tue, 3 Jan 2012 12:35:19 -0500
Received: from mxhub36.corp.emc.com (mxhub36.corp.emc.com [10.254.93.84]) by mailhub.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q03HZIVX009335; Tue, 3 Jan 2012 12:35:18 -0500
Received: from mx14a.corp.emc.com ([169.254.1.216]) by mxhub36.corp.emc.com ([::1]) with mapi; Tue, 3 Jan 2012 12:35:18 -0500
From: <david.black@emc.com>
To: <pedro.r.marques@gmail.com>
Date: Tue, 3 Jan 2012 12:35:17 -0500
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczKPZ/xRVifuccASOKcILXvZIWu6wAAEW6Q
Message-ID: <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBE3@MX14A.corp.emc.com>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com> <618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com> <4EFC947A.4020007@riw.us> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com> <CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com> <CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com> <4F030418.1070202@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com> <4F031689.1050303@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBA5@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B256BA@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBD6@MX14A.corp.emc.com> <CAMXVrt4QnsbZODLy2b9FsOmfwK5h4vhyA4kqfw48DH+Qie9SoQ@mail.gmail.com>
In-Reply-To: <CAMXVrt4QnsbZODLy2b9FsOmfwK5h4vhyA4kqfw48DH+Qie9SoQ@mail.gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-EMM-MHVC: 1
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 17:35:35 -0000

> Whether the VMs change the MAC or not is not necessarily relevant
> either. What is the service model to the VM? Is it an IEEE-compatible
> LAN or an Ethernet (point-to-point) interface with the ability to
> carry IP traffic? In the latter case the MAC is not relevant.

The widely deployed service model for VMs that I'm thinking of is
an IEEE-compatible LAN (L2 network service, for which the MAC matters).

Thanks,
--David


> -----Original Message-----
> From: Pedro Marques [mailto:pedro.r.marques@gmail.com]
> Sent: Tuesday, January 03, 2012 12:32 PM
> To: Black, David
> Cc: dc@ietf.org
> Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
>
> David,
>
> On Tue, Jan 3, 2012 at 9:14 AM,  <david.black@emc.com> wrote:
> > It's not compatible with widely deployed VM live migration mechanisms
> > that don't change the MAC.  As noted in my other follow-up email,
> > things will break, and I'd prefer to look at solutions that don't
> > break things that currently work.
>
> Whether the VMs change the MAC or not is not necessarily relevant
> either. What is the service model to the VM? Is it an IEEE-compatible
> LAN or an Ethernet (point-to-point) interface with the ability to
> carry IP traffic? In the latter case the MAC is not relevant.
>
>   Pedro.


From pedro.r.marques@gmail.com  Tue Jan  3 09:42:38 2012
Return-Path: <pedro.r.marques@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id BAC2D11E808C for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 09:42:38 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.367
X-Spam-Level: 
X-Spam-Status: No, score=-3.367 tagged_above=-999 required=5 tests=[AWL=0.232,  BAYES_00=-2.599, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id v9ACQiTKnMV8 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 09:42:38 -0800 (PST)
Received: from mail-iy0-f172.google.com (mail-iy0-f172.google.com [209.85.210.172]) by ietfa.amsl.com (Postfix) with ESMTP id CA46C11E8087 for <dc@ietf.org>; Tue,  3 Jan 2012 09:42:37 -0800 (PST)
Received: by iabz21 with SMTP id z21so10022711iab.31 for <dc@ietf.org>; Tue, 03 Jan 2012 09:42:37 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; bh=UbaSGXaUoo5tqL8Y7Coa3AvT9UguIO1ckVsIiNuSYXs=; b=UPMdCR1KvZyGTCN8vYORcOIL6Whn1oB9ZUwv4yIHJsTLH5nGM4HnRcraXzU92zy6uZ DfSBeKHhAWRcWUypoIf5EM+gLJgWM/zqj81h6AuRA4lpxNr0LcORGtdO+uVyJyBV/roi /AOEdpw+8j9fRmYeRMrXRK1iYmqYh0eDjl5FU=
MIME-Version: 1.0
Received: by 10.43.54.10 with SMTP id vs10mr7719570icb.13.1325612557499; Tue, 03 Jan 2012 09:42:37 -0800 (PST)
Received: by 10.231.60.193 with HTTP; Tue, 3 Jan 2012 09:42:37 -0800 (PST)
In-Reply-To: <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBE3@MX14A.corp.emc.com>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com> <618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com> <4EFC947A.4020007@riw.us> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com> <CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com> <CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com> <4F030418.1070202@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com> <4F031689.1050303@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBA5@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B256BA@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBD6@MX14A.corp.emc.com> <CAMXVrt4QnsbZODLy2b9FsOmfwK5h4vhyA4kqfw48DH+Qie9SoQ@mail.gmail.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBE3@MX14A.corp.emc.com>
Date: Tue, 3 Jan 2012 09:42:37 -0800
Message-ID: <CAMXVrt55+sEU+nN9a48ohNyHZMx=q9nv5FCGgm8uN1T8LFgSgQ@mail.gmail.com>
From: Pedro Marques <pedro.r.marques@gmail.com>
To: david.black@emc.com
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 17:42:38 -0000

David,
I propose the following as a problem statement. The IETF goal should
be to standardize:
   1) an IEEE-compatible LAN service (which implies broadcast support).
   2) an IP unicast service.

Both services should be able to run on top of an IP infrastructure
network as an overlay. Progress on both service models should be made
independently.

  Pedro.

On Tue, Jan 3, 2012 at 9:35 AM,  <david.black@emc.com> wrote:
>> Whether the VMs change the MAC or not is not necessarily relevant
>> either. What is the service model to the VM? Is it an IEEE-compatible
>> LAN or an Ethernet (point-to-point) interface with the ability to
>> carry IP traffic? In the latter case the MAC is not relevant.
>
> The widely deployed service model for VMs that I'm thinking of is
> an IEEE-compatible LAN (L2 network service, for which the MAC matters).
>
> Thanks,
> --David
>
>
>> -----Original Message-----
>> From: Pedro Marques [mailto:pedro.r.marques@gmail.com]
>> Sent: Tuesday, January 03, 2012 12:32 PM
>> To: Black, David
>> Cc: dc@ietf.org
>> Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
>>
>> David,
>>
>> On Tue, Jan 3, 2012 at 9:14 AM,  <david.black@emc.com> wrote:
>> > It's not compatible with widely deployed VM live migration mechanisms
>> > that don't change the MAC.  As noted in my other follow-up email,
>> > things will break, and I'd prefer to look at solutions that don't
>> > break things that currently work.
>>
>> Whether the VMs change the MAC or not is not necessarily relevant
>> either. What is the service model to the VM? Is it an IEEE-compatible
>> LAN or an Ethernet (point-to-point) interface with the ability to
>> carry IP traffic? In the latter case the MAC is not relevant.
>>
>>   Pedro.
>

From adalela@cisco.com  Tue Jan  3 09:45:24 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 800B511E8087 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 09:45:24 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.365
X-Spam-Level: 
X-Spam-Status: No, score=-2.365 tagged_above=-999 required=5 tests=[AWL=0.234,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id r-p8Ih1QIEaI for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 09:45:23 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id 38F4211E8086 for <dc@ietf.org>; Tue,  3 Jan 2012 09:45:21 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=9450; q=dns/txt; s=iport; t=1325612722; x=1326822322; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=ncZ7ClnaWk7G8mmiKdK8lUMgsW2KrU/ghtXNX8u6aMY=; b=RnxrADH9DEVWRUPdvqQ98ZkLCA9lb0WkAi9hW/PCVxWM1RJYZ7XyxItV xBKYu92e8j9VtJ+2ZugtdEzPmHdzm3EqaDgLF0+qnTsf2p3rVLSiLgYuy ilCLJO82aKN4IjEKjxrkuhP27gZcq8rYkmiKf2tBugSh/Pg4UiHw/3FLH Q=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AjwIAAE+A09Io8UY/2dsb2JhbAA5CoIFq16BcgEBAQQBAQEPAR0KNAsMBAIBCBEEAQEBCgYXAQYBJh8JCAEBBAEKCAgTB4dgl1EBnXSIVoJWYwSINZ8J
X-IronPort-AV: E=Sophos;i="4.71,451,1320624000";  d="scan'208";a="2695479"
Received: from vla196-nat.cisco.com (HELO bgl-core-2.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 03 Jan 2012 17:45:20 +0000
Received: from xbh-bgl-411.cisco.com (xbh-bgl-411.cisco.com [72.163.129.201]) by bgl-core-2.cisco.com (8.14.3/8.14.3) with ESMTP id q03HjKKG021280; Tue, 3 Jan 2012 17:45:20 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-411.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Tue, 3 Jan 2012 23:15:20 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Tue, 3 Jan 2012 23:15:25 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B256D4@XMB-BGL-416.cisco.com>
In-Reply-To: <CAMXVrt5g-2WCBvm0kcpFx27KG9kdPSBzeRJ20F-degvzaEhYoQ@mail.gmail.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczKO3IzHPqB5LdOQIa6NM7HRu4VnAAAlmcg
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us><7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com><618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com><CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com><CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com><4F030418.1070202@raszuk.net><618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com><4F031689.1050303@raszuk.net><618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com><7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBA5@MX14A.corp.emc.com> <CAMXVrt5g-2WCBvm0kcpFx27KG9kdPSBzeRJ20F-degvzaEhYoQ@mail.gmail.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Pedro Marques" <pedro.r.marques@gmail.com>, <david.black@emc.com>
X-OriginalArrivalTime: 03 Jan 2012 17:45:20.0464 (UTC) FILETIME=[765FC500:01CCCA3F]
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 17:45:24 -0000

Suppose you have an IP solution.

To support mobility you need IP-in-IP encapsulation.

As VM density increases, as VM-to-VM conversations grow, and as
interfaces per VM increase, the host routes increase.

These host routes are in addition to network routes, local host-port
bindings, ACLs, etc. That means in addition to everything that existed
so far.

Eventually, you hit a limit at the access, and you have to reduce the
size of the network, reduce VM mobility, reduce VM density per server,
and reduce application spread.

The alternative is to constantly increase network hardware table sizes
at the access, which increases costs and energy.

We have to realize that IP encapsulations put network and compute on
opposite sides of the cost trend. Compute cost reduces slowly as size
grows. Network cost grows rapidly as size grows.
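[Editorial note: the table-growth argument above rests on arithmetic given earlier in the thread. A back-of-the-envelope sketch using the thread's own example numbers (1 million VMs, 25 VMs per host, 48-port access switches, each VM talking to 100 peers); all inputs are the thread's figures, not measurements.]

```python
# Rough arithmetic behind the host-route vs. switch-id comparison made
# earlier in the thread. All inputs are the thread's example numbers.

vms = 1_000_000
vms_per_host = 25
ports_per_switch = 48
peers_per_vm = 100

hosts = vms // vms_per_host            # physical servers needed -> 40,000
switches = hosts // ports_per_switch   # 48-port access switches -> 833

# Host-route model: an access switch holds one route per remote VM that
# any locally attached VM converses with.
host_routes_per_access_switch = ports_per_switch * vms_per_host * peers_per_vm

# Switch-id model: every switch holds one entry per switch in the
# network, independent of VM count and VM mobility.
switch_routes = switches

print(hosts, switches, host_routes_per_access_switch, switch_routes)
```

Under these assumptions the host-route model needs 120,000 entries at each access switch, versus 833 switch-id entries everywhere, which is the roughly three-orders-of-magnitude gap the thread argues about.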

Thanks,
Ashish


-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
Pedro Marques
Sent: Tuesday, January 03, 2012 10:46 PM
To: david.black@emc.com
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center
interconnect

That assumes that the MAC has relevance in the network. It is possible
to build solutions such that packets are forwarded based on their IP
addresses rather than their MACs.

  Pedro.

On Tue, Jan 3, 2012 at 8:41 AM,  <david.black@emc.com> wrote:
> Ashish,
>
>> > [AD] The higher bits identify a switch - it's a switch-id.
>
> That breaks VM migration across switches by forcing a MAC change.
>
> Thanks,
> --David
>
>> -----Original Message-----
>> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
Ashish Dalela (adalela)
>> Sent: Tuesday, January 03, 2012 11:15 AM
>> To: robert@raszuk.net
>> Cc: Pedro Marques; dc@ietf.org
>> Subject: Re: [dc] [armd] IP over IP solution for data center
interconnect
>>
>> Robert,
>>
>> Please see inline.
>>
>> -----Original Message-----
>> From: Robert Raszuk [mailto:robert@raszuk.net]
>> Sent: Tuesday, January 03, 2012 8:24 PM
>> To: Ashish Dalela (adalela)
>> Cc: Pedro Marques; dc@ietf.org
>> Subject: Re: [dc] [armd] IP over IP solution for data center
>> interconnect
>>
>> Ashish,
>>
>> OK, let's just discuss what is in your draft on Hierarchical
>> Addressing.
>>
>> 1. You have 48 bits: 32 go for hosts and the remaining 16 go for
>> switches. How do you aggregate at the TOR or AGGR switch boundary?
>> Are you assuming a single HOST - SWITCH with max 65K flat MACs?
>>
>> [AD] The higher bits identify a switch - it's a switch-id. The hosts
>> are dynamically assigned a host-id under that switch. Let's assume 23
>> bits are for the switch-id and 23 bits for the host-id. To forward a
>> packet to the host, you only have to look at the first 23 bits.
>> That's a MAC prefix to route against.
>>
>> [AD] You can have 2^23 switches in a network and 2^23 hosts under
>> each switch.
>>
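[The 23/23 split described above can be sketched in a few lines. This is
a toy model of the stated assumption (23 switch-id bits over 23 host-id
bits); the constant and function names are invented for illustration and
are not from the draft.]

```python
# Toy sketch of the hierarchical MAC split: high bits name a switch
# (a routable "MAC prefix"), low bits name a host under that switch.

SWITCH_BITS = 23   # assumed high-bit "network prefix" (switch-id)
HOST_BITS = 23     # assumed low-bit host-id under the switch

def make_hmac(switch_id, host_id):
    """Compose a 46-bit hierarchical MAC from a switch-id and host-id."""
    assert 0 <= switch_id < 2 ** SWITCH_BITS
    assert 0 <= host_id < 2 ** HOST_BITS
    return (switch_id << HOST_BITS) | host_id

def switch_prefix(hmac):
    """Forwarding looks only at the high 23 bits -- the switch prefix."""
    return hmac >> HOST_BITS

# Two hosts under the same switch share a prefix, so one table entry
# (the switch-id) covers both -- that is the aggregation argument.
a = make_hmac(switch_id=0x1A4, host_id=0x02B)
b = make_hmac(switch_id=0x1A4, host_id=0x02C)
assert switch_prefix(a) == switch_prefix(b) == 0x1A4
```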
>> 2. Can you deploy this on existing VMs and existing switches?
>>
>> [AD] What do you mean by this? Any VM can be configured with any MAC.
>> Any physical host can be configured with any MAC on any logical
>> interface. From a configuration standpoint this is possible. From a
>> forwarding standpoint, that's another question.
>>
>> 3. What new protocol do you envision using to distribute those new
>> MACs?
>>
>> [AD] IS-IS extensions. It can be TRILL extensions.
>>
>> 4. What is the advantage of using this vs ILNP, if we assume that
>> hosts should be modified?
>>
>> [AD] I'm not familiar with the ILNP work, but I'm assuming you are
>> talking about Loc-Id separation. If not, correct me. If yes, each
>> Loc-Id binding can be a host route, with mobility. These host routes
>> are a scaling problem. Traditional IP packets have IP as the ID and
>> MAC as the LOC. We are just extending this LOC to make it actually
>> location-aware rather than a flat address which is fixed regardless
>> of where the location is.
>>
>> 5. The proposal does not support aggregation .. even the draft says
>> it :)
>>
>> "The total number of hardware entries anywhere in the network equals
>> the total number of switches and remains agnostic of VM mobility."
>>
>> [AD] For 1 million VMs and 25 VMs per host, you need 40,000 hosts.
>> With 48-port access switches, you need 833 switches. That's the
>> routing table size for any switch in the datacenter - core,
>> aggregation, access. Contrast this with host routes: if each VM talks
>> to 100 VMs, then each access switch needs 48 * 25 * 100 = 120,000
>> host routes. Just because the network prefix is 23 bits does not mean
>> we have to store 2^23 prefixes. We have to store only as many
>> switches as there are in the network. The ratio between VM : switch
>> is 1000 : 1 (today, assuming 48-port access and 20 VMs per port).
>> That means instead of storing host routes, which will grow
>> proportional to VM growth, we store switch-ids, which will grow at a
>> 1000 times slower rate. As VM density increases, this growth rate is
>> further slowed down. There are other techniques to further reduce the
>> rate of growth. But in any case, 1000 times slower is a lot slower.
>>
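[The arithmetic above is easy to check mechanically. This sketch uses
only the numbers stated in the mail; note the 833 figure corresponds to
rounding 40,000 / 48 down.]

```python
# Back-of-the-envelope check of the table sizes quoted above:
# 1M VMs, 25 VMs per host, 48-port access switches, and each VM
# conversing with 100 other VMs.

vms = 1_000_000
vms_per_host = 25
ports_per_switch = 48
peers_per_vm = 100

hosts = vms // vms_per_host            # 40,000 physical hosts
switches = hosts // ports_per_switch   # 833 access switches (rounded down)

# Switch-id routing: every switch carries one entry per switch.
switch_table = switches

# Host-route alternative: each access switch carries an entry for every
# remote VM that its locally attached VMs talk to.
host_route_table = ports_per_switch * vms_per_host * peers_per_vm

assert hosts == 40_000
assert switches == 833
assert host_route_table == 120_000
```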
>> So if I have 100K switches, I cannot do any aggregation and need to
>> "route" 100K MAC addresses.
>>
>> [AD] I don't know how you came to that conclusion. Think of the HMAC
>> as an IP address. Instead of 32 bits it is 46 bits. You route by
>> prefixes in L3, and you are routing by the same prefixes here. Just
>> as you aggregate IP, the same way you aggregate MACs. It's not
>> different.
>>
>> 6. Who provides me the mapping between the switch MAC and the
>> host/VM MAC behind such a switch? Do switches proxy-ARP globally
>> within your domain?
>>
>> [AD] Variation of the same question. The above should answer it.
>>
>> Thx,
>> R.
>>
>>
>> > Robert,
>> >
>> >>> So you are advocating a solution which is based on encapsulation
>> >>> - that is fine.
>> >
>> > No, I'm not. Did you read the draft I had mentioned?
>> > Hierarchical MAC is not encapsulation. It is one 48 bit address.
>> >
>> >>> However how could you ever arrive at the conclusion that HMACs
>> >>> would scale better than "anything we know"? Well I don't know
>> >>> about you, but I know that the key to scaling is the ability to
>> >>> aggregate. And it is not that huge a mystery that MACs aggregate
>> >>> rather poorly while there are quite well-deployed protocols (be
>> >>> it IPv4 or IPv6) which aggregate natively
>> >
>> > You are hitting the nail on the head. So, read the draft I
>> > mentioned. Hierarchical MAC is higher bits "network prefix" and
>> > lower bits "host id". That is summarizable like an IP address and
>> > can be aggregated. It has 46 bits to modify, so it is larger than
>> > the IPv4 internet.
>> >
>> > I won't comment on the rest, because you have made an assumption
>> > about encapsulation.
>> >
>> > I refer to this -
>> > http://tools.ietf.org/html/draft-dalela-dc-approaches-00.
>> >
>> > Thanks, Ashish
>> >
>> >
>> > -----Original Message-----
>> > From: Robert Raszuk [mailto:robert@raszuk.net]
>> > Sent: Tuesday, January 03, 2012 7:05 PM
>> > To: Ashish Dalela (adalela)
>> > Cc: Pedro Marques; dc@ietf.org
>> > Subject: Re: [dc] [armd] IP over IP solution for data center
>> > interconnect
>> >
>> > Ashish,
>> >
>> >> The issues of scale you mentioned don't exist in Hierarchical
>> >> MACs, which scale better than anything we know of.
>> >
>> > So you are advocating a solution which is based on encapsulation -
>> > that is fine.
>> >
>> > However how could you ever arrive at the conclusion that HMACs
>> > would scale better than "anything we know"? Well I don't know about
>> > you, but I know that the key to scaling is the ability to
>> > aggregate. And it is not that huge a mystery that MACs aggregate
>> > rather poorly while there are quite well-deployed protocols (be it
>> > IPv4 or IPv6) which aggregate natively.
>> >
>> > For inter-DC this is IMHO a must. A must even if you build it
>> > using traditional routers or OF-enabled switches - it does not
>> > matter.
>> >
>> >> I don't want to split the requirements into multiple use-cases
>> >> because then this DC group will be many groups - one doing L2 and
>> >> another doing L3. That I think you will agree is not optimal for
>> >> anyone
>> >
>> > Why does MAC-in-IP not solve it for everyone? And there are
>> > deployed solutions already ..
>> >
>> > IMHO what this group should accomplish is not to try to reinvent
>> > the world, but perhaps, as an example, to discuss where the right
>> > boundary of encapsulation is, how we should communicate between
>> > network and hosts, and what kind of DC instrumentation should be
>> > IETF-blessed for easy integration (i.e. the minimum subset of
>> > functionality it should possess, etc.)
>> >
>> > R.
>> >
_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From pedro.r.marques@gmail.com  Tue Jan  3 09:59:37 2012
Return-Path: <pedro.r.marques@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 87B8321F84C7 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 09:59:37 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.4
X-Spam-Level: 
X-Spam-Status: No, score=-3.4 tagged_above=-999 required=5 tests=[AWL=0.199, BAYES_00=-2.599, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id WZz1DmlmiBB1 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 09:59:37 -0800 (PST)
Received: from mail-iy0-f172.google.com (mail-iy0-f172.google.com [209.85.210.172]) by ietfa.amsl.com (Postfix) with ESMTP id 0D4F521F84C2 for <dc@ietf.org>; Tue,  3 Jan 2012 09:59:36 -0800 (PST)
Received: by iabz21 with SMTP id z21so10044260iab.31 for <dc@ietf.org>; Tue, 03 Jan 2012 09:59:33 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=q37zO7KFeNly157YsgBzdv0u2qkv5ZuXrwCJYkZIYk0=; b=qDNRpb7+O9i2SbV7J0Y3zl5z+zT3GPJ6Tw0kaao9XVgWjLb+xO9/6z1TGQGPHrfzcq N1kc+t+kD2mHwlFRgbfpzOV7ijUPgpZfrr1sGD9x+cmWJoq748OdZjxofFvvtsavSTan /2HSmLOewwejIu3hGPfm1i883ktg9pgiTBrxU=
MIME-Version: 1.0
Received: by 10.50.15.161 with SMTP id y1mr74777243igc.4.1325613573108; Tue, 03 Jan 2012 09:59:33 -0800 (PST)
Received: by 10.231.60.193 with HTTP; Tue, 3 Jan 2012 09:59:32 -0800 (PST)
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B256D4@XMB-BGL-416.cisco.com>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com> <618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com> <4EFC947A.4020007@riw.us> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com> <CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com> <CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com> <4F030418.1070202@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com> <4F031689.1050303@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBA5@MX14A.corp.emc.com> <CAMXVrt5g-2WCBvm0kcpFx27KG9kdPSBzeRJ20F-degvzaEhYoQ@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B256D4@XMB-BGL-416.cisco.com>
Date: Tue, 3 Jan 2012 09:59:32 -0800
Message-ID: <CAMXVrt5uMe5BT1Ae4J+QgEBjuafWcc+iCPaOam79UYvQ8OF+0Q@mail.gmail.com>
From: Pedro Marques <pedro.r.marques@gmail.com>
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
Content-Type: text/plain; charset=ISO-8859-1
Cc: david.black@emc.com, dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 17:59:37 -0000

Ashish,

On Tue, Jan 3, 2012 at 9:45 AM, Ashish Dalela (adalela)
<adalela@cisco.com> wrote:
>
> Suppose you have an IP solution.
>
> To support mobility you need IP-in-IP encapsulation.
>
> As VM density increases, as VM-to-VM conversation grows, as interfaces
> per VM increase, the host routes increase.
>
> These host routes are in addition to network routes, local host-port
> bindings, ACLs, etc. That means in addition to everything that existed
> so far.

Whether you call the information "host routes" or "mac addresses" does
not change the fact that you have identities associated with the
end-points (VMs).

>
> Eventually, you hit a limit on the access, and you have to reduce size
> of network, reduce VM mobility, reduce VM density per server, reduce
> application spread.

I'm sorry, but that was just a huge leap in the reasoning... I'm not
able to understand how you reached these conclusions.

>
> The alternative is to constantly increase network hardware table sizes
> at access, which increases costs and energy.
>
> We have to realize that IP encapsulations put network and compute at
> opposite sides of the cost trend. Compute cost reduces slowly as size
> grows. Network cost grows rapidly as size grows.

When you refer to "IP encapsulations" in the sentence above, do you
mean X on top of IP, or IP on top of X?

Either way the claim seems to be rather unsubstantiated.

  Pedro.

From cdl@asgaard.org  Tue Jan  3 10:57:59 2012
Return-Path: <cdl@asgaard.org>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 2F74B5E8020 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 10:57:59 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -6.24
X-Spam-Level: 
X-Spam-Status: No, score=-6.24 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, DATE_IN_PAST_03_06=0.044, RCVD_IN_DNSWL_MED=-4, SARE_MILLIONSOF=0.315]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id o99dVQTQVYLt for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 10:57:58 -0800 (PST)
Received: from asgaard.org (odin.asgaard.org [204.29.151.68]) by ietfa.amsl.com (Postfix) with ESMTP id 1938E5E8023 for <dc@ietf.org>; Tue,  3 Jan 2012 10:57:57 -0800 (PST)
Received: from localhost (localhost [127.0.0.1]) by asgaard.org (Postfix) with ESMTP id 00F31A6B29B; Tue,  3 Jan 2012 18:57:56 +0000 (UTC)
X-Virus-Scanned: amavisd-new at asgaard.org
Received: from asgaard.org ([127.0.0.1]) by localhost (odin.asgaard.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id ahDkHjym3+z2; Tue,  3 Jan 2012 18:57:56 +0000 (UTC)
Received: from [172.20.10.2] (mobile-166-147-098-245.mycingular.net [166.147.98.245]) by asgaard.org (Postfix) with ESMTPSA id 82835A6B270; Tue,  3 Jan 2012 18:57:11 +0000 (UTC)
Mime-Version: 1.0 (Apple Message framework v1251.1)
Content-Type: text/plain; charset=utf-8
From: Christopher LILJENSTOLPE <cdl@asgaard.org>
In-Reply-To: <CANavrTM1sBUdjP-+48b7z1W6FHXFPL48Pm3hGdp91iMZnH5A1w@mail.gmail.com>
Date: Tue, 3 Jan 2012 07:53:25 -0800
Content-Transfer-Encoding: quoted-printable
Message-Id: <AE704987-B775-462D-A4F7-B8A644346CE9@asgaard.org>
References: <13205C286662DE4387D9AF3AC30EF456D74E7BD952@EMBX01-WF.jnpr.net> <6665BC1FEA04AB47B1F75FA641C43BC0926D106C@FHDP1LUMXC7V41.us.one.verizon.com> <201112211830.pBLIUiZA017188@cichlid.raleigh.ibm.com> <13205C286662DE4387D9AF3AC30EF456D74E91DBFD@EMBX01-WF.jnpr.net> <618BE8B40039924EB9AED233D4A09C5102A4881E@XMB-BGL-416.cisco.com> <4EF7B019.3030202@riw.us> <201112281700.pBSH0kB2011575@cichlid.raleigh.ibm.com> <4EFC826C.80708@riw.us> <682C5C0D-10FD-49D7-BF48-28EB6EFBA72B@asgaard.org> <618BE8B40039924EB9AED233D4A09C5102B2533C@XMB-BGL-416.cisco.com> <CANavrTM1sBUdjP-+48b7z1W6FHXFPL48Pm3hGdp91iMZnH5A1w@mail.gmail.com>
To: Derick Winkworth <ccie15672@gmail.com>
X-Mailer: Apple Mail (2.1251.1)
Cc: Russ White <russw@riw.us>, "Ashish Dalela \(adalela\)" <adalela@cisco.com>, dc@ietf.org
Subject: Re: [dc] Elevator Pitch
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 18:57:59 -0000

On 31Dec2011, at 06.42, Derick Winkworth wrote:

> 100,000 VMs is how many physical servers?  At some point you have to
> eat the cost of putting the right gear in.  You can't drive down to
> Best Buy to buy your switches if you think you'll have 100k host route
> entries (or MAC addresses) in your access switches.
>
> Russ is right, there are multiple platforms/vendors now supporting
> millions of routes/MAC addresses. Maybe what you thought of as a
> distribution or core device is now an end-of-row "access" device?
>
> Which kind of leads me to a point I wanted to make with regard to some
> of the lists of "requirements" I've seen.  Some of these things should
> be considered a network design problem, not an IETF protocol problem.
> Specifically, I'm thinking of hyperscale requirements...

I agree - there is a constellation of potential problem spaces, within
that is a subset of real operational problems, and within that is a
smaller subset of problems that the IETF should address. The larger
problem space is, among other things, a set of network and system
design problems, including scale, performance, economics, etc.
>=20
>=20
>=20
> On Sat, Dec 31, 2011 at 8:32 AM, Ashish Dalela (adalela)
> <adalela@cisco.com> wrote:
>>=20
>> 100K or whatever that number is, is a big deal at the access. And
>> that number will increase with time. The thing to bear in mind is
>> that these are ADDITIONAL entries, not in lieu of other entries.
>> E.g. network routes, local host-port entries, ACLs, etc. exist today
>> and will continue to exist. Host routes are in addition to that.
>> Access switches generally had about 16K or 32K table sizes. And we
>> are talking about adding 3-4 times more to that.
>>=20
>> The other thing is that the access to core ratio is about 1000:1.
>> So, we need to multiply the incremental costs by 1000 to get the
>> full impact on TCO.
>>=20
>> In regard to scaling, there are 4 places to look at:
>>=20
>> 1. Access
>> 2. Core
>> 3. Inter-datacenter
>> 4. Datacenter-internet
>>=20
>> These have different scaling properties.
>>=20
>> If you have different DC and DCI technologies, and you use encap,
>> you have high entries at Access, DCI and Datacenter-Internet.
>>
>> If you have common DC and DCI technologies, you have high Access and
>> Datacenter-Internet entries.
>>
>> If you have hierarchical MAC, then you only have high entries at the
>> Datacenter-Internet boundary.
>>
>> Net-net, out of the 4 things above, you can solve the first 3. The
>> fourth we have to distribute over multiple routers, which is not
>> hard because it is always north-south traffic.
>>=20
>> Thanks, Ashish
>>=20
>>=20
>> -----Original Message-----
>> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of =
Christopher LILJENSTOLPE
>> Sent: Thursday, December 29, 2011 10:58 PM
>> To: Russ White
>> Cc: dc@ietf.org
>> Subject: Re: [dc] Elevator Pitch
>>=20
>> Greetings Russ et.al.,
>>=20
>> On 29Dec2011, at 07.08, Russ White wrote:
>>=20
>>>=20
>>>> Actually, 100K host routes in leaf nodes may well be a Big Deal. It
>>>> depends on what type of devices need to understand host routes.
>>>=20
>>> So let me try to abstract the problem a little... There are two
>>> things that cost money in building a network:
>>>
>>> 1. State --the more you carry, the more hardware is going to cost.
>>> 2. Bandwidth --the less optimal your bandwidth utilization, the
>>> more you're spending on something that's not used.
>>>
>>> The first rule of all network design is that the more state you put
>>> in a network, the more optimal your bandwidth usage. To reduce the
>>> cost of bandwidth, you have to spend money on state. To reduce the
>>> cost of state, you must increase your spending on bandwidth.
>>=20
>> Agreed, mostly. The question is where the state resides, control
>> plane or data plane (do you differentiate between possible state and
>> active state). Control plane "possible state" should be cheaper (by
>> orders of magnitude) than data plane "active state" - however,
>> transitioning between the two is a complex problem (one that has
>> gotten somewhat easier as we have gotten smarter about distributed
>> systems over time).
>>>=20
>>> The second rule of network design is that the balance point where
>>> the network is cheapest is different in every network.
>>=20
>> agreed
>>=20
>>>=20
>>> So, IMHO, the point of protocol design should be to allow the
>>> designer the maximum amount of flexibility and granularity in when
>>> and where to hide information, to make the right tradeoff for any
>>> particular network.
>>>=20
>> agreed
>>=20
>>> Now, to return to the discussion at hand: If you don't want to pay
>>> for devices that will handle 100 million routes, but you have 100
>>> million devices, then you're going to have some degree of
>>> suboptimal bandwidth utilization. There's simply no way around this
>>> reality.
>>=20
>> agreed
>>>=20
>>> IMHO, however we resolve this problem, we need to resolve it in a
>>> way that allows an operator to support 100 million routes in every
>>> node, if they choose to optimize for bandwidth at the cost of
>>> hardware. Or to support 1 route in every node using really cheap
>>> hardware, if they want to optimize for state. Or for any point in
>>> the sliding scale in between (within reason).
>>=20
>> agreed
>>=20
>>>=20
>>> So this control plane needs to:
>>>=20
>>> 1. Be able to support 100 million destinations at the host level
>>> (that doesn't mean 100 million paths, but only 100 million
>>> destinations --two different problems).
>>=20
>> yes
>>=20
>>>=20
>>> 2. Be able to aggregate to hide information at anyplace that's
>>> logical within the network.
>>=20
>> yes
>>=20
>>>=20
>>> BTW, some folks would like to solve this problem by making the
>>> control plane react to the data plane --but this carries its own
>>> baggage in complexity and in operational capabilities. Reactive
>>> control planes always converge more slowly, and waste bandwidth at
>>> a rate that's arguably higher than simple aggregation. So there's
>>> no "silver bullet" waiting in the wings in the form of caching at
>>> the control plane level.
>>=20
>> However, I believe that aggregation time may be coming down,
>> especially in a more homogeneous environment (like a DC) vs a
>> heterogeneous environment (like the DFZ).
>>=20
>>>=20
>>>>> 2. Why does this mobility need to be at layer 2 specifically?
>>>>> Are we assuming DDNS and other sorts of solutions in this space
>>>>> will simply never be fast enough/scale far enough/etc?
>>>>
>>>> Like it or not, the key requirement for VM mobility is that the
>>>> VM's IP address does not change. That means the VM can't really
>>>> move from one IP subnet to another. That means either moving to
>>>> bigger and bigger L2s (all under one IP subnet) as the DC expands
>>>> or the need to inject /32 host routes.
>>>=20
>>> Again, the same tradeoff as above --moving to a bigger l2 domain
>>> also means losing the ability to optimally direct traffic through
>>> the network (unless you put another control plane on top of the l2
>>> and l3 control planes already in existence --which just adds the
>>> complexity you're trying to get away from in the host routes back
>>> into the network state!).
>>=20
>> Or replacing one or two of those L2/L3 control planes with a
>> control plane with a global view, which is easier to do in a
>> constrained network like a traffic engineering core or a data
>> center. It is a bounded problem. As you stated, however, convergence
>> MAY be slower, but I would argue that converging a network of Nx10K
>> switching/routing nodes takes a non-trivial amount of time as well
>> :)
>>=20
>>>=20
>>>> Neither of those approaches seems particularly scalable/desirable
>>>> if you look 10 years down the road and think of 1M+ physical
>>>> machines in a DC.
>>>=20
>>> There is no "particularly desirable" solution, in reality, because
>>> building a network with no control plane state anyplace that uses
>>> bandwidth in a perfectly optimal way is simply impossible no matter
>>> how you slice it (unless you're going to go into quantum routing!).
>>>=20
>>> :-)
>>>=20
>>> Russ
>>> _______________________________________________
>>> dc mailing list
>>> dc@ietf.org
>>> https://www.ietf.org/mailman/listinfo/dc
>>=20
>> --
>> 李柯睿
>> Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
>> Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf
>>=20
>> _______________________________________________
>> dc mailing list
>> dc@ietf.org
>> https://www.ietf.org/mailman/listinfo/dc
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc

--
李柯睿
Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf


From cdl@asgaard.org  Tue Jan  3 11:01:36 2012
Return-Path: <cdl@asgaard.org>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id F1B5F5E8026 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 11:01:35 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -6.555
X-Spam-Level: 
X-Spam-Status: No, score=-6.555 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, DATE_IN_PAST_03_06=0.044, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id eoUhmJAyrOlo for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 11:01:34 -0800 (PST)
Received: from asgaard.org (odin.asgaard.org [204.29.151.68]) by ietfa.amsl.com (Postfix) with ESMTP id 758FB5E8020 for <dc@ietf.org>; Tue,  3 Jan 2012 11:01:34 -0800 (PST)
Received: from localhost (localhost [127.0.0.1]) by asgaard.org (Postfix) with ESMTP id 517ADA6B319; Tue,  3 Jan 2012 19:01:34 +0000 (UTC)
X-Virus-Scanned: amavisd-new at asgaard.org
Received: from asgaard.org ([127.0.0.1]) by localhost (odin.asgaard.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id naX76RXtTA6W; Tue,  3 Jan 2012 19:01:33 +0000 (UTC)
Received: from [192.168.10.18] (unknown [64.134.170.146]) by asgaard.org (Postfix) with ESMTPSA id 11016A6B312; Tue,  3 Jan 2012 19:01:32 +0000 (UTC)
Mime-Version: 1.0 (Apple Message framework v1251.1)
Content-Type: text/plain; charset=utf-8
From: Christopher LILJENSTOLPE <cdl@asgaard.org>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B2533C@XMB-BGL-416.cisco.com>
Date: Tue, 3 Jan 2012 08:01:21 -0800
Content-Transfer-Encoding: quoted-printable
Message-Id: <8C931A6D-D669-4D7A-8DAA-7230610177D4@asgaard.org>
References: <13205C286662DE4387D9AF3AC30EF456D74E7BD952@EMBX01-WF.jnpr.net><6665BC1FEA04AB47B1F75FA641C43BC0926D106C@FHDP1LUMXC7V41.us.one.verizon.com><201112211830.pBLIUiZA017188@cichlid.raleigh.ibm.com><13205C286662DE4387D9AF3AC30EF456D74E91DBFD@EMBX01-WF.jnpr.net><618BE8B40039924EB9AED233D4A09C5102A4881E@XMB-BGL-416.cisco.com><4EF7B019.3030202@riw.us><201112281700.pBSH0kB2011575@cichlid.raleigh.ibm.com><4EFC826C.80708@riw.us> <682C5C0D-10FD-49D7-BF48-28EB6EFBA72B@asgaard.org> <618BE8B40039924EB9AED233D4A09C5102B2533C@XMB-BGL-416.cisco.com>
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
X-Mailer: Apple Mail (2.1251.1)
Cc: Russ White <russw@riw.us>, dc@ietf.org
Subject: Re: [dc] Elevator Pitch
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 19:01:36 -0000

On 31Dec2011, at 06.32, Ashish Dalela (adalela) wrote:

>=20
> 100K or whatever that number is, is a big deal at the access. And
> that number will increase with time. The thing to bear in mind is
> that these are ADDITIONAL entries, not in lieu of other entries.
> E.g. network routes, local host-port entries, ACLs, etc. exist today
> and will continue to exist. Host routes are in addition to that.
> Access switches generally had about 16K or 32K table sizes. And we
> are talking about adding 3-4 times more to that.

That is assuming that we are adding that to "hardware" switches. In
some cases that will be the case, sometimes not, and sometimes it may
be a hybrid.

>=20
> The other thing is that the access to core ratio is about 1000:1.
> So, we need to multiply the incremental costs by 1000 to get the
> full impact on TCO.
>=20
> In regard to scaling, there are 4 places to look at:
>=20
> 1. Access
> 2. Core
> 3. Inter-datacenter
> 4. Datacenter-internet
>=20
> These have different scaling properties.=20
>=20
> If you have different DC and DCI technologies, and you use encap,
> you have high entries at Access, DCI and Datacenter-Internet.
>
> If you have common DC and DCI technologies, you have high Access and
> Datacenter-Internet entries.
>
> If you have hierarchical MAC, then you only have high entries at the
> Datacenter-Internet boundary.
>
> Net-net, out of the 4 things above, you can solve the first 3. The
> fourth we have to distribute over multiple routers, which is not
> hard because it is always north-south traffic.
>=20
> Thanks, Ashish
>=20
>=20
> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of =
Christopher LILJENSTOLPE
> Sent: Thursday, December 29, 2011 10:58 PM
> To: Russ White
> Cc: dc@ietf.org
> Subject: Re: [dc] Elevator Pitch
>=20
> Greetings Russ et.al.,
>=20
> On 29Dec2011, at 07.08, Russ White wrote:
>=20
>>=20
>>> Actually, 100K host routes in leaf nodes may well be a Big Deal. It
>>> depends on what type of devices need to understand host routes.
>>=20
>> So let me try to abstract the problem a little... There are two
>> things that cost money in building a network:
>>
>> 1. State --the more you carry, the more hardware is going to cost.

If you are carrying it in hardware. If not, the tradeoff will be in
performance (available bandwidth) rather than $$.


>> 2. Bandwidth --the less optimal your bandwidth utilization, the more
>> you're spending on something that's not used.
>>=20
>> The first rule of all network design is that the more state you put
>> in a network, the more optimal your bandwidth usage. To reduce the
>> cost of bandwidth, you have to spend money on state. To reduce the
>> cost of state, you must increase your spending on bandwidth.
>=20
> Agreed, mostly. The question is where the state resides, control
> plane or data plane (do you differentiate between possible state and
> active state). Control plane "possible state" should be cheaper (by
> orders of magnitude) than data plane "active state" - however,
> transitioning between the two is a complex problem (one that has
> gotten somewhat easier as we have gotten smarter about distributed
> systems over time).
>>=20
>> The second rule of network design is that the balance point where
>> the network is cheapest is different in every network.
>=20
> agreed
>=20
>>=20
>> So, IMHO, the point of protocol design should be to allow the
>> designer the maximum amount of flexibility and granularity in when
>> and where to hide information, to make the right tradeoff for any
>> particular network.
>>=20
> agreed
>=20
>> Now, to return to the discussion at hand: If you don't want to pay
>> for devices that will handle 100 million routes, but you have 100
>> million devices, then you're going to have some degree of suboptimal
>> bandwidth utilization. There's simply no way around this reality.
>=20
> agreed
>>=20
>> IMHO, however we resolve this problem, we need to resolve it in a
>> way that allows an operator to support 100 million routes in every
>> node, if they choose to optimize for bandwidth at the cost of
>> hardware. Or to support 1 route in every node using really cheap
>> hardware, if they want to optimize for state. Or for any point in
>> the sliding scale in between (within reason).
>=20
> agreed
>=20
>>=20
>> So this control plane needs to:
>>=20
>> 1. Be able to support 100 million destinations at the host level
>> (that doesn't mean 100 million paths, but only 100 million
>> destinations --two different problems).
>=20
> yes
>=20
>>=20
>> 2. Be able to aggregate to hide information at anyplace that's
>> logical within the network.
>=20
> yes
>=20
>>
>> BTW, some folks would like to solve this problem by making the control
>> plane react to the data plane --but this carries its own baggage in
>> complexity and in operational capabilities. Reactive control planes
>> always converge more slowly, and waste bandwidth at a rate that's
>> arguably higher than simple aggregation. So there's no "silver bullet"
>> waiting in the wings in the form of caching at the control plane level.
>
> however, I believe that aggregation time may be coming down, especially in a more homogenous environment (like a DC) vs a heterogeneous environment (like the DFZ).
>
>=20
>>=20
>>>> 2. Why does this mobility need to be at layer 2 specifically? Are =
we
>>>> assuming DDNS and other sorts of solutions in this space will =
simply
>>>> never be fast enough/scale far enough/etc?
>>>=20
>>> Like it or not, the key requirement for VM mobility is that the VM's
>>> IP address does not change. That means the VM can't really move from
>>> one IP subnet to another. That means either moving to bigger and
>>> bigger L2s (all under one IP subnet) as the DC expands or the need =
to
>>> inject /32 host routes.
>>=20
>> Again, the same tradeoff as above --moving to a bigger l2 domain also
>> means losing the ability to optimally direct traffic through the =
network
>> (unless you put another control plane on top of the l2 and l3 control
>> planes already in existence --which just adds the complexity you're
>> trying to get away from in the host routes back into the network =
state!).
>
> Or replacing one or two of those L2/L3 control planes with a control plane with a global view, which is easier to do in a constrained network like a traffic engineering core or a data center.  It is a bounded problem.  As you stated, however, convergence MAY be slower, but I would argue that converging a network of Nx10K switching/routing nodes takes a non-trivial amount of time as well :)
>
>>
>>> Neither of those approaches seems particularly scalable/desirable if
>>> you look 10 years down the road and think of 1M+ physical machines in
>>> a DC.
>>
>> There is no "particularly desirable" solution, in reality, because
>> building a network with no control plane state anyplace that uses
>> bandwidth in a perfectly optimal way is simply impossible no matter how
>> you slice it (unless you're going to go into quantum routing!).
>>
>> :-)
>>
>> Russ
>> _______________________________________________
>> dc mailing list
>> dc@ietf.org
>> https://www.ietf.org/mailman/listinfo/dc
>
> --
> 李柯睿
> Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
> Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc

--
李柯睿
Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf


From cdl@asgaard.org  Tue Jan  3 11:02:06 2012
From: Christopher LILJENSTOLPE <cdl@asgaard.org>
To: Russ White <russw@riw.us>
Cc: dc@ietf.org
Date: Tue, 3 Jan 2012 08:41:47 -0800
Subject: Re: [dc] Elevator Pitch
Message-Id: <D7F34AF6-E93C-44F5-8C60-B3E9E8C2E38C@asgaard.org>
In-Reply-To: <4EFF0DCA.5090707@riw.us>

On 31Dec2011, at 05.27, Russ White wrote:

>
>
>>> BTW, some folks would like to solve this problem by making the control
>>> plane react to the data plane --but this carries its own baggage in
>>> complexity and in operational capabilities. Reactive control planes
>>> always converge more slowly, and waste bandwidth at a rate that's
>>> arguably higher than simple aggregation. So there's no "silver bullet"
>>> waiting in the wings in the form of caching at the control plane level.
>>
>> however, I believe that aggregation time may be coming down, especially in a more homogenous environment (like a DC) vs a heterogeneous environment (like the DFZ).
>
> Do you mean aggregation levels may be coming down?

I probably wasn't very clear here, sorry.  What I was saying was that within the datacenter, the majority of the state being computed is the same for each element computing across the state.  That, combined with a flattening out of the network layers, should decrease convergence time relative to the total amount of state.

> I would argue that
> aggregation is used to solve two problems:
>
> 1. Rate of state change.
> 2. Physical table size.

agreed on both points

>
> Note that you don't need both everywhere. For instance, it doesn't cost
> much, in the way of hardware, to hold millions of routes in the control
> plane, nor to provide the processor to deal with those routes. If you
> can afford 150k servers with moderately good processors and the memory
> to go with them, then you can afford 15k network devices with moderately
> good processors and the memory to go with them.

agreed

>
> The cost, it seems, is in the data plane --the forwarding table
> size. But we already know how to do aggregation (and even caching)
> between the control plane and the forwarding table.

yes

>
> BTW, an aside on caching in the forwarding table. It appears to be a
> magic bullet for the problem of forwarding table size at first glance,
> but really it all depends on a lot of things... As long as the 80/20
> rule holds, caching works at this level. The question is --when you
> reach 50/50, does caching still work? What about 40/60? Or 20/80? And
> how does caching fail? It always fails catastrophically. Cliffs are not
> good hidden away in unmonitored/not well understood pieces of the network.

...but they are so much fun to discover the hard way :)
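The 80/20-to-50/50 cliff Russ describes can be sketched with a toy simulation (all names and parameters below are mine, purely illustrative, not anything from this thread): an LRU route cache sized at 5% of the table, fed destination lookups drawn from a Zipf-like popularity distribution whose exponent controls the skew.

```python
import random
from collections import OrderedDict

def hit_rate(n_dests, cache_size, zipf_s, n_lookups=50_000, seed=42):
    """Hit rate of an LRU route cache when destination popularity follows
    a Zipf-like distribution with exponent zipf_s.  zipf_s around 1.0+
    approximates classic 80/20 skew; zipf_s = 0.0 is uniform traffic."""
    rng = random.Random(seed)
    weights = [1.0 / (rank ** zipf_s) for rank in range(1, n_dests + 1)]
    lookups = rng.choices(range(n_dests), weights=weights, k=n_lookups)
    cache = OrderedDict()  # insertion order doubles as LRU order
    hits = 0
    for dest in lookups:
        if dest in cache:
            hits += 1
            cache.move_to_end(dest)         # refresh LRU position
        else:
            cache[dest] = True              # install the route on miss
            if len(cache) > cache_size:
                cache.popitem(last=False)   # evict least recently used
    return hits / n_lookups

# A cache holding 5% of the FIB works well under skewed traffic and
# collapses as the traffic distribution flattens:
for s in (1.2, 0.8, 0.4, 0.0):
    print(f"zipf_s={s}: hit rate {hit_rate(10_000, 500, s):.2f}")
```

As the exponent drops toward uniform, the hit rate falls toward roughly cache_size/n_dests - nearly every packet misses, which is the catastrophic failure mode described above.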

>
> And of course when we try to cache at the control plane level in order
> to solve the forwarding table size problem, and when we try to mix
> cached control planes with data plane driven information, we are, IMHO,
> in "bad juju."

If that caching modality is not monitored, I agree.  However, I don't believe that this needs to be an unbounded asynchronous cache.  If we can adjust the cache system based on ratios, etc. - and monitor it - it's no longer hidden, nor necessarily a cliff.  I'm not saying caching is the only way, but it may be one approach.

>
> Note also that the speed at which topology changes impact the
> convergence of the protocol, and the scope of those changes, can be
> controlled effectively separately from the rate of change in leaf nodes
> on the tree --so long as the protocol is designed to handle this sort of
> separation effectively.

Protocol, or algorithm.

>
> So if we break the problem down into its component parts, and define
> what we need from each component, we might be able to reach a reasonable
> solution that provides convergence and mobility. There will always be a
> tradeoff, but it should be up to the operator to make that ultimate
> tradeoff, up to the points beyond which it's simply impossible for a
> protocol to go.

agreed.

>
> And to deal with the information hiding versus optimal bandwidth usage
> problem, of course. :-)
>
>>> Again, the same tradeoff as above --moving to a bigger L2 domain also
>>> means losing the ability to optimally direct traffic through the network
>>> (unless you put another control plane on top of the L2 and L3 control
>>> planes already in existence --which just adds the complexity you're
>>> trying to get away from in the host routes back into the network state!).
>>
>> Or replacing one or two of those L2/L3 control planes with a control plane with a global view, which is easier to do in a constrained network like a traffic engineering core or a data center.  It is a bounded problem.  As you stated, however, convergence MAY be slower, but I would argue that converging a network of Nx10K switching/routing nodes takes a non-trivial amount of time as well :)
>
> I'm not certain I understand this... I think you mean like a DFZ, a
> control plane that knows every possible destination. But you have to
> separate knowing every possible destination from knowing every possible
> route to that destination. Even the DFZ in the 'net is really an
> aggregated suboptimal subset. I don't know of any network on this scale
> that has an optimal route to every destination, and I don't think it's
> really possible to build one unless you want to make processing power
> and control plane bandwidth usage unbounded.

For an unbounded network, I agree.  However, if it is within a bounded subset (i.e. a DC or collection of DCs) I believe it is possible.

>
> It's quite possible to know every destination as a host, but not know
> the entire path to every one of those destinations in detail (a form of
> fisheye routing, for instance). Aggregation is, in reality, just a form
> of fisheye routing --you know the path to the aggregation point in
> detail, but you don't know the path beyond that.

However, if that set of optimal paths is computed for every source/dest pair (or at least for every unique best path) once, based on a global topological/demand view, a "global" set of best paths may be accomplished within that constrained universe.
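The aggregation-as-fisheye point above can be made concrete with a toy longest-prefix-match table (the prefixes and next-hop names here are hypothetical, chosen only for illustration): full /32 detail for "nearby" hosts, a single aggregate covering everything behind the aggregation point.

```python
import ipaddress

# Toy FIB: detailed host routes close in, one aggregate beyond the
# aggregation point (illustrative values only, not from the thread).
FIB = {
    "10.1.0.5/32": "local-port-5",   # detailed, close-in routes
    "10.1.0.9/32": "local-port-9",
    "10.0.0.0/8":  "agg-uplink",     # everything else: detail is hidden
}

def lookup(dest):
    """Longest-prefix match: the most specific matching route wins."""
    addr = ipaddress.ip_address(dest)
    best = None
    for prefix, next_hop in FIB.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, next_hop)
    return best[1] if best else None

print(lookup("10.1.0.9"))   # known host, exact detail -> local-port-9
print(lookup("10.2.3.4"))   # behind the aggregate -> agg-uplink
```

A lookup for a local host returns the detailed port; anything behind the aggregate returns the uplink - the path detail beyond the aggregation point simply does not exist in this table, which is exactly the fisheye view.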

>
> The difference in aggregation at the reachability level is that you also
> don't know the actual state of the destinations hidden in the aggregate
> itself. To the degree that mobility isn't an issue, it's okay to tie
> topology to reachability in this way. When mobility becomes an issue,
> you need to unbundle the two in some way, treating detailed topology as
> one problem, and detailed reachability as another problem altogether.
> I'm guessing that unbundling these two is the most "logical" or "free
> and clear" path towards scaling for the requirements as they appear to
> exist.

Agreed, if there are destinations hidden by aggregation - however, a "global" view may not have that problem.

>
> The issue of convergence presents another problem to think about... If
> convergence ends up being slower than the application timing out and
> searching for a new destination IP address, then why is switching IP
> addresses worse? The only way any of this makes sense is if it converges
> faster than a human would notice --and, increasingly, faster than a
> computer would notice.

yes

>
> :-)
>
> Russ

Chris

> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc

--
李柯睿
Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf


From cdl@asgaard.org  Tue Jan  3 11:10:36 2012
From: Christopher LILJENSTOLPE <cdl@asgaard.org>
To: Xuxiaohu <xuxiaohu@huawei.com>
Cc: Thomas Narten <narten@us.ibm.com>, Russ White <russw@riw.us>, "dc@ietf.org" <dc@ietf.org>
Date: Tue, 3 Jan 2012 09:04:33 -0800
Subject: Re: [dc] Elevator Pitch
Message-Id: <FDC18641-F09F-40D6-90C7-8A3152DE108B@asgaard.org>
In-Reply-To: <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE7639E9@szxeml525-mbs.china.huawei.com>

On 30Dec2011, at 01.45, Xuxiaohu wrote:

>> -----Original Message-----
>> From: Xuxiaohu
>> Sent: 30 December 2011 12:17
>> To: 'Christopher LILJENSTOLPE'
>> Cc: Thomas Narten; Russ White; dc@ietf.org
>> Subject: re: [dc] Elevator Pitch
>>
>>
>>> -----Original Message-----
>>> From: Christopher LILJENSTOLPE [mailto:cdl@asgaard.org]
>>> Sent: 30 December 2011 1:20
>>> To: Xuxiaohu
>>> Cc: Thomas Narten; Russ White; dc@ietf.org
>>> Subject: Re: [dc] Elevator Pitch
>>>
>>> Greetings Xuxiaohu,
>>>
>>> On 29Dec2011, at 00.55, Xuxiaohu wrote:
>>>
>>>> Hi Thomas,
>>>>
>>>>> -----Original Message-----
>>>>> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Thomas
>>>>> Narten
>>>>> Sent: 29 December 2011 1:01
>>>>> To: Russ White
>>>>> Cc: dc@ietf.org
>>>>> Subject: Re: [dc] Elevator Pitch
>>>>>
<snip - it was getting long>

> Hi Chris,
>
> By the way, in the MAC over IP solution, the forwarding table of the leaf nodes contains the MAC routes to hosts and the gateways; similarly, in the IP over IP solution, the forwarding table of leaf nodes contains the routes to hosts and one or more default routes to the gateways. If you believe the default route directed to the gateway in the IP over IP solution is not enough for forwarding the cross-data-center traffic, does that mean the MAC over IP solution is totally unworkable, since there is only one MAC route directed to the gateway for forwarding the cross-data-center traffic?

Greetings Xiaohu,

I didn't actually state that IP over IP or MAC over IP were unworkable - I think that is a network design decision, and the answer will be different for different networks.  MACs being non-aggregatable does make it a bit more "interesting" than IP over IP, however.

What I am saying is that if all "east-west" traffic in the DC has to aggregate up to some level of core via a tree (read default route) topology, that would lead to some interesting scale issues (actually, it already does).  There is a reason that large networks tend not to run default for edge-to-edge connectivity (i.e. backbones) - almost always they are incomplete meshes.  The fact that we continue to use trees in the data center is an artifact of their enterprise origins, and the historical north-south traffic patterns.  I think the reason you are seeing so much activity now is due, in part, to the fact that the tree is starting to creak, and the north-south assumption is no longer valid.
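The creaking tree can be put in back-of-the-envelope terms (the formulas and numbers below are mine, purely illustrative): with n leaf switches and uniform pairwise east-west demand, a tree/default-route design forces all of that demand through the core, while a full mesh keeps it off the core at the cost of O(n^2) links and adjacency state - the same bandwidth-versus-state tradeoff discussed earlier in the thread.

```python
def tree_core_demand(n_leaves, demand_per_pair=1.0):
    """Tree (default-route) design: every east-west flow transits the
    core, so aggregate core bandwidth scales with the number of leaf
    pairs, i.e. O(n^2)."""
    return n_leaves * (n_leaves - 1) // 2 * demand_per_pair

def mesh_link_count(n_leaves):
    """Full-mesh design: east-west traffic bypasses the core, but the
    number of direct links (and routing adjacencies) is also O(n^2)."""
    return n_leaves * (n_leaves - 1) // 2

# Either way something grows quadratically with leaf count: core
# bandwidth in the tree, or link/state count in the mesh.
for n in (4, 16, 64, 256):
    print(n, tree_core_demand(n), mesh_link_count(n))
```

Neither curve is free; the design question is which quadratic cost a given network can better afford.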

	Chris

>
> Best regards,
> Xiaohu
>
>> Hi Chris,
>>
>> Would you please give a concrete example where the communication between
>> different tenants is very common in the multi-tenant cloud data center?
>>
>> Best regards,
>> Xiaohu
>>
>>=20
>>>>
>>>>
>>>>> Or will they want an alternative approach?
>>>>>
>>>>>> 2. Why does this mobility need to be at layer 2 specifically? Are we
>>>>>> assuming DDNS and other sorts of solutions in this space will simply
>>>>>> never be fast enough/scale far enough/etc?
>>>>>
>>>>> Like it or not, the key requirement for VM mobility is that the VM's
>>>>> IP address does not change. That means the VM can't really move from
>>>>> one IP subnet to another. That means either moving to bigger and
>>>>> bigger L2s (all under one IP subnet) as the DC expands or the need to
>>>>> inject /32 host routes.
>>>>
>>>> In the DCI scenario, where the PE function is usually performed at the
>>>> aggregation SWs or even core SWs, the PE routers would need a much
>>>> larger forwarding table. Provided the routing table containing millions
>>>> of entries, which is available on most of today's high-end routers, was
>>>> still not large enough, the on-demand FIB installation or on-demand
>>>> route announcement mechanisms can be used further to scale the
>>>> solution. Note that the trigger for the FIB installation or route
>>>> announcement is ARP request packets rather than data packets. Hence it
>>>> will not cause the so-called initial packet loss or latency issue.
>>>>
>>>>> Neither of those approaches seems particularly scalable/desirable if
>>>>> you look 10 years down the road and think of 1M+ physical machines in
>>>>> a DC.
>>>>
>>>> Maybe we should also take the development speed of routing/switching
>>>> chip and CPU technologies into account :)
>>>
>>> It's more a question of cost/performance on off-chip memory/TCAMs.  That
>>> is a slightly different curve :)
>>>
>>> 	Chris
>>>
>>>>
>>>> Best regards,
>>>> Xiaohu
>>>>
>>>>> Thomas
>>>>>
>>>>> _______________________________________________
>>>>> dc mailing list
>>>>> dc@ietf.org
>>>>> https://www.ietf.org/mailman/listinfo/dc
>>>
>>> --
>>> 李柯睿
>>> Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
>>> Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc

--
李柯睿
Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf


From cdl@asgaard.org  Tue Jan  3 11:10:36 2012
From: Christopher LILJENSTOLPE <cdl@asgaard.org>
To: Aldrin Isaac <aldrin.isaac@gmail.com>
Cc: Russ White <russw@riw.us>, dc@ietf.org
Date: Tue, 3 Jan 2012 09:07:08 -0800
Subject: Re: [dc] Elevator Pitch
Message-Id: <B63E1B67-EE93-46A5-A44D-600C6B970CC3@asgaard.org>
In-Reply-To: <A589A5D9-D18D-4CEF-A199-CD5305C3C394@gmail.com>

On 29Dec2011, at 20.31, Aldrin Isaac wrote:

>
> On Dec 29, 2011, at 4:27 PM, Christopher LILJENSTOLPE wrote:
>
>> Greetings Aldrin,
>> On 29Dec2011, at 13.21, Aldrin Isaac wrote:
>>
>>> Hi Christopher,
>>>
>>>>>
>>>>> BTW, some folks would like to solve this problem by making the control
>>>>> plane react to the data plane --but this carries its own baggage in
>>>>> complexity and in operational capabilities. Reactive control planes
>>>>> always converge more slowly, and waste bandwidth at a rate that's
>>>>> arguably higher than simple aggregation. So there's no "silver bullet"
>>>>> waiting in the wings in the form of caching at the control plane level.
>>>>
>>>> however, I believe that aggregation time may be coming down, especially in a more homogenous environment (like a DC) vs a heterogeneous environment (like the DFZ).
>>>>
>>> Wrt your responses above, are you speaking specifically regarding data centers where end stations are 100% VM?  Could you lend your thoughts on how non-aggregating DCs ought to interwork with an aggregating global WAN/Internet?
>>
>> No, I was not assuming a 100% VM datacenter (which I think is unlikely in the near-to-mid term - there will be appliances, storage, etc. that are not behind a HV for some time).  However, I'm uncertain what you mean by a "non-aggregating" data center.  Any data center will, by nature, aggregate; the question is where, and how many layers of aggregation?  Leaf (hypervisor and/or ToR for non-HV elements), ToR, spine, core router?  I seriously doubt that any DC will expose its entire fleet of /32 (/128) hosts to the Internet (or a global WAN).  Some level of aggregation will be necessary.  As to how a DC would interact with another DC or a WAN backbone in a controller-based network, one could envision each DC as a network element that aggregates into the WAN with, potentially, cooperating controllers, or a controller hierarchy.  Other topologies are likely to exist, as well.
>> 	Chris
>
> Hi Chris,
>
> I may have misunderstood your statement, "I believe that aggregation time is coming down", to mean you are a proponent of dramatically reducing or eliminating route aggregation in some way.  Maybe I'm wrong, but it seems there are at least a few folks that expect that hosts should be independently mobile across more than one site, and that a solution should support that expectation at scale.

I agree that is a design goal - it was for me at my last job as well.

>
> I think we're on the same page wrt needing some levels of aggregation.  As a matter of fact, the run-of-the-mill DC network design already has aggregation at the access gateway/router (in the form of subnet routes) and "host routes" (MAC routes) within the access LAN.  You can make the access LAN (host route domain) smaller or bigger (hopefully not relying on STP) for varying degrees of mobility.

Run-of-the-mill, I would agree.  However, the fact that we are all spending so much time and effort in this space would indicate that the current mode of operation is either breaking down or not delivering the desired outcomes.

>
> Could you highlight, from your perspective, the features current standards and design patterns cannot fundamentally support?  Since we're familiar with the limitations of STP, flooding and ARP for large LANs, I'm interested in knowing, in technical terms, what other flaws you believe exist and features you believe are missing.

Please go back and look at Tom Narten's presentation on the topic from Taipei.

>
> Thanks -- aldrin
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc

--
李柯睿
Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf


From cdl@asgaard.org  Tue Jan  3 11:20:21 2012
From: Christopher LILJENSTOLPE <cdl@asgaard.org>
To: Xuxiaohu <xuxiaohu@huawei.com>
Cc: Thomas Narten <narten@us.ibm.com>, Russ White <russw@riw.us>, "dc@ietf.org" <dc@ietf.org>
Date: Tue, 3 Jan 2012 09:13:02 -0800
Subject: Re: [dc] Elevator Pitch
Message-Id: <40F3FB9C-CBCB-41ED-A1E7-FB99DB3A928D@asgaard.org>
In-Reply-To: <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE7639B3@szxeml525-mbs.china.huawei.com>

Greetings,


On 29Dec2011, at 20.01, Xuxiaohu wrote:

> 
>> -----Original Message-----
>> From: Christopher LILJENSTOLPE [mailto:cdl@asgaard.org]
>> Sent: December 30, 2011 1:20
>> To: Xuxiaohu
>> Cc: Thomas Narten; Russ White; dc@ietf.org
>> Subject: Re: [dc] Elevator Pitch
>>
>> Greetings Xuxiaohu,
>>
>> On 29Dec2011, at 00.55, Xuxiaohu wrote:
>>
>>> Hi Thomas,
>>>
>>>> -----Original Message-----
>>>> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Thomas
>>>> Narten
>>>> Sent: December 29, 2011 1:01
>>>> To: Russ White
>>>> Cc: dc@ietf.org
>>>> Subject: Re: [dc] Elevator Pitch
>>>>
<snip>

> Hi Chris,
>
> Would you please give a concrete example where communication between
> different tenants is very common in a multi-tenant cloud data center?

Here's the question - if you are a tenant in a data center, and you are
writing a mash-up application against some other content provider, how do
you know whether they are in the same data center or not? My guess is
that there is quite a bit of traffic between tenants of EC2, btw. I know
that was the intent at my last gig - we wanted SaaS-like providers to
live in our DCs and develop an ecosystem around our core services. Other
major cross-dc traffic: how about all of my services like spam filtering,
backup, etc.? In a DC I may call them "core services", but they are, in
fact, another tenant. How about large-scale content providers that mash
up between their own offerings? Many of those properties are viewed as
"separate customers" by the infrastructure teams (can't name names here).
Any inter-offering mash-ups would definitely be cross-dc.
>
> Best regards,
> Xiaohu
>
>>>
>>>
>>>> Or will they want an alternative approach?
>>>>
>>>>> 2. Why does this mobility need to be at layer 2 specifically? Are we
>>>>> assuming DDNS and other sorts of solutions in this space will simply
>>>>> never be fast enough/scale far enough/etc?
>>>>
>>>> Like it or not, the key requirement for VM mobility is that the VM's
>>>> IP address does not change. That means the VM can't really move from
>>>> one IP subnet to another. That means either moving to bigger and
>>>> bigger L2s (all under one IP subnet) as the DC expands or the need to
>>>> inject /32 host routes.
>>>
>>> In the DCI scenario, where the PE function is usually performed at the
>>> aggregation or even core switches, the PE routers would need a much
>>> larger forwarding table. If a routing table containing millions of
>>> entries, which is available on most of today's high-end routers, were
>>> still not large enough, on-demand FIB installation or on-demand route
>>> announcement mechanisms could be used to scale the solution further.
>>> Note that the trigger for the FIB installation or route announcement
>>> is ARP request packets rather than data packets, so it will not cause
>>> the so-called initial-packet loss or latency issue.
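The on-demand scheme described in the quoted paragraph can be sketched as a toy model (all names and data structures here are invented for illustration, not taken from any draft): routes are pulled into the FIB only when an ARP request for the destination is seen, so the first data packet already finds an installed route.

```python
# Toy on-demand FIB installation. The full route table (RIB) holds all
# reachability; the FIB starts empty and is populated only when an ARP
# request for a destination is observed. Illustrative assumption only.
fib = {}                                                  # dest IP -> next hop
route_table = {"10.0.1.5": "pe-22", "10.0.9.8": "pe-31"}  # full RIB

def on_arp_request(dest_ip: str) -> None:
    # ARP for dest triggers installation from the full table into the FIB.
    if dest_ip in route_table:
        fib[dest_ip] = route_table[dest_ip]

def forward(dest_ip: str):
    # Look up the on-demand FIB; None would mean punt or drop.
    return fib.get(dest_ip)

on_arp_request("10.0.1.5")
assert forward("10.0.1.5") == "pe-22"   # installed on demand
assert forward("10.0.9.8") is None      # never ARPed, so never installed
```

Because installation is keyed off the ARP exchange that necessarily precedes the first data packet, the data path itself never waits on a route pull, which is the "no initial packet loss" claim above.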
>>>
>>>> Neither of those approaches seems particularly scalable/desirable if
>>>> you look 10 years down the road and think of 1M+ physical machines in
>>>> a DC.
>>>
>>> Maybe we should also take the development speed of routing/switching
>>> chip and CPU technologies into account :)
>>
>> It's more a question of cost/performance of off-chip memory/TCAMs. That
>> is a slightly different curve :)
>>
>> 	Chris
>>
>>>
>>> Best regards,
>>> Xiaohu
>>>
>>>> Thomas
>>>>
>>>> _______________________________________________
>>>> dc mailing list
>>>> dc@ietf.org
>>>> https://www.ietf.org/mailman/listinfo/dc
>>> _______________________________________________
>>> dc mailing list
>>> dc@ietf.org
>>> https://www.ietf.org/mailman/listinfo/dc
>>
>> --
>> 李柯睿
>> Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
>> Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc

--
李柯睿
Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf


From narten@us.ibm.com  Tue Jan  3 12:56:05 2012
Return-Path: <narten@us.ibm.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id E410411E8122 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 12:56:05 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.586
X-Spam-Level: 
X-Spam-Status: No, score=-106.586 tagged_above=-999 required=5 tests=[AWL=0.013, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id ewxODUNrdx85 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 12:56:05 -0800 (PST)
Received: from e4.ny.us.ibm.com (e4.ny.us.ibm.com [32.97.182.144]) by ietfa.amsl.com (Postfix) with ESMTP id 398D811E811B for <dc@ietf.org>; Tue,  3 Jan 2012 12:56:04 -0800 (PST)
Received: from /spool/local by e4.ny.us.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for <dc@ietf.org> from <narten@us.ibm.com>; Tue, 3 Jan 2012 15:56:04 -0500
Received: from d01relay04.pok.ibm.com (9.56.227.236) by e4.ny.us.ibm.com (192.168.1.104) with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted;  Tue, 3 Jan 2012 15:55:47 -0500
Received: from d01av04.pok.ibm.com (d01av04.pok.ibm.com [9.56.224.64]) by d01relay04.pok.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id q03Ktk1C323608 for <dc@ietf.org>; Tue, 3 Jan 2012 15:55:46 -0500
Received: from d01av04.pok.ibm.com (loopback [127.0.0.1]) by d01av04.pok.ibm.com (8.14.4/8.13.1/NCO v10.0 AVout) with ESMTP id q03KtjqQ006468 for <dc@ietf.org>; Tue, 3 Jan 2012 15:55:45 -0500
Received: from cichlid.raleigh.ibm.com (sig-9-65-253-202.mts.ibm.com [9.65.253.202]) by d01av04.pok.ibm.com (8.14.4/8.13.1/NCO v10.0 AVin) with ESMTP id q03Ktive006360 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Tue, 3 Jan 2012 15:55:45 -0500
Received: from cichlid.raleigh.ibm.com (localhost [127.0.0.1]) by cichlid.raleigh.ibm.com (8.14.5/8.12.5) with ESMTP id q03KtgnA016017; Tue, 3 Jan 2012 15:55:43 -0500
Message-Id: <201201032055.q03KtgnA016017@cichlid.raleigh.ibm.com>
To: Christopher LILJENSTOLPE <ietf@cdl.asgaard.org>
In-reply-to: <2E742C02-F621-497D-AE06-6A91EEEBA498@cdl.asgaard.org>
References: <13205C286662DE4387D9AF3AC30EF456D74E7BD952@EMBX01-WF.jnpr.net> <6665BC1FEA04AB47B1F75FA641C43BC0926D106C@FHDP1LUMXC7V41.us.one.verizon.com> <201112211830.pBLIUiZA017188@cichlid.raleigh.ibm.com> <13205C286662DE4387D9AF3AC30EF456D74E91DBFD@EMBX01-WF.jnpr.net> <CANtnpwj3hCD4UbidDzG=4xChJOaQ1T8mLqQkDUWxoRZV1hjuYA@mail.gmail.com> <201112281650.pBSGo7Mn011365@cichlid.raleigh.ibm.com> <CANtnpwgKKh_6emFK2Gx_WfqU929UK3rzQmh1cuWxoJFGH6eHUw@mail.gmail.com> <2E742C02-F621-497D-AE06-6A91EEEBA498@cdl.asgaard.org>
Comments: In-reply-to Christopher LILJENSTOLPE <ietf@cdl.asgaard.org> message dated "Wed, 28 Dec 2011 21:59:42 -0800."
Date: Tue, 03 Jan 2012 15:55:41 -0500
From: Thomas Narten <narten@us.ibm.com>
x-cbid: 12010320-3534-0000-0000-0000043072B4
Cc: Ronald Bonica <rbonica@juniper.net>, "So, Ning" <ning.so@verizon.com>, dc@ietf.org, Bhumip Khasnabish <vumip1@gmail.com>
Subject: Re: [dc] Elevator Pitch (was: Scoping the Interim meeting)
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 20:56:06 -0000

+1 to Chris' comments.

Christopher LILJENSTOLPE <ietf@cdl.asgaard.org> writes:

> Secondly, the question we should be asking is "What does the IETF
>  NEED to do", not "What can the IETF do". We can do a great many
>  things, the bulk of which will not be helpful or a good use of
>  resources.

Agree completely. If those advocating that the IETF "do work" cannot
answer the above succinctly and in a way that fellow IETFers can
understand, the only conclusion that can be drawn is that the IETF
cannot (and should not) take any further action at this time.

> > On 28Dec2011, at 17.40, Bhumip Khasnabish wrote:
> > We need comments and suggestions from you and others to update this doc.
> > 
> > We also have another draft covering potential work items
> > (
> > http://tools.ietf.org/html/draft-khasnabish-cloud-industry-workitems-survey-01

I had a look at this document, and most if not all of the
"requirements" or "work items" are very high level; it is not at all
clear to me how they relate to IETF work.

> > We can discuss these further in the interim mtg.

I strongly disagree. See above. There needs to be much more prep work
done on describing a specific technical or operational problem that
the IETF would work on before there would be much use in having a f2f
discussion. IMO.

> I would like to propose a different approach, if I may.  If we took
>  a focused set of problem statements and ran them through the
>  following set of filters:

> 1) Is this a current/real or near-to-mid term probable issue and is
>  it substantial?
  
> 2) If yes, is it being adequately covered by another SDO and is it
>  in that SDO's domain?
  
> 3) if no, is it in the domain of IETF competency?

> 4) if yes, do we want to work on it?

Agree completely. Again, this list should be devoted exclusively to
teasing out problem areas for which the IETF would seem to be the
right place to do specific work.

Thomas


From sblake@extremenetworks.com  Tue Jan  3 14:23:15 2012
Return-Path: <sblake@extremenetworks.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 9775B11E80A3 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 14:23:15 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.599
X-Spam-Level: 
X-Spam-Status: No, score=-2.599 tagged_above=-999 required=5 tests=[BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id m98R06ap03Pw for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 14:23:14 -0800 (PST)
Received: from ussc-casht-p1.extremenetworks.com (ussc-casht-p2.extremenetworks.com [207.179.9.62]) by ietfa.amsl.com (Postfix) with ESMTP id C329C11E8093 for <dc@ietf.org>; Tue,  3 Jan 2012 14:23:14 -0800 (PST)
Received: from [10.5.2.101] (10.5.2.101) by ussc-casht-p1.corp.extremenetworks.com (10.0.4.73) with Microsoft SMTP Server id 8.3.83.0; Tue, 3 Jan 2012 14:23:14 -0800
From: Steven Blake <sblake@extremenetworks.com>
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
Date: Tue, 3 Jan 2012 17:23:12 -0500
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com> <618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com> <4EFC947A.4020007@riw.us> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com> <CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com> <CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com> <4F030418.1070202@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com> <4F031689.1050303@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com>
Organization: Extreme Networks
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.0.3 (3.0.3-1.fc15) 
Content-Transfer-Encoding: 7bit
Message-ID: <1325629393.2398.8.camel@ecliptic.extremenetworks.com>
MIME-Version: 1.0
Cc: "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 22:23:15 -0000

On Tue, 2012-01-03 at 08:15 -0800, Ashish Dalela (adalela) wrote:

> Robert,
> 
> Please see inline.
> 
> -----Original Message-----
> From: Robert Raszuk [mailto:robert@raszuk.net]
> Sent: Tuesday, January 03, 2012 8:24 PM
> To: Ashish Dalela (adalela)
> Cc: Pedro Marques; dc@ietf.org
> Subject: Re: [dc] [armd] IP over IP solution for data center
> interconnect
> 
> Ashish,
> 
> OK, let's just discuss what is in your draft on hierarchical addressing.
> 
> 1. You have 48 bits: 32 go to the host and the remaining 16 go to the
> switches. How do you aggregate at the ToR or aggregation-switch
> boundary? Are you assuming a single HOST - SWITCH with max 65K flat
> MACs?
> 
> [AD] The higher bits identify a switch - it's a switch-id. The hosts are
> dynamically assigned a host-id under that switch. Let's assume 23 bits
> are for the switch-id and 23 bits for the host-id. To forward a packet
> to the host, you only have to look at the first 23 bits. That's a MAC
> prefix to route against.

[snip]
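The switch-id/host-id split described in the quoted exchange can be sketched in a few lines. The 23/23 bit widths are the illustrative numbers from the mail, nothing normative, and the helper names are invented:

```python
SWITCH_BITS = 23   # high-order bits: which edge switch
HOST_BITS = 23     # low-order bits: which host under that switch

def make_mac(switch_id: int, host_id: int) -> int:
    # Pack a hierarchical 46-bit address (carried in a 48-bit MAC field).
    assert switch_id < (1 << SWITCH_BITS) and host_id < (1 << HOST_BITS)
    return (switch_id << HOST_BITS) | host_id

def switch_prefix(mac: int) -> int:
    # A core node forwards on this prefix alone: one table entry per
    # edge switch instead of one per host MAC.
    return mac >> HOST_BITS

mac = make_mac(switch_id=0x1A2B, host_id=0x3C4D)
assert switch_prefix(mac) == 0x1A2B
```

The point of the scheme is exactly the `switch_prefix` lookup: forwarding state in the core scales with the number of switches, not the number of hosts.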

FYI, MOOSE is a detailed proposal for hierarchical MAC addressing,
resolution, and forwarding:

http://www.cl.cam.ac.uk/~mas90/MOOSE/



Regards,

/////////////////////////////////////////////
Steven Blake       sblake@extremenetworks.com
Extreme Networks              +1 919-884-3211


From linda.dunbar@huawei.com  Tue Jan  3 15:58:40 2012
Return-Path: <linda.dunbar@huawei.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 42F231F0C56 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 15:58:40 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.442
X-Spam-Level: 
X-Spam-Status: No, score=-2.442 tagged_above=-999 required=5 tests=[AWL=-0.158, BAYES_00=-2.599, SARE_MILLIONSOF=0.315]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id nVM8jqP5k2HS for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 15:58:39 -0800 (PST)
Received: from dfwrgout.huawei.com (dfwrgout.huawei.com [206.16.17.72]) by ietfa.amsl.com (Postfix) with ESMTP id 29A461F0C50 for <dc@ietf.org>; Tue,  3 Jan 2012 15:58:39 -0800 (PST)
Received: from 172.18.9.243 (EHLO dfweml202-edg.china.huawei.com) ([172.18.9.243]) by dfwrg02-dlp.huawei.com (MOS 4.2.3-GA FastPath) with ESMTP id ACA86482; Tue, 03 Jan 2012 18:58:38 -0500 (EST)
Received: from DFWEML404-HUB.china.huawei.com (10.193.5.203) by dfweml202-edg.china.huawei.com (172.18.9.108) with Microsoft SMTP Server (TLS) id 14.1.323.3; Tue, 3 Jan 2012 15:57:05 -0800
Received: from DFWEML505-MBX.china.huawei.com ([10.124.31.100]) by dfweml404-hub.china.huawei.com ([10.193.5.203]) with mapi id 14.01.0323.003; Tue, 3 Jan 2012 15:56:49 -0800
From: Linda Dunbar <linda.dunbar@huawei.com>
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
Thread-Topic: comments to your new draft 
Thread-Index: AQHMynNDtHsmczuvtEGfnuAwHvQHJQ==
Date: Tue, 3 Jan 2012 23:56:47 +0000
Message-ID: <4A95BA014132FF49AE685FAB4B9F17F62A4E3F6D@dfweml505-mbx>
References: <618BE8B40039924EB9AED233D4A09C5102B2527A@XMB-BGL-416.cisco.com><D96F76EF-0011-4F33-A1CF-EC9AD12BA411@gmail.com><618BE8B40039924EB9AED233D4A09C5102B25310@XMB-BGL-416.cisco.com> <201201031510.q03FABS62810@magenta.juniper.net> <618BE8B40039924EB9AED233D4A09C5102B2569C@XMB-BGL-416.cisco.com> <4F032492.4030201@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B256AC@XMB-BGL-416.cisco.com> <4F03331E.9020104@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B256C1@XMB-BGL-416.cisco.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B256C1@XMB-BGL-416.cisco.com>
Accept-Language: en-US, zh-CN
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-cr-hashedpuzzle: AEaD BAlB CAkY CVn8 EQpO FDL/ FJyP FdAj FdxE F3uJ GTTd JoEL K0G1 Ldyy RJ+8 Ri08; 2; YQBkAGEAbABlAGwAYQBAAGMAaQBzAGMAbwAuAGMAbwBtADsAZABjAEAAaQBlAHQAZgAuAG8AcgBnAA==; Sosha1_v1; 7; {4FFCF722-5F60-4258-BC29-06471D2C8337}; bABpAG4AZABhAC4AZAB1AG4AYgBhAHIAQABoAHUAYQB3AGUAaQAuAGMAbwBtAA==; Tue, 03 Jan 2012 23:56:37 GMT; YwBvAG0AbQBlAG4AdABzACAAdABvACAAeQBvAHUAcgAgAG4AZQB3ACAAZAByAGEAZgB0AA==
x-cr-puzzleid: {4FFCF722-5F60-4258-BC29-06471D2C8337}
x-originating-ip: [10.192.11.97]
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-CFilter-Loop: Reflected
Cc: "dc@ietf.org" <dc@ietf.org>
Subject: [dc] comments to your new draft
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 03 Jan 2012 23:58:40 -0000

Ashish,

I really like what you stated, "articulating the problem itself therefore
has become a challenge", in your draft-dalela-dc-requirements-00. In my
opinion, articulating which problems are IETF problems is probably even
harder, given all the drafts flowing to the IETF DC interim meeting.

Can you elaborate more on why "The Mobile IP approach does not handle
data center mobility optimally"? Every smart phone has an IP address.
Today's Mobile IP works (even though it is not optimal).

Your section 4.1 has a very similar description to the ARMD problem
statement. When the network is fragmented, you either have to enable all
subnets on all network links or re-configure addresses on all
switches/routers whenever there is any network fragmentation change. Then
not only do gateway routers have to deal with traffic on all subnets
(which is the ARMD problem), but all links have to carry broadcast
traffic from all subnets (which is a bandwidth utilization issue).

Your section 4.3, "Data is relatively Immobile", assumes that VMs can
move anywhere without considering where the needed data storage is. That
is not true of many VM management systems. Many VM/server management
systems' algorithms do take into consideration where the needed data
storage is (and many other attributes) in determining where an
application can be hosted.

Section 5.1: broadcast is not the reason that L2 can't scale; it is that
a fragmented network can't scale. The main reason L2 can't scale is that
L2 addresses can't be aggregated.

Linda Dunbar

> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> Ashish Dalela (adalela)
> Sent: Tuesday, January 03, 2012 11:20 AM
> To: robert@raszuk.net
> Cc: Yakov Rekhter; Aldrin Isaac; dc@ietf.org
> Subject: Re: [dc] new drafts
>
> Robert,
>
> I think you and I are talking past each other.
>
> I'm talking about datacenter switching, not BGP, PE, etc. The context
> is scaling of access switches, also called Top of Rack or ToR.
> So, your comment about millions of routes and BGP assumes a core
> router, not an access switch.
>
> Reset context.
>
> Thanks, Ashish
>
>
> -----Original Message-----
> From: Robert Raszuk [mailto:robert@raszuk.net]
> Sent: Tuesday, January 03, 2012 10:26 PM
> To: Ashish Dalela (adalela)
> Cc: Yakov Rekhter; dc@ietf.org; Aldrin Isaac
> Subject: Re: [dc] new drafts
>
> Ashish,
>
> > Assume a simple case that under a switch there are 250 VMs, split
> > amongst 10 customers. Each customer has a unique VRF. Normally, we
> > would have advertised a /24 route for that switch. In this case your
> > routes to a single switch are segmented and there are 10 VRFs, and you
> > will very likely have 250 route table entries total, segmented by
> > VRF-ids. That's routing table bloat from 1 entry to 250 entries. This
> > happens everywhere. I have assumed public IP addressing, but the same
> > thing will happen for private addressing as well.
>
> Normally in this case I would have 10 routes + 1 route for the switch
> loopback, not 250. However, when those VMs start moving between PEs, you
> are right that the worst case one could end up with would be 250
> non-aggregatable routes per VRF.
>
> First, I don't think this is a problem scaling-wise today, as I would
> not assume that everyone will be moving.
>
> Second, we know today how to handle millions of routes in BGP.
>
> Third, I am not saying that this model should be used.
>
> I am advocating that a hierarchical IP-in-IP model should be used. VRFs
> on those PEs could be used for isolation purposes. And any VM move needs
> to be reflected only in the mapping plane and not in the routing
> infrastructure of the network.
>
>
> > Then, typically the number of VRFs you can support on a router is
> > about 4K. This number of VRFs has to be supported at the access, so
> > you have to assume this is the limit from the access viewpoint.
>
> Nope ... control-plane VRFs have no such limits. 4K comes from
> platform limitations.
>
> Hint: think about control-plane and data-plane separation. Pedro's
> draft already provides an example of how such separation can be
> accomplished.
>
> > Then, at massive scale, the failure rates are also massive. At 5-nines
> > reliability, a hardware entity out of 100,000 will fail every 5.25
> > minutes. Access switches don't have high availability. Software fails
> > even faster - an OS is generally 4 nines, which means one out of
> > 10,000 fails every 5.25 minutes. At millions of instances of such
> > entities, failures happen rapidly. You only have to look at the
> > massive datacenters run by Web 2.0 companies today, and they all echo
> > this view. They basically form clusters of the same application.
> > Software moves the workload from one cluster to another. The whole
> > cluster can fail over. That's not what you do in a consumer cloud,
> > where you have to recover. At massive failure rates and rapid recovery
> > rates, you are moving things around and injecting host routes for
> > reachability. It's a convergence problem, especially with link-state
> > algorithms.
>
> Not applicable to what I am advocating.
>
>
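As a sanity check on the failure-rate figure quoted above: five-nines availability corresponds to roughly 5.26 minutes of downtime per unit per year, and under the rough assumption of about one outage per unit per year, a 100,000-unit fleet does see a failure about every 5.26 minutes:

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60          # ~525,960 minutes

# Per-unit downtime budget at 99.999% ("five nines") availability.
downtime_per_unit = MINUTES_PER_YEAR * (1 - 0.99999)   # ~5.26 min/year

# Assumption (not stated in the mail): roughly one outage per unit per
# year. A fleet of 100,000 units then sees a failure about every 5.26
# minutes, matching the figure quoted in the thread.
fleet_mtbf = MINUTES_PER_YEAR / 100_000

print(round(downtime_per_unit, 2), round(fleet_mtbf, 2))
```

The same arithmetic applied to four-nines software gives ~52.6 minutes of downtime per unit per year, which is why the thread's point about massive fleets needing cluster-level failover rather than per-box recovery holds regardless of the exact per-unit outage model.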
> > If the VM can be moved, then all you need to do is install a
> > temporary redirect of packets to the new location.
>
> What is this redirection? Where do you install it? In all switching
> elements of the network? Redirection works when you encapsulate.
> Without any encapsulation, how do you redirect by touching just a
> single network element?
>
> > Each host will refresh the MAC after 15-30 seconds. If the packets
> > are redirected from the old to the new location for these 30 seconds,
> > the redirect can be aged automatically. This happens all the time in
> > mobile networks, in what is called a "fast handoff", where you
> > redirect the packets until the handoff is completed.
>
> Hmmmm, interesting. We even have a draft which can be used for such
> redirection today ... draft-rekhter-l3vpn-virtual-hub
>
> Cheers,
> R.
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc
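The temporary-redirect / fast-handoff idea debated in the quoted thread can be sketched as a small aged table. This is a toy model; the 30-second lifetime is the illustrative figure from the mail, and the API and names are assumptions:

```python
import time

# Toy redirect table: old location -> (new location, expiry time).
# Entries age out after hosts re-learn the MAC (15-30 s in the thread).
REDIRECT_LIFETIME = 30.0

redirects = {}

def install_redirect(old_loc: str, new_loc: str, now: float) -> None:
    # Installed at a single element (the old location's switch), not
    # network-wide -- the point Robert's encapsulation question probes.
    redirects[old_loc] = (new_loc, now + REDIRECT_LIFETIME)

def forward(dest: str, now: float) -> str:
    # Follow a live redirect; expired entries are dropped lazily.
    entry = redirects.get(dest)
    if entry is None:
        return dest
    new_loc, expiry = entry
    if now > expiry:
        del redirects[dest]
        return dest
    return new_loc

t0 = time.time()
install_redirect("tor-12", "tor-47", now=t0)
assert forward("tor-12", now=t0 + 1) == "tor-47"    # within lifetime
assert forward("tor-12", now=t0 + 31) == "tor-12"   # aged out
```

The sketch assumes packets for the moved VM still arrive at the old location, which is exactly where the encapsulation objection bites: without a tunnel, "forwarding to the new location" from a single element has to be expressible in the underlying forwarding plane.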

From adalela@cisco.com  Tue Jan  3 17:06:12 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id DBBBD21F8614 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 17:06:12 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.216
X-Spam-Level: 
X-Spam-Status: No, score=-2.216 tagged_above=-999 required=5 tests=[AWL=0.068,  BAYES_00=-2.599, SARE_MILLIONSOF=0.315]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id EWE-8kDC-O+I for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 17:06:12 -0800 (PST)
Received: from bgl-iport-2.cisco.com (bgl-iport-2.cisco.com [72.163.197.26]) by ietfa.amsl.com (Postfix) with ESMTP id F3F7A21F8613 for <dc@ietf.org>; Tue,  3 Jan 2012 17:06:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=7786; q=dns/txt; s=iport; t=1325639171; x=1326848771; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=EcvlrrIndd9STxdLPeBytx68flybN8o9q7tKR+fPOW8=; b=JkZlqNX4LH1uB+XLw9IEb9YLHKx6Q0TUik2jbo0FBvBlz4BOq8+yvhDC buYvOr9Tj3JTjXX6RQfjhKgEMv8Zl/NW3UHY7uBAK3JYq91v9CwHqGd7D wvjjFjwrwBNpuQQ6vbdTnWEmmOlhotMpYiXC1GvHO0+Gosjp+gzWqSzLd c=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AsAEAJ6lA09Io8UY/2dsb2JhbABDggWrYoFyAQEBBAEBAQ8BHQo0CwwEAgEIEQQBAQsGFwEGASYfCQgCBAsICBqHYJc3AZ4SBIssYwSINZ8J
X-IronPort-AV: E=Sophos;i="4.71,453,1320624000";  d="scan'208";a="2701268"
Received: from vla196-nat.cisco.com (HELO bgl-core-3.cisco.com) ([72.163.197.24]) by bgl-iport-2.cisco.com with ESMTP; 04 Jan 2012 01:06:09 +0000
Received: from xbh-bgl-411.cisco.com (xbh-bgl-411.cisco.com [72.163.129.201]) by bgl-core-3.cisco.com (8.14.3/8.14.3) with ESMTP id q04169Pu030198; Wed, 4 Jan 2012 01:06:09 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-411.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Wed, 4 Jan 2012 06:36:09 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Wed, 4 Jan 2012 06:36:03 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B25726@XMB-BGL-416.cisco.com>
In-Reply-To: <4A95BA014132FF49AE685FAB4B9F17F62A4E3F6D@dfweml505-mbx>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] comments to your new draft
Thread-Index: AQHMynNDtHsmczuvtEGfnuAwHvQHJZX7W7ew
References: <618BE8B40039924EB9AED233D4A09C5102B2527A@XMB-BGL-416.cisco.com><D96F76EF-0011-4F33-A1CF-EC9AD12BA411@gmail.com><618BE8B40039924EB9AED233D4A09C5102B25310@XMB-BGL-416.cisco.com><201201031510.q03FABS62810@magenta.juniper.net><618BE8B40039924EB9AED233D4A09C5102B2569C@XMB-BGL-416.cisco.com><4F032492.4030201@raszuk.net><618BE8B40039924EB9AED233D4A09C5102B256AC@XMB-BGL-416.cisco.com><4F03331E.9020104@raszuk.net><618BE8B40039924EB9AED233D4A09C5102B256C1@XMB-BGL-416.cisco.com> <4A95BA014132FF49AE685FAB4B9F17F62A4E3F6D@dfweml505-mbx>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Linda Dunbar" <linda.dunbar@huawei.com>
X-OriginalArrivalTime: 04 Jan 2012 01:06:09.0417 (UTC) FILETIME=[0B2DB790:01CCCA7D]
Cc: dc@ietf.org
Subject: Re: [dc] comments to your new draft
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 04 Jan 2012 01:06:13 -0000

Linda,

Please see inline.

-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
Linda Dunbar
Sent: Wednesday, January 04, 2012 5:27 AM
To: Ashish Dalela (adalela)
Cc: dc@ietf.org
Subject: [dc] comments to your new draft

Ashish,


I really like what you stated, "articulating the problem itself therefore
has become a challenge", in your draft-dalela-dc-requirements-00. In my
opinion, articulating which problems are IETF problems is probably even
harder, given all the drafts flowing to the IETF DC interim meeting.

[AD] Thanks. Except for the Network SLA problem, there is already work
going on somewhere in the IETF on each of these, so they are all
recognized to be IETF problems. The question now is: is there one "uber"
problem statement, or many "small" problem statements? Correspondingly,
does the work of these workgroups need to be pursued individually and
independently, or coordinated?

Can you elaborate more on why "The Mobile IP approach does not handle
data center mobility optimally"? Every smart phone has an IP address.
Today's Mobile IP works (even though it is not optimal).

[AD] A datacenter needs shortest paths and multiple paths. Mobile IP
gives neither.

Your section 4.1 has very similar description as in "ARMD problem
statement". When network is fragmented, you either have to enable all
subnets on all network links or re-configure addresses on all
switches/routers whenever there is any network fragmentation change.
Then not only gateway routers have to deal with traffic on all subnets
(which is ARMD problem), but all links have to carry broadcast traffic
from all subnets (which is bandwidth utilization issue).=20

[AD] Agree. I was trying to elaborate on use-cases that lead to
fragmentation.

Your section 4.3, "Data is relatively Immobile", assumes that VMs can
move anywhere without considering where the needed data storage is. That
is not true of many VM management systems. Many VM/server management
systems' algorithms do take into consideration where the needed data
storage is (and many other attributes) in determining where an
application can be hosted.

[AD] In DC contexts there are discussions about elephant and mouse
flows. What I was trying to say (but perhaps did not say clearly) is
that data immobility leads to elephant flows. These flows can't be
toggled between paths without causing packet reordering. This section
needs to be reworded, as I realize now. Thanks.

Section 5.1: broadcast is not the reason that L2 can't scale; it is that
a fragmented network can't scale. The main reason L2 can't scale is that
L2 addresses can't be aggregated.

[AD] Yes, it's both. At the control plane it is broadcast, and at the
forwarding plane it is lack of aggregation. Will update this. Thanks.

Linda Dunbar
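The aggregation point in this exchange can be made concrete with back-of-envelope numbers (the fleet sizes below are invented for illustration):

```python
# Flat L2 forwarding needs one table entry per host MAC at any device
# bridging the flood domain. With aggregatable (hierarchical or IP)
# addresses, a core device needs only one entry per edge switch/subnet.
hosts_per_switch = 250      # illustrative assumption
switches = 1_000            # illustrative assumption

flat_mac_entries = hosts_per_switch * switches  # one entry per host MAC
aggregated_entries = switches                   # one prefix per switch

print(flat_mac_entries, aggregated_entries)     # 250,000 vs 1,000
```

The 250x gap is why lack of aggregation, not broadcast alone, is the forwarding-plane scaling limit: table size grows with hosts instead of with switches.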

> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> Ashish Dalela (adalela)
> Sent: Tuesday, January 03, 2012 11:20 AM
> To: robert@raszuk.net
> Cc: Yakov Rekhter; Aldrin Isaac; dc@ietf.org
> Subject: Re: [dc] new drafts
>
> Robert,
>
> I think you and I are talking past each other.
>
> I'm talking about datacenter switching, not BGP, PE, etc. The context
> is scaling of access switches, also called Top of Rack or ToR.
> So, your comment about millions of routes and BGP assumes a core
> router, not an access switch.
>
> Reset context.
>
> Thanks, Ashish
>
>
> -----Original Message-----
> From: Robert Raszuk [mailto:robert@raszuk.net]
> Sent: Tuesday, January 03, 2012 10:26 PM
> To: Ashish Dalela (adalela)
> Cc: Yakov Rekhter; dc@ietf.org; Aldrin Isaac
> Subject: Re: [dc] new drafts
>
> Ashish,
>
> > Assume a simple case that under a switch there are 250 VMs, split
> > amongst 10 customers. Each customer has a unique VRF. Normally, we
> > would have advertised a /24 route for that switch. In this case your
> > routes to a single switch are segmented and there are 10 VRFs, and
> > you will very likely have 250 route table entries total, segmented
> > by VRF-id. That's routing table bloat from 1 entry to 250 entries.
> > This happens everywhere. I have assumed public IP addressing, but
> > the same thing will happen for private addressing as well.
>=20
> Normally in this case I would have 10 routes + 1 route for switch
> loopback not 250. However when those VMs will start moving between PEs
> you are right that worse case one could end up would be 250 non
> aggregetable routes per VRF.
>=20
> First I don't think this is a problem scaling wise today as I would
not
> assume that everyone will be moving.
>=20
> Second we know today how to handle millions of routes in BGP.
>=20
> Third I am not saying that this model should be used.
>=20
> I am advocating that a hierarchical IP in IP model should be used.
VRFs
> on those PEs could be used for isolation purposes. And any VM move
> needs
>=20
> to be only reflected in mapping plane and not in routing
infrastructure
> of the network.
>=20
>=20
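[Editor's note] A quick sketch of the arithmetic the two sides are using, with the figures taken from the quoted example (illustrative only):

```python
# Figures from the quoted example: 250 VMs under one ToR, 10 customer VRFs.
vms = 250
vrfs = 10

# Without per-tenant segmentation: one aggregated /24 per switch.
aggregated_routes = 1

# Ashish's worst case: routes segmented by VRF, one host route per VM.
worst_case_routes = vms

# Robert's baseline (no VM movement): one route per VRF, plus the switch
# loopback.
no_mobility_routes = vrfs + 1

print(aggregated_routes, worst_case_routes, no_mobility_routes)  # 1 250 11
```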
> > Then, typically the number of VRFs you can support on a router is
> > about 4K. These numbers of VRFs have to be supported at the access,
> > so you have to assume this is the limit from the access viewpoint.
>
> Nope ... Control-plane VRFs have no bound; the 4K limit comes from
> platform limitations.
>
> Hint: Think about control plane and data plane separation. Pedro's
> draft already provides an example of how such separation can be
> accomplished.
>
> > Then, at massive scale, the failure rates are also massive. At five
> > nines reliability, a hardware entity out of 100,000 will fail every
> > 5.25 minutes. Access switches don't have high availability. Software
> > fails even faster - an OS is generally four nines, which means one
> > out of 10,000 fails every 5.25 minutes. At millions of instances of
> > such entities, failures happen rapidly. You only have to look at the
> > massive datacenters run by Web 2.0 companies today, and they all
> > echo this view. They basically form clusters of the same
> > application. Software moves the workload from one cluster to
> > another; the whole cluster can fail over. That's not what you do in
> > a consumer cloud, where you have to recover. At massive failure
> > rates and rapid recovery rates, you are moving things around and
> > injecting host routes for reachability. It's a convergence problem,
> > especially with link-state algorithms.
>
> Not applicable to what I am advocating.
>
>
> > If the VM can be moved, then all you need to do is install a
> > temporary redirect of packets to the new location.
>
> What is this redirection? Where do you install it? In all switching
> elements of the network? Redirection works when you encapsulate.
> Without any encapsulation, how do you redirect by touching just a
> single network element?
>
> > Each host will refresh the MAC after 15-30 seconds. If the packets
> > are redirected from the old to the new location for these 30
> > seconds, the redirect can be aged out automatically. This happens
> > all the time in mobile networks in what is called a "fast handoff",
> > where you redirect the packets until the handoff is completed.
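[Editor's note] A minimal sketch of the aging redirect being described; the class and names are hypothetical, and the 30-second window is the host refresh interval cited in the quoted text:

```python
import time

REDIRECT_TTL = 30.0  # seconds; matches the MAC refresh window cited above

class RedirectTable:
    """Temporary redirects from a VM's old location to its new one."""

    def __init__(self):
        self._entries = {}  # old_location -> (new_location, expiry time)

    def install(self, old, new, now=None):
        now = time.monotonic() if now is None else now
        self._entries[old] = (new, now + REDIRECT_TTL)

    def lookup(self, dest, now=None):
        """Return the redirected destination, or dest itself once aged out."""
        now = time.monotonic() if now is None else now
        entry = self._entries.get(dest)
        if entry and now < entry[1]:
            return entry[0]
        self._entries.pop(dest, None)  # lazily age out expired redirects
        return dest
```

By the time the entry expires, hosts have re-learned the new location, so no permanent state is left behind; this mirrors the mobile-network "fast handoff" analogy.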
>
> Hmmm, interesting. We even have a draft which can be used for such
> redirection today ... draft-rekhter-l3vpn-virtual-hub
>=20
> Cheers,
> R.
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc
_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From adalela@cisco.com  Tue Jan  3 17:08:02 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 6F05321F8602 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 17:08:02 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.376
X-Spam-Level: 
X-Spam-Status: No, score=-2.376 tagged_above=-999 required=5 tests=[AWL=0.223,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id lw3fnnaTL0S6 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 17:08:00 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id 3E4E121F8600 for <dc@ietf.org>; Tue,  3 Jan 2012 17:08:00 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=2284; q=dns/txt; s=iport; t=1325639280; x=1326848880; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=LGkD+6bbKd7z/vGs61ZyxOA0Am6cH9ORGK8joVrPTpw=; b=OkB4WHj6TOzq7rrhwDYGpZPEXxJje9KY14SH8LDt5AWoPD+RV9JlsIfw btXwE4hQYYOZOuvruKdxmxzg1GlqNvflIP02/WnxB26fRgK+JfkN5he5y cVXlUhGL1Yf1V6MsXoEpO+rJvsw9ACu8bzR7DK6ZIusiOMlfEpXCzUTKO Q=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AsQEAFelA09Io8UY/2dsb2JhbAA5BwMWgW+DC6ZLggyBcgEBAQIBARIBEA0ERQUHBAIBCA4DBAEBAwIGBhcBAgICAQFECQgBAQQLCAgTB4dYCJc2AYxbkTuBL4cnGoIJM2MEiDWfCQ
X-IronPort-AV: E=Sophos;i="4.71,453,1320624000";  d="scan'208";a="2706633"
Received: from vla196-nat.cisco.com (HELO bgl-core-1.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 04 Jan 2012 01:07:58 +0000
Received: from xbh-bgl-412.cisco.com (xbh-bgl-412.cisco.com [72.163.129.202]) by bgl-core-1.cisco.com (8.14.3/8.14.3) with ESMTP id q0417wZA016939; Wed, 4 Jan 2012 01:07:58 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-412.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Wed, 4 Jan 2012 06:37:58 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Date: Wed, 4 Jan 2012 06:37:55 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B25727@XMB-BGL-416.cisco.com>
In-Reply-To: <1325629393.2398.8.camel@ecliptic.extremenetworks.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczKZktx/XlWOlm5Q9u5cqF2vltc4wAFtYiA
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com> <618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com> <4EFC947A.4020007@riw.us> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com> <CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com> <CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com> <4F030418.1070202@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com> <4F031689.1050303@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com> <1325629393.2398.8.camel@ecliptic.extremenetworks.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Steven Blake" <sblake@extremenetworks.com>
X-OriginalArrivalTime: 04 Jan 2012 01:07:58.0750 (UTC) FILETIME=[4C589BE0:01CCCA7D]
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 04 Jan 2012 01:08:02 -0000

Yes, there is also PORTLAND. http://cseweb.ucsd.edu/~vahdat/papers/portland-sigcomm09.pdf
Thanks, Ashish


-----Original Message-----
From: Steven Blake [mailto:sblake@extremenetworks.com]
Sent: Wednesday, January 04, 2012 3:53 AM
To: Ashish Dalela (adalela)
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect

On Tue, 2012-01-03 at 08:15 -0800, Ashish Dalela (adalela) wrote:

> Robert,
>
> Please see inline.
>
> -----Original Message-----
> From: Robert Raszuk [mailto:robert@raszuk.net]
> Sent: Tuesday, January 03, 2012 8:24 PM
> To: Ashish Dalela (adalela)
> Cc: Pedro Marques; dc@ietf.org
> Subject: Re: [dc] [armd] IP over IP solution for data center
> interconnect
>
> Ashish,
>
> OK, let's just discuss what is in your draft on Hierarchical Addressing.
>
> 1. You have 48 bits: 32 go for the host, the remaining 16 go for
> switches. How do you aggregate at the TOR or AGGR switch boundary? Are
> you assuming a single HOST - SWITCH with max 65K flat MACs?
>
> [AD] The higher bits identify a switch - it's a switch-id. The hosts are
> dynamically assigned a host-id under that switch. Let's assume 23 bits
> are for switch-id and 23 bits for host-id. To forward a packet to the
> host, you only have to look at the first 23 bits. That's a MAC prefix to
> route against.

[snip]

FYI, MOOSE is a detailed proposal for hierarchical MAC addressing,
resolution, and forwarding:

http://www.cl.cam.ac.uk/~mas90/MOOSE/

Regards,

///////////////////////////////////////////
Steven Blake       sblake@extremenetworks.com
Extreme Networks              +1 919-884-3211

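[Editor's note] The hierarchical MAC addressing discussed in this message (a switch-id prefix plus a host-id, with core forwarding on the prefix alone) can be sketched as follows; the 23/23-bit split is the one assumed in the [AD] reply, and the example addresses are made up:

```python
# Hypothetical sketch of forwarding on a hierarchical 46-bit address:
# the top 23 bits name the switch, the low 23 bits name a host under it.
SWITCH_BITS = 23
HOST_BITS = 23

def make_addr(switch: int, host: int) -> int:
    """Compose a hierarchical address from a switch-id and a host-id."""
    return (switch << HOST_BITS) | host

def switch_id(addr: int) -> int:
    """Extract the switch-id prefix; core switches route on this alone."""
    return addr >> HOST_BITS

addr = make_addr(0x1A2B, 0x0042)
print(hex(switch_id(addr)))  # 0x1a2b: the prefix routed against
```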

From xuxiaohu@huawei.com  Tue Jan  3 19:12:10 2012
Return-Path: <xuxiaohu@huawei.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 7523B21F8503 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 19:12:10 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -4.194
X-Spam-Level: 
X-Spam-Status: No, score=-4.194 tagged_above=-999 required=5 tests=[AWL=1.490,  BAYES_00=-2.599, J_CHICKENPOX_13=0.6, RCVD_IN_DNSWL_MED=-4, SARE_MILLIONSOF=0.315]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id F3eknu4SEzOd for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 19:12:09 -0800 (PST)
Received: from szxga04-in.huawei.com (szxga04-in.huawei.com [119.145.14.67]) by ietfa.amsl.com (Postfix) with ESMTP id 0703A21F84EC for <dc@ietf.org>; Tue,  3 Jan 2012 19:12:09 -0800 (PST)
Received: from huawei.com (szxga04-in [172.24.2.12]) by szxga04-in.huawei.com (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LX900HKY7K8E6@szxga04-in.huawei.com> for dc@ietf.org; Wed, 04 Jan 2012 11:12:08 +0800 (CST)
Received: from szxrg02-dlp.huawei.com ([172.24.2.119]) by szxga04-in.huawei.com (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LX90078D7K862@szxga04-in.huawei.com> for dc@ietf.org; Wed, 04 Jan 2012 11:12:08 +0800 (CST)
Received: from szxeml207-edg.china.huawei.com ([172.24.2.119]) by szxrg02-dlp.huawei.com (MOS 4.1.9-GA)	with ESMTP id AGB89260; Wed, 04 Jan 2012 11:12:06 +0800
Received: from SZXEML413-HUB.china.huawei.com (10.82.67.152) by szxeml207-edg.china.huawei.com (172.24.2.59) with Microsoft SMTP Server (TLS) id 14.1.323.3; Wed, 04 Jan 2012 11:12:03 +0800
Received: from SZXEML525-MBS.china.huawei.com ([169.254.8.55]) by szxeml413-hub.china.huawei.com ([10.82.67.152]) with mapi id 14.01.0323.003; Wed, 04 Jan 2012 11:11:59 +0800
Date: Wed, 04 Jan 2012 03:11:58 +0000
From: Xuxiaohu <xuxiaohu@huawei.com>
In-reply-to: <40F3FB9C-CBCB-41ED-A1E7-FB99DB3A928D@asgaard.org>
X-Originating-IP: [10.108.4.80]
To: Christopher LILJENSTOLPE <cdl@asgaard.org>
Message-id: <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE764022@szxeml525-mbs.china.huawei.com>
MIME-version: 1.0
Content-type: text/plain; charset=utf-8
Content-language: zh-CN
Content-transfer-encoding: base64
Accept-Language: zh-CN, en-US
Thread-topic: [dc] Elevator Pitch
Thread-index: AQHMw1wuBgE5G7BjGEyEvGVWBnVRbZXw96YAgAF/L4CAABiKgIABPS+QgAacgwCAAStPwA==
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
X-CFilter-Loop: Reflected
References: <13205C286662DE4387D9AF3AC30EF456D74E7BD952@EMBX01-WF.jnpr.net> <6665BC1FEA04AB47B1F75FA641C43BC0926D106C@FHDP1LUMXC7V41.us.one.verizon.com> <201112211830.pBLIUiZA017188@cichlid.raleigh.ibm.com> <13205C286662DE4387D9AF3AC30EF456D74E91DBFD@EMBX01-WF.jnpr.net> <618BE8B40039924EB9AED233D4A09C5102A4881E@XMB-BGL-416.cisco.com> <4EF7B019.3030202@riw.us> <201112281700.pBSH0kB2011575@cichlid.raleigh.ibm.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE76387F@szxeml525-mbs.china.huawei.com> <229A8E99-6EB2-49E5-B530-BA0F6C7C40AC@asgaard.org> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE7639B3@szxeml525-mbs.china.huawei.com> <40F3FB9C-CBCB-41ED-A1E7-FB99DB3A928D@asgaard.org>
Cc: Thomas Narten <narten@us.ibm.com>, Russ White <russw@riw.us>, "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] Elevator Pitch
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 04 Jan 2012 03:12:10 -0000


> -----Original Message-----
> From: Christopher LILJENSTOLPE [mailto:cdl@asgaard.org]
> Sent: January 4, 2012 1:13
> To: Xuxiaohu
> Cc: Thomas Narten; Russ White; dc@ietf.org
> Subject: Re: [dc] Elevator Pitch
>
> Greetings,
>
>
> On 29Dec2011, at 20.01, Xuxiaohu wrote:
>
> >
> >> -----Original Message-----
> >> From: Christopher LILJENSTOLPE [mailto:cdl@asgaard.org]
> >> Sent: December 30, 2011 1:20
> >> To: Xuxiaohu
> >> Cc: Thomas Narten; Russ White; dc@ietf.org
> >> Subject: Re: [dc] Elevator Pitch
> >>
> >> Greetings Xuxiaohu,
> >>
> >> On 29Dec2011, at 00.55, Xuxiaohu wrote:
> >>
> >>> Hi Thomas,
> >>>
> >>>> -----Original Message-----
> >>>> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Thomas
> >>>> Narten
> >>>> Sent: December 29, 2011 1:01
> >>>> To: Russ White
> >>>> Cc: dc@ietf.org
> >>>> Subject: Re: [dc] Elevator Pitch
> >>>>
> <snip>
>
> > Hi Chris,
> >
> > Would you please give a concrete example where the communication between
> > different tenants is very common in the multi-tenant cloud data center?
>
> Here's the question - if you are a tenant in a data center, and you are
> writing a mash-up application against some other content provider, how do
> you know if they are in the same data center, or not?  My guess is that
> there is quite a bit of traffic between tenants of EC2, btw.  I know that
> that was the intent at my last gig - we wanted SaaS-like providers to live
> in our DC's and develop an eco-system around our core services.  Other
> major cross-dc traffic - how about all of my services like spam filtering,
> backup, etc.  In a DC, I may call them "core services" but they are, in
> fact, another tenant.  How about large scale content providers that mash-up
> between their own offerings.  Many of those properties are viewed as
> "separate customers" by the infrastructure teams (can't name names here).
> Any inter-offering mash-ups would definitely be cross-dc.

Hi Chris,

I have no doubt about the possibility you mentioned above. However, once you
consider optimizing the forwarding path of inter-tenant traffic within the
scope of L2VPN or L3VPN solutions, while taking address-space overlap,
firewall policy issues, etc. into account, you will find it is a much more
complex job. Whether or not it is worthwhile to do that optimization depends
heavily on whether the volume of inter-tenant traffic is significant.

Best regards,
Xiaohu

> > Best regards,
> > Xiaohu
> >
> >>>
> >>>
> >>>> Or will they want an alternative approach?
> >>>>
> >>>>> 2. Why does this mobility need to be at layer 2 specifically? Are we
> >>>>> assuming DDNS and other sorts of solutions in this space will simply
> >>>>> never be fast enough/scale far enough/etc?
> >>>>
> >>>> Like it or not, the key requirement for VM mobility is that the VM's
> >>>> IP address does not change. That means the VM can't really move from
> >>>> one IP subnet to another. That means either moving to bigger and
> >>>> bigger L2s (all under one IP subnet) as the DC expands or the need to
> >>>> inject /32 host routes.
> >>>
> >>> In the DCI scenario where the PE routers are usually performed at the
> >>> aggregation SWs or even core SWs, the PE routers would need a much
> >>> larger forwarding table. Provided the routing table containing millions
> >>> of entries, which is available on most of today's high-end routers, was
> >>> still not large enough, the on-demand FIB installation or on-demand
> >>> route announcement mechanisms can be used further to scale the
> >>> solution. Note that the trigger for the FIB installation or route
> >>> announcement is ARP request packets rather than data packets. Hence it
> >>> will not cause the so-called initial packet loss or latency issue.
> >>>
> >>>> Neither of those approaches seems particularly scalable/desirable if
> >>>> you look 10 years down the road and think of 1M+ physical machines in
> >>>> a DC.
> >>>
> >>> Maybe we should also take the development speed of routing/switching
> >>> chip and CPU technologies into account:)
> >>
> >> It's more a question of cost/performance on off-chip memory/TCAMs.
> >> That is a slightly different curve :)
> >>
> >> 	Chris
> >>
> >>>
> >>> Best regards,
> >>> Xiaohu
> >>>
> >>>> Thomas
> >>>>
> >>>> _______________________________________________
> >>>> dc mailing list
> >>>> dc@ietf.org
> >>>> https://www.ietf.org/mailman/listinfo/dc
> >>> _______________________________________________
> >>> dc mailing list
> >>> dc@ietf.org
> >>> https://www.ietf.org/mailman/listinfo/dc
> >>
> >> --
> >> 李柯睿
> >> Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
> >> Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf
> >
> > _______________________________________________
> > dc mailing list
> > dc@ietf.org
> > https://www.ietf.org/mailman/listinfo/dc
>
> --
> 李柯睿
> Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
> Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf


From xuxiaohu@huawei.com  Tue Jan  3 19:22:37 2012
Return-Path: <xuxiaohu@huawei.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 2105111E8087 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 19:22:37 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -4.594
X-Spam-Level: 
X-Spam-Status: No, score=-4.594 tagged_above=-999 required=5 tests=[AWL=1.690,  BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, SARE_MILLIONSOF=0.315]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 0C0beCMtLqZD for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 19:22:36 -0800 (PST)
Received: from szxga04-in.huawei.com (szxga04-in.huawei.com [119.145.14.67]) by ietfa.amsl.com (Postfix) with ESMTP id B56CA11E8073 for <dc@ietf.org>; Tue,  3 Jan 2012 19:22:35 -0800 (PST)
Received: from huawei.com (szxga04-in [172.24.2.12]) by szxga04-in.huawei.com (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LX900H1L7Y45K@szxga04-in.huawei.com> for dc@ietf.org; Wed, 04 Jan 2012 11:20:28 +0800 (CST)
Received: from szxrg01-dlp.huawei.com ([172.24.2.119]) by szxga04-in.huawei.com (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LX9007W97XX62@szxga04-in.huawei.com> for dc@ietf.org; Wed, 04 Jan 2012 11:20:28 +0800 (CST)
Received: from szxeml201-edg.china.huawei.com ([172.24.2.119]) by szxrg01-dlp.huawei.com (MOS 4.1.9-GA)	with ESMTP id AGE20048; Wed, 04 Jan 2012 11:20:27 +0800
Received: from SZXEML401-HUB.china.huawei.com (10.82.67.31) by szxeml201-edg.china.huawei.com (172.24.2.39) with Microsoft SMTP Server (TLS) id 14.1.323.3; Wed, 04 Jan 2012 11:20:23 +0800
Received: from SZXEML525-MBS.china.huawei.com ([169.254.8.55]) by szxeml401-hub.china.huawei.com ([::1]) with mapi id 14.01.0323.003; Wed, 04 Jan 2012 11:20:16 +0800
Date: Wed, 04 Jan 2012 03:20:16 +0000
From: Xuxiaohu <xuxiaohu@huawei.com>
In-reply-to: <FDC18641-F09F-40D6-90C7-8A3152DE108B@asgaard.org>
X-Originating-IP: [10.108.4.80]
To: Christopher LILJENSTOLPE <cdl@asgaard.org>
Message-id: <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE76403A@szxeml525-mbs.china.huawei.com>
MIME-version: 1.0
Content-type: text/plain; charset=utf-8
Content-language: zh-CN
Content-transfer-encoding: base64
Accept-Language: zh-CN, en-US
Thread-topic: [dc] Elevator Pitch
Thread-index: AQHMw1wuBgE5G7BjGEyEvGVWBnVRbZXw96YAgAF/L4CAABiKgIABPS+QgABdsOCABjx0gIABLVCA
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
X-CFilter-Loop: Reflected
References: <13205C286662DE4387D9AF3AC30EF456D74E7BD952@EMBX01-WF.jnpr.net> <6665BC1FEA04AB47B1F75FA641C43BC0926D106C@FHDP1LUMXC7V41.us.one.verizon.com> <201112211830.pBLIUiZA017188@cichlid.raleigh.ibm.com> <13205C286662DE4387D9AF3AC30EF456D74E91DBFD@EMBX01-WF.jnpr.net> <618BE8B40039924EB9AED233D4A09C5102A4881E@XMB-BGL-416.cisco.com> <4EF7B019.3030202@riw.us> <201112281700.pBSH0kB2011575@cichlid.raleigh.ibm.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE76387F@szxeml525-mbs.china.huawei.com> <229A8E99-6EB2-49E5-B530-BA0F6C7C40AC@asgaard.org> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE7639E9@szxeml525-mbs.china.huawei.com> <FDC18641-F09F-40D6-90C7-8A3152DE108B@asgaard.org>
Cc: Thomas Narten <narten@us.ibm.com>, Russ White <russw@riw.us>, "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] Elevator Pitch
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 04 Jan 2012 03:22:37 -0000


> -----Original Message-----
> From: Christopher LILJENSTOLPE [mailto:cdl@asgaard.org]
> Sent: January 4, 2012 1:05
> To: Xuxiaohu
> Cc: Thomas Narten; Russ White; dc@ietf.org
> Subject: Re: [dc] Elevator Pitch
>
>
> On 30Dec2011, at 01.45, Xuxiaohu wrote:
>
> >> -----Original Message-----
> >> From: Xuxiaohu
> >> Sent: December 30, 2011 12:17
> >> To: 'Christopher LILJENSTOLPE'
> >> Cc: Thomas Narten; Russ White; dc@ietf.org
> >> Subject: re: [dc] Elevator Pitch
> >>
> >>
> >>> -----Original Message-----
> >>> From: Christopher LILJENSTOLPE [mailto:cdl@asgaard.org]
> >>> Sent: December 30, 2011 1:20
> >>> To: Xuxiaohu
> >>> Cc: Thomas Narten; Russ White; dc@ietf.org
> >>> Subject: Re: [dc] Elevator Pitch
> >>>
> >>> Greetings Xuxiaohu,
> >>>
> >>> On 29Dec2011, at 00.55, Xuxiaohu wrote:
> >>>
> >>>> Hi Thomas,
> >>>>
> >>>>> -----Original Message-----
> >>>>> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Thomas
> >>>>> Narten
> >>>>> Sent: December 29, 2011 1:01
> >>>>> To: Russ White
> >>>>> Cc: dc@ietf.org
> >>>>> Subject: Re: [dc] Elevator Pitch
> >>>>>
> <snip - it was getting long>
>
> > Hi Chris,
> >
> > By the way, in the MAC over IP solution, the forwarding table of the leaf
> > nodes contains the MAC routes to hosts and the gateways; similarly, in the
> > IP over IP solution, the forwarding table of leaf nodes contains the
> > routes to hosts and one or more default routes to the gateways. If you
> > believe the default route directed to the gateway in the IP over IP
> > solution is not enough for forwarding the cross-data-center traffic, does
> > that mean the MAC over IP solution is totally unworkable, since there is
> > only one MAC route directed to the gateway for forwarding the
> > cross-data-center traffic?
>
> Greetings Xiaohu,
>
> I didn't actually state that IP over IP or MAC over IP were unworkable - I
> think that is a network design decision, and the answer will be different
> for different networks.  MACs being non-aggregatable does make it a bit
> more "interesting" than IP over IP, however.
>
> What I am saying is that if all "east-west" traffic in the DC has to
> aggregate up to some level of core via a tree (read default route)
> topology, that would lead to some interesting scale issues (actually, it
> already does).  There is a reason that large networks tend not to run
> default for edge-to-edge connectivity (i.e. backbones) - almost always they
> are incomplete meshes.  The fact that we continue to use trees in the data
> center is an artifact of their enterprise origins, and the historical
> north-south traffic patterns.  I think the reason you are seeing so much
> activity now is due, in part, to the fact that the tree is starting to
> creak, and the north-south assumption is no longer valid.

Hi Chris,

I agree with your opinions above. However, the question is whether the
majority of the east-west traffic is intra-tenant or inter-tenant.

Best regards,
Xiaohu

> 	Chris
>
> >
> > Best regards,
> > Xiaohu
> >
> >> Hi Chris,
> >>
> >> Would you please give a concrete example where the communication between
> >> different tenants is very common in the multi-tenant cloud data center?
> >>
> >> Best regards,
> >> Xiaohu
> >>
> >>>>
> >>>>
> >>>>> Or will they want an alternative approach?
> >>>>>
> >>>>>> 2. Why does this mobility need to be at layer 2 specifically? Are we
> >>>>>> assuming DDNS and other sorts of solutions in this space will simply
> >>>>>> never be fast enough/scale far enough/etc?
> >>>>>
> >>>>> Like it or not, the key requirement for VM mobility is that the VM's
> >>>>> IP address does not change. That means the VM can't really move from
> >>>>> one IP subnet to another. That means either moving to bigger and
> >>>>> bigger L2s (all under one IP subnet) as the DC expands or the need to
> >>>>> inject /32 host routes.
> >>>>
> >>>> In the DCI scenario where the PE routers are usually performed at the
> >>>> aggregation SWs or even core SWs, the PE routers would need a much
> >>>> larger forwarding table. Provided the routing table containing
> >>>> millions of entries, which is available on most of today's high-end
> >>>> routers, was still not large enough, the on-demand FIB installation or
> >>>> on-demand route announcement mechanisms can be used further to scale
> >>>> the solution. Note that the trigger for the FIB installation or route
> >>>> announcement is ARP request packets rather than data packets. Hence it
> >>>> will not cause the so-called initial packet loss or latency issue.
> >>>>
> >>>>> Neither of those approaches seems particularly scalable/desirable if
> >>>>> you look 10 years down the road and think of 1M+ physical machines in
> >>>>> a DC.
> >>>>
> >>>> Maybe we should also take the development speed of routing/switching
> >>>> chip and CPU technologies into account:)
> >>>
> >>> It's more a question of cost/performance on off-chip memory/TCAMs.
> >>> That is a slightly different curve :)
> >>>
> >>> 	Chris
> >>>
> >>>>
> >>>> Best regards,
> >>>> Xiaohu
> >>>>
> >>>>> Thomas
> >>>>>
> >>>>> _______________________________________________
> >>>>> dc mailing list
> >>>>> dc@ietf.org
> >>>>> https://www.ietf.org/mailman/listinfo/dc
> >>>> _______________________________________________
> >>>> dc mailing list
> >>>> dc@ietf.org
> >>>> https://www.ietf.org/mailman/listinfo/dc
> >>>
> >>> --
> >>> 李柯睿
> >>> Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
> >>> Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf
> >
> > _______________________________________________
> > dc mailing list
> > dc@ietf.org
> > https://www.ietf.org/mailman/listinfo/dc
>
> --
> 李柯
552/DQo+IENoZWNrIG15IFBHUCBrZXkgaGVyZTogaHR0cHM6Ly93d3cuYXNnYWFyZC5vcmcvfmNk
bC9jZGwuYXNjDQo+IEN1cnJlbnQgdkNhcmQgaGVyZTogaHR0cHM6Ly93d3cuYXNnYWFyZC5vcmcv
fmNkbC9jZGwudmNmDQoNCg==
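The ARP-triggered on-demand FIB installation discussed in the message above can be sketched roughly as follows. This is a minimal simulation under stated assumptions, not any router's actual implementation; all names here (OnDemandFib, rib, fib) are invented for illustration:

```python
# Sketch: a PE installs a /32 host route into its small FIB when it sees
# an ARP request for the target, so the first data packet already finds
# an entry -- avoiding the "initial packet loss" of data-triggered
# installation. Illustrative only; names are not from any real router API.

class OnDemandFib:
    def __init__(self, rib):
        self.rib = rib          # full routing table: ip -> next hop
        self.fib = {}           # small forwarding table, filled on demand

    def on_arp_request(self, target_ip):
        # The ARP request, not a data packet, is the installation trigger.
        if target_ip in self.rib:
            self.fib[target_ip] = self.rib[target_ip]

    def forward(self, dst_ip):
        # Data packets consult only the (small) FIB.
        return self.fib.get(dst_ip)  # None -> would be dropped or punted

rib = {"10.0.1.5": "pe2", "10.0.2.9": "pe3"}
fib = OnDemandFib(rib)
fib.on_arp_request("10.0.1.5")   # host ARPs before sending data
print(fib.forward("10.0.1.5"))   # entry present before the first data packet
print(fib.forward("10.0.2.9"))   # never ARPed, so not installed
```

The point of the sketch is the ordering: because hosts must ARP before sending data, the host route is in place by the time traffic arrives.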

From xuxiaohu@huawei.com  Tue Jan  3 19:51:53 2012
Return-Path: <xuxiaohu@huawei.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 1161021F8463 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 19:51:53 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.428
X-Spam-Level: 
X-Spam-Status: No, score=-2.428 tagged_above=-999 required=5 tests=[AWL=-0.686, BAYES_00=-2.599, CN_BODY_35=0.339, MIME_BASE64_TEXT=1.753, MIME_CHARSET_FARAWAY=2.45, RCVD_IN_DNSWL_MED=-4, SARE_MILLIONSOF=0.315]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id H2bwha8ZnK6S for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 19:51:52 -0800 (PST)
Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [119.145.14.64]) by ietfa.amsl.com (Postfix) with ESMTP id 67B5F21F8461 for <dc@ietf.org>; Tue,  3 Jan 2012 19:51:52 -0800 (PST)
Received: from huawei.com (szxga05-in [172.24.2.49]) by szxga05-in.huawei.com (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LX900G2S9CPUF@szxga05-in.huawei.com> for dc@ietf.org; Wed, 04 Jan 2012 11:50:49 +0800 (CST)
Received: from szxrg02-dlp.huawei.com ([172.24.2.119]) by szxga05-in.huawei.com (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LX9004KJ9CP0Q@szxga05-in.huawei.com> for dc@ietf.org; Wed, 04 Jan 2012 11:50:49 +0800 (CST)
Received: from szxeml207-edg.china.huawei.com ([172.24.2.119]) by szxrg02-dlp.huawei.com (MOS 4.1.9-GA)	with ESMTP id AGB92564; Wed, 04 Jan 2012 11:50:49 +0800
Received: from SZXEML423-HUB.china.huawei.com (10.82.67.162) by szxeml207-edg.china.huawei.com (172.24.2.59) with Microsoft SMTP Server (TLS) id 14.1.323.3; Wed, 04 Jan 2012 11:50:46 +0800
Received: from SZXEML525-MBS.china.huawei.com ([169.254.8.55]) by szxeml423-hub.china.huawei.com ([10.82.67.162]) with mapi id 14.01.0323.003; Wed, 04 Jan 2012 11:50:38 +0800
Date: Wed, 04 Jan 2012 03:50:38 +0000
From: Xuxiaohu <xuxiaohu@huawei.com>
In-reply-to: <201201031432.q03EWhS44922@magenta.juniper.net>
X-Originating-IP: [10.108.4.80]
To: Yakov Rekhter <yakov@juniper.net>
Message-id: <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE76405C@szxeml525-mbs.china.huawei.com>
MIME-version: 1.0
Content-type: text/plain; charset=gb2312
Content-language: zh-CN
Content-transfer-encoding: base64
Accept-Language: zh-CN, en-US
Thread-topic: [dc] Elevator Pitch
Thread-index: AQHMyiUQJdDgXrBVQU6qluwoM7cX9pX7li1w
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
X-CFilter-Loop: Reflected
References: <13205C286662DE4387D9AF3AC30EF456D74E7BD952@EMBX01-WF.jnpr.net> <6665BC1FEA04AB47B1F75FA641C43BC0926D106C@FHDP1LUMXC7V41.us.one.verizon.com> <201112211830.pBLIUiZA017188@cichlid.raleigh.ibm.com> <13205C286662DE4387D9AF3AC30EF456D74E91DBFD@EMBX01-WF.jnpr.net> <618BE8B40039924EB9AED233D4A09C5102A4881E@XMB-BGL-416.cisco.com> <4EF7B019.3030202@riw.us> <201112281700.pBSH0kB2011575@cichlid.raleigh.ibm.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE76387F@szxeml525-mbs.china.huawei.com> <229A8E99-6EB2-49E5-B530-BA0F6C7C40AC@asgaard.org> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE7639B3@szxeml525-mbs.china.huawei.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE763ACD@szxeml525-mbs.china.huawei.com> <201201031432.q03EWhS44922@magenta.juniper.net>
Cc: "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] Elevator Pitch
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 04 Jan 2012 03:51:53 -0000

Hi Yakov,

> -----Original Message-----
> From: Yakov Rekhter [mailto:yakov@juniper.net]
> Sent: January 3, 2012 22:33
> To: Xuxiaohu
> Cc: dc@ietf.org
> Subject: Re: [dc] Elevator Pitch
> 
> Xuxiaohu,
> 
> > Hi all,
> >
> > Since there are some differences in the problems and requirements
> > between the data center network (DCN) and data center interconnect
> > (DCI), I try to list several problems and requirements for DCN and
> > DCI separately as follows. Here the data centers mainly refer to
> > those multi-tenant data centers which are operated by public cloud
> > providers to deliver cloud services (i.e., IaaS) to their customers
> > (i.e., tenants).
> >
> > 1. DCN problems and requirements:
> >
> > 1) VM mobility across multiple pods -> LAN/subnet extension across pods
> >
> > 2) Some cluster applications use non-IP or link-local multicast (optional) ->
> >    Layer 2 networking
> >
> > 3) Multi-tenancy isolation -> VPN/VLAN instance scalability
> >
> > 4) Millions of VMs -> MAC/IP forwarding table scalability
> >
> > 5) Increasing bandwidth demands for server-to-server connectivity
> >    (i.e., east-west traffic) -> ECMP and shortest-path forwarding capabilities
> >
> > 6) Network resiliency -> Fast convergence and multi-homing
> 
> Do you need fast routing convergence, or fast connectivity restoration?

Both.

> 
> > 7) Thousands of network devices -> Simplified provisioning and operation
> >
> >
> >
> > 2. DCI problems and requirements:
> >
> > 1) VM mobility across data centers -> LAN/subnet extension across
> >    data centers
> >
> > 2) Multi-tenancy isolation -> VLAN/VPN instance scalability
> >
> > 3) Millions of VMs -> MAC/IP forwarding table scalability
> >
> > 4) Optimal utilization of WAN bandwidth resources -> Unknown unicast
> >    and ARP broadcast suppression
> >
> > 5) Network resiliency -> Fast convergence and multi-homing
> 
> Do you need fast routing convergence, or fast connectivity restoration?

Both.

Best regards,
Xiaohu

> > 6) Load-balancing across data centers -> Active-active DC exits
> >
> > 7) Suboptimal paths caused by LAN/subnet extension across data centers ->
> >    Path optimization for both VPN access and Internet access
> 
> Yakov.

From aldrin.isaac@gmail.com  Tue Jan  3 20:54:55 2012
Return-Path: <aldrin.isaac@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 6ABDB11E8083 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 20:54:55 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.142
X-Spam-Level: 
X-Spam-Status: No, score=-3.142 tagged_above=-999 required=5 tests=[AWL=0.457,  BAYES_00=-2.599, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id HS8yoR4sIX9U for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 20:54:55 -0800 (PST)
Received: from mail-qy0-f172.google.com (mail-qy0-f172.google.com [209.85.216.172]) by ietfa.amsl.com (Postfix) with ESMTP id CE5A411E8081 for <dc@ietf.org>; Tue,  3 Jan 2012 20:54:54 -0800 (PST)
Received: by qcsf15 with SMTP id f15so12106958qcs.31 for <dc@ietf.org>; Tue, 03 Jan 2012 20:54:53 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=subject:mime-version:content-type:from:in-reply-to:date:cc :content-transfer-encoding:message-id:references:to:x-mailer; bh=cCZOZ3KzqcnpmuJdSoLW+p++sB9drCVQbD/2K0qmYiE=; b=vN2ZU8YdUIH9K73sAoUHa16jY3P2gSRjNKNoBKs4yllLB88sY+bXQ+a3pN2B6sKQ71 gqtn67VKGG6844+/skrP03FwQQ8k6m/D5WwIP5NvBbi6gM9ne5sUJk6IJOsTFeS47cBC 5Uwg+uVyqd5BvpGCZXuBDx8iew9SIL0XkTLFE=
Received: by 10.224.196.66 with SMTP id ef2mr64120531qab.94.1325652893320; Tue, 03 Jan 2012 20:54:53 -0800 (PST)
Received: from mymac.home (ool-44c1c730.dyn.optonline.net. [68.193.199.48]) by mx.google.com with ESMTPS id h9sm104300953qac.13.2012.01.03.20.54.51 (version=TLSv1/SSLv3 cipher=OTHER); Tue, 03 Jan 2012 20:54:52 -0800 (PST)
Mime-Version: 1.0 (Apple Message framework v1251.1)
Content-Type: text/plain; charset=us-ascii
From: Aldrin Isaac <aldrin.isaac@gmail.com>
In-Reply-To: <B63E1B67-EE93-46A5-A44D-600C6B970CC3@asgaard.org>
Date: Tue, 3 Jan 2012 23:54:50 -0500
Content-Transfer-Encoding: quoted-printable
Message-Id: <2153445F-AC64-4247-B5C3-03715EC621A2@gmail.com>
References: <13205C286662DE4387D9AF3AC30EF456D74E7BD952@EMBX01-WF.jnpr.net><6665BC1FEA04AB47B1F75FA641C43BC0926D106C@FHDP1LUMXC7V41.us.one.verizon.com><201112211830.pBLIUiZA017188@cichlid.raleigh.ibm.com> <13205C286662DE4387D9AF3AC30EF456D74E91DBFD@EMBX01-WF.jnpr.net> <618BE8B40039924EB9AED233D4A09C5102A4881E@XMB-BGL-416.cisco.com> <4EF7B019.3030202@riw.us> <201112281700.pBSH0kB2011575@cichlid.raleigh.ibm.com> <4EFC826C.80708@riw.us> <682C5C0D-10FD-49D7-BF48-28EB6EFBA72B@asgaard.org> <5223116C-2E89-4835-BBA3-1D8B2241FD43@gmail.com> <13FED1F2-74A2-41F7-AB5D-489EAAD958F8@asgaard.org> <A589A5D9-D18D-4CEF-A199-CD5305C3C394@gmail.com> <B63E1B67-EE93-46A5-A44D-600C6B970CC3@asgaard.org>
To: Christopher LILJENSTOLPE <cdl@asgaard.org>
X-Mailer: Apple Mail (2.1251.1)
Cc: Russ White <russw@riw.us>, dc@ietf.org
Subject: Re: [dc] Elevator Pitch
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 04 Jan 2012 04:54:55 -0000

>> 
>> Could you highlight, from your perspective, the features current
>> standards and design patterns cannot fundamentally support?  Since we're
>> familiar with the limitations of STP, flooding and ARP for large LANs,
>> I'm interested in knowing, in technical terms, what other flaws you
>> believe exist and features you believe are missing.
> 
> Please go back and look at Tom Narten's presentation on the topic from
> Taipei.

Thomas' presentation is limited to solving a set of problems for
environments that will be primarily VM-based (with low/no multicast)
using an overlay approach.  This mailing list, I believe, is inclusive of
nvo3 but not limited to it, so my question was broader.

From Tina.Tsou.Zouting@huawei.com  Tue Jan  3 21:37:52 2012
Return-Path: <Tina.Tsou.Zouting@huawei.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 8060E21F85BA for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 21:37:52 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.401
X-Spam-Level: 
X-Spam-Status: No, score=-3.401 tagged_above=-999 required=5 tests=[AWL=-3.462, BAYES_00=-2.599, CN_BODY_46=0.256, EXTRA_MPART_TYPE=1, HTML_MESSAGE=0.001, J_CHICKENPOX_210=0.6, J_CHICKENPOX_46=0.6, MIME_BASE64_TEXT=1.753, MIME_CHARSET_FARAWAY=2.45, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id rYOX-1LfH0SY for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 21:37:51 -0800 (PST)
Received: from szxga03-in.huawei.com (szxga03-in.huawei.com [119.145.14.66]) by ietfa.amsl.com (Postfix) with ESMTP id C94F721F8583 for <dc@ietf.org>; Tue,  3 Jan 2012 21:37:47 -0800 (PST)
Received: from huawei.com (szxga03-in [172.24.2.9]) by szxga03-in.huawei.com (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LX9000Y0EAOLB@szxga03-in.huawei.com> for dc@ietf.org; Wed, 04 Jan 2012 13:37:37 +0800 (CST)
Received: from szxrg01-dlp.huawei.com ([172.24.2.119]) by szxga03-in.huawei.com (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LX900K4SEABBJ@szxga03-in.huawei.com> for dc@ietf.org; Wed, 04 Jan 2012 13:37:36 +0800 (CST)
Received: from szxeml203-edg.china.huawei.com ([172.24.2.119]) by szxrg01-dlp.huawei.com (MOS 4.1.9-GA)	with ESMTP id AGE28206; Wed, 04 Jan 2012 13:37:36 +0800
Received: from SZXEML420-HUB.china.huawei.com (10.82.67.159) by szxeml203-edg.china.huawei.com (172.24.2.55) with Microsoft SMTP Server (TLS) id 14.1.323.3; Wed, 04 Jan 2012 13:37:32 +0800
Received: from SZXEML526-MBX.china.huawei.com ([169.254.2.37]) by szxeml420-hub.china.huawei.com ([10.82.67.159]) with mapi id 14.01.0323.003; Wed, 04 Jan 2012 13:37:16 +0800
Date: Wed, 04 Jan 2012 05:37:15 +0000
From: Tina TSOU <Tina.Tsou.Zouting@huawei.com>
In-reply-to: <OF8F297787.C131233F-ON48257974.0025C3CD-48257974.00266B2A@zte.com.cn>
X-Originating-IP: [10.212.244.192]
To: "shao.weixiang@zte.com.cn" <shao.weixiang@zte.com.cn>
Message-id: <C0E0A32284495243BDE0AC8A066631A80C243768@szxeml526-mbx.china.huawei.com>
MIME-version: 1.0
Content-type: multipart/related; boundary="Boundary_(ID_+JExYkw9diA3jcAnk4dnZQ)"; type="multipart/alternative"
Content-language: en-US
Accept-Language: en-US, zh-CN
Thread-topic: feedback for http://www.ietf.org/mail-archive/web/dc/current/msg00038.html
Thread-index: AQHMxS5KW9fYi9y31UWcalHS7+EqLJX7uYuA
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
X-CFilter-Loop: Reflected
References: <OF8F297787.C131233F-ON48257974.0025C3CD-48257974.00266B2A@zte.com.cn>
Cc: "dc@ietf.org" <dc@ietf.org>, "vumip1@gmail.com" <vumip1@gmail.com>
Subject: Re: [dc] feedback for http://www.ietf.org/mail-archive/web/dc/current/msg00038.html
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 04 Jan 2012 05:37:52 -0000

--Boundary_(ID_+JExYkw9diA3jcAnk4dnZQ)
Content-type: multipart/alternative;
 boundary="Boundary_(ID_qFWVe1t1TLiqJd/+RfyPrg)"


--Boundary_(ID_qFWVe1t1TLiqJd/+RfyPrg)
Content-type: text/plain; charset=gb2312
Content-transfer-encoding: base64

Dear Weixiang,
Happy New Year!

The authors actually had Cloud Brokerage included in version 00 of that
draft, http://tools.ietf.org/id/draft-tsou-vrom-problem-statement-00.txt,
but it seemed too early to bring it into the IETF, so it was removed by
version 02. The CSB document came up about 10 days after version 00 of
draft-tsou.

It seems draft-shao just put the NIST specs down into the draft, which is
not quite the appropriate level for the IETF. I do believe it could find a
place, but not in that shape. Also, assuming there is a CSB in the path,
the only place it can go is in some data center somewhere; where else, a
desktop?

Tina

From: shao.weixiang@zte.com.cn [mailto:shao.weixiang@zte.com.cn]
Sent: Tuesday, December 27, 2011 11:00 PM
To: Tina TSOU
Cc: vumip1@gmail.com; dc@ietf.org
Subject: feedback for http://www.ietf.org/mail-archive/web/dc/current/msg00038.html

The following draft on the Cloud Service Broker is not related to
http://tools.ietf.org/id/draft-tsou-vrom-problem-statement-02.txt

http://www.ietf.org/staging/draft-shao-opsawg-cloud-service-broker-02.txt

draft-tsou-vrom-problem-statement-02 is just a problem statement about
data center virtual resource operations and management.

A cloud broker can be in a data center, or not. It is a new role in the
cloud ecosystem. By the way, the draft is a solution.
[cid:image001.jpg@01CCCA5F.D8D2F6B0]

Shao Weixiang 邵伟翔
Standard Development And Industry Relations Dept.

[cid:image002.jpg@01CCCA5F.D8D2F6B0]

Product R&D System

E305, No.889, Bibo Rd, Zhangjiang Hi-Tech Park, Pudong, Shanghai
P.R.China, 201203
Tel: +86-21-68896976
Mobile: +86-13916615817
Email: shao.weixiang@zte.com.cn

<w:LsdException Locked=3D"false" Priority=3D"72" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Colorful List Accent 4"/>
<w:LsdException Locked=3D"false" Priority=3D"73" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Colorful Grid Accent 4"/>
<w:LsdException Locked=3D"false" Priority=3D"60" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Light Shading Accent 5"/>
<w:LsdException Locked=3D"false" Priority=3D"61" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Light List Accent 5"/>
<w:LsdException Locked=3D"false" Priority=3D"62" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Light Grid Accent 5"/>
<w:LsdException Locked=3D"false" Priority=3D"63" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Medium Shading 1 Accent 5"/>
<w:LsdException Locked=3D"false" Priority=3D"64" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Medium Shading 2 Accent 5"/>
<w:LsdException Locked=3D"false" Priority=3D"65" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Medium List 1 Accent 5"/>
<w:LsdException Locked=3D"false" Priority=3D"66" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Medium List 2 Accent 5"/>
<w:LsdException Locked=3D"false" Priority=3D"67" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Medium Grid 1 Accent 5"/>
<w:LsdException Locked=3D"false" Priority=3D"68" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Medium Grid 2 Accent 5"/>
<w:LsdException Locked=3D"false" Priority=3D"69" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Medium Grid 3 Accent 5"/>
<w:LsdException Locked=3D"false" Priority=3D"70" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Dark List Accent 5"/>
<w:LsdException Locked=3D"false" Priority=3D"71" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Colorful Shading Accent 5"/>
<w:LsdException Locked=3D"false" Priority=3D"72" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Colorful List Accent 5"/>
<w:LsdException Locked=3D"false" Priority=3D"73" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Colorful Grid Accent 5"/>
<w:LsdException Locked=3D"false" Priority=3D"60" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Light Shading Accent 6"/>
<w:LsdException Locked=3D"false" Priority=3D"61" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Light List Accent 6"/>
<w:LsdException Locked=3D"false" Priority=3D"62" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Light Grid Accent 6"/>
<w:LsdException Locked=3D"false" Priority=3D"63" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Medium Shading 1 Accent 6"/>
<w:LsdException Locked=3D"false" Priority=3D"64" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Medium Shading 2 Accent 6"/>
<w:LsdException Locked=3D"false" Priority=3D"65" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Medium List 1 Accent 6"/>
<w:LsdException Locked=3D"false" Priority=3D"66" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Medium List 2 Accent 6"/>
<w:LsdException Locked=3D"false" Priority=3D"67" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Medium Grid 1 Accent 6"/>
<w:LsdException Locked=3D"false" Priority=3D"68" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Medium Grid 2 Accent 6"/>
<w:LsdException Locked=3D"false" Priority=3D"69" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Medium Grid 3 Accent 6"/>
<w:LsdException Locked=3D"false" Priority=3D"70" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Dark List Accent 6"/>
<w:LsdException Locked=3D"false" Priority=3D"71" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Colorful Shading Accent 6"/>
<w:LsdException Locked=3D"false" Priority=3D"72" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Colorful List Accent 6"/>
<w:LsdException Locked=3D"false" Priority=3D"73" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" Name=3D"Colorful Grid Accent 6"/>
<w:LsdException Locked=3D"false" Priority=3D"19" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" QFormat=3D"true" Name=3D"Subtle Emphasis"/>
<w:LsdException Locked=3D"false" Priority=3D"21" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" QFormat=3D"true" Name=3D"Intense Emphasis"/>
<w:LsdException Locked=3D"false" Priority=3D"31" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" QFormat=3D"true" Name=3D"Subtle Reference"/>
<w:LsdException Locked=3D"false" Priority=3D"32" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" QFormat=3D"true" Name=3D"Intense Reference"/>
<w:LsdException Locked=3D"false" Priority=3D"33" SemiHidden=3D"false" Unhid=
eWhenUsed=3D"false" QFormat=3D"true" Name=3D"Book Title"/>
<w:LsdException Locked=3D"false" Priority=3D"37" Name=3D"Bibliography"/>
<w:LsdException Locked=3D"false" Priority=3D"39" QFormat=3D"true" Name=3D"T=
OC Heading"/>
</w:LatentStyles>
</xml><![endif]--><style><!--
/* Font Definitions */
@font-face
	{font-family:=CB=CE=CC=E5;
	panose-1:2 1 6 0 3 1 1 1 1 1;
	mso-font-alt:SimSun;
	mso-font-charset:134;
	mso-generic-font-family:auto;
	mso-font-pitch:variable;
	mso-font-signature:3 135135232 16 0 262145 0;}
@font-face
	{font-family:"Cambria Math";
	panose-1:2 4 5 3 5 4 6 3 2 4;
	mso-font-alt:"Calisto MT";
	mso-font-charset:0;
	mso-generic-font-family:roman;
	mso-font-pitch:variable;
	mso-font-signature:-1610611985 1107304683 0 0 159 0;}
@font-face
	{font-family:Calibri;
	panose-1:2 15 5 2 2 2 4 3 2 4;
	mso-font-alt:"Century Gothic";
	mso-font-charset:0;
	mso-generic-font-family:swiss;
	mso-font-pitch:variable;
	mso-font-signature:-1610611985 1073750139 0 0 159 0;}
@font-face
	{font-family:Tahoma;
	panose-1:2 11 6 4 3 5 4 4 2 4;
	mso-font-alt:Arial;
	mso-font-charset:0;
	mso-generic-font-family:swiss;
	mso-font-pitch:variable;
	mso-font-signature:1627400839 -2147483648 8 0 66047 0;}
@font-face
	{font-family:"\@=CB=CE=CC=E5";
	panose-1:2 1 6 0 3 1 1 1 1 1;
	mso-font-charset:134;
	mso-generic-font-family:auto;
	mso-font-pitch:variable;
	mso-font-signature:3 135135232 16 0 262145 0;}
@font-face
	{font-family:=BB=AA=CE=C4=B7=C2=CB=CE;
	panose-1:0 0 0 0 0 0 0 0 0 0;
	mso-font-alt:=CB=CE=CC=E5;
	mso-font-charset:134;
	mso-generic-font-family:roman;
	mso-font-format:other;
	mso-font-pitch:auto;
	mso-font-signature:1 135135232 16 0 262144 0;}
@font-face
	{font-family:"\@=BB=AA=CE=C4=B7=C2=CB=CE";
	panose-1:0 0 0 0 0 0 0 0 0 0;
	mso-font-charset:134;
	mso-generic-font-family:roman;
	mso-font-format:other;
	mso-font-pitch:auto;
	mso-font-signature:1 135135232 16 0 262144 0;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
	{mso-style-unhide:no;
	mso-style-qformat:yes;
	mso-style-parent:"";
	margin:0in;
	margin-bottom:.0001pt;
	mso-pagination:widow-orphan;
	font-size:12.0pt;
	font-family:=CB=CE=CC=E5;
	mso-bidi-font-family:=CB=CE=CC=E5;}
a:link, span.MsoHyperlink
	{mso-style-priority:99;
	color:blue;
	text-decoration:underline;
	text-underline:single;}
a:visited, span.MsoHyperlinkFollowed
	{mso-style-noshow:yes;
	mso-style-priority:99;
	color:purple;
	text-decoration:underline;
	text-underline:single;}
p
	{mso-style-noshow:yes;
	mso-style-priority:99;
	mso-margin-top-alt:auto;
	margin-right:0in;
	mso-margin-bottom-alt:auto;
	margin-left:0in;
	mso-pagination:widow-orphan;
	font-size:12.0pt;
	font-family:=CB=CE=CC=E5;
	mso-bidi-font-family:=CB=CE=CC=E5;}
span.EmailStyle18
	{mso-style-type:personal-reply;
	mso-style-noshow:yes;
	mso-style-unhide:no;
	mso-ansi-font-size:11.0pt;
	mso-bidi-font-size:11.0pt;
	font-family:"Calibri","sans-serif";
	mso-ascii-font-family:Calibri;
	mso-fareast-font-family:=CB=CE=CC=E5;
	mso-hansi-font-family:Calibri;
	mso-bidi-font-family:"Times New Roman";
	color:#1F497D;}
span.SpellE
	{mso-style-name:"";
	mso-spl-e:yes;}
span.GramE
	{mso-style-name:"";
	mso-gram-e:yes;}
.MsoChpDefault
	{mso-style-type:export-only;
	mso-default-props:yes;
	mso-ascii-font-family:Calibri;
	mso-fareast-font-family:=CB=CE=CC=E5;
	mso-hansi-font-family:Calibri;
	mso-bidi-font-family:"Times New Roman";}
@page WordSection1
	{size:8.5in 11.0in;
	margin:1.0in 1.25in 1.0in 1.25in;
	mso-header-margin:.5in;
	mso-footer-margin:.5in;
	mso-paper-source:0;}
div.WordSection1
	{page:WordSection1;}
--></style><!--[if gte mso 10]><style>/* Style Definitions */
table.MsoNormalTable
	{mso-style-name:"Table Normal";
	mso-tstyle-rowband-size:0;
	mso-tstyle-colband-size:0;
	mso-style-noshow:yes;
	mso-style-priority:99;
	mso-style-qformat:yes;
	mso-style-parent:"";
	mso-padding-alt:0in 5.4pt 0in 5.4pt;
	mso-para-margin:0in;
	mso-para-margin-bottom:.0001pt;
	mso-pagination:widow-orphan;
	font-size:11.0pt;
	font-family:"Calibri","sans-serif";
	mso-ascii-font-family:Calibri;
	mso-hansi-font-family:Calibri;}
</style><![endif]--><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext=3D"edit" spidmax=3D"1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext=3D"edit">
<o:idmap v:ext=3D"edit" data=3D"1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang=3D"EN-US" link=3D"blue" vlink=3D"purple" style=3D"tab-interval:.=
5in">
<div class=3D"WordSection1">
<p class=3D"MsoNormal"><span style=3D"font-size:11.0pt;font-family:&quot;Ca=
libri&quot;,&quot;sans-serif&quot;;mso-fareast-font-family:=CB=CE=CC=E5;mso=
-bidi-font-family:&quot;Times New Roman&quot;;color:#1F497D">Dear
<span class=3D"SpellE">Weixiang</span>,<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:11.0pt;font-family:&quot;Ca=
libri&quot;,&quot;sans-serif&quot;;mso-fareast-font-family:=CB=CE=CC=E5;mso=
-bidi-font-family:&quot;Times New Roman&quot;;color:#1F497D">Happy New Year=
!<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:11.0pt;font-family:&quot;Ca=
libri&quot;,&quot;sans-serif&quot;;mso-fareast-font-family:=CB=CE=CC=E5;mso=
-bidi-font-family:&quot;Times New Roman&quot;;color:#1F497D"><o:p>&nbsp;</o=
:p></span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:11.0pt;font-family:&quot;Ca=
libri&quot;,&quot;sans-serif&quot;;mso-fareast-font-family:=CB=CE=CC=E5;mso=
-bidi-font-family:&quot;Times New Roman&quot;;color:#1F497D">The authors ac=
tually had Cloud Brokerage included in version 00 of that draft
<a href=3D"http://tools.ietf.org/id/draft-tsou-vrom-problem-statement-00.tx=
t">http://tools.ietf.org/id/draft-tsou-vrom-problem-statement-00.txt</a>, b=
ut it seemed too early to include it in the IETF, so it was removed by ver=
sion 02.
<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:11.0pt;font-family:&quot;Ca=
libri&quot;,&quot;sans-serif&quot;;mso-fareast-font-family:=CB=CE=CC=E5;mso=
-bidi-font-family:&quot;Times New Roman&quot;;color:#1F497D">The CSB docum=
ent came out about 10 days after version 00 of draft-<span class=3D"SpellE"=
>tsou</span>.<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:11.0pt;font-family:&quot;Ca=
libri&quot;,&quot;sans-serif&quot;;mso-fareast-font-family:=CB=CE=CC=E5;mso=
-bidi-font-family:&quot;Times New Roman&quot;;color:#1F497D">It seems draft=
-<span class=3D"SpellE">shao</span> just put down the NIST specs into the d=
raft, which is not quite the appropriate level for the IETF. I do believe =
it could find a place, but not in that shape. Also, assuming there is a CS=
B in the path, the only place it can go is in some data center somewhere; =
where else, <span class=3D"GramE">a</span> desktop?<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:11.0pt;font-family:&quot;Ca=
libri&quot;,&quot;sans-serif&quot;;mso-fareast-font-family:=CB=CE=CC=E5;mso=
-bidi-font-family:&quot;Times New Roman&quot;;color:#1F497D"><o:p>&nbsp;</o=
:p></span></p>
<p class=3D"MsoNormal"><span style=3D"font-family:&quot;Times New Roman&quo=
t;,&quot;serif&quot;;mso-fareast-font-family:&quot;Times New Roman&quot;;co=
lor:#1F497D;mso-no-proof:yes">Tina<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:11.0pt;font-family:&quot;Ca=
libri&quot;,&quot;sans-serif&quot;;mso-fareast-font-family:=CB=CE=CC=E5;mso=
-bidi-font-family:&quot;Times New Roman&quot;;color:#1F497D"><o:p>&nbsp;</o=
:p></span></p>
<div style=3D"border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in =
0in 0in">
<p class=3D"MsoNormal"><b><span style=3D"font-size:10.0pt;font-family:&quot=
;Tahoma&quot;,&quot;sans-serif&quot;">From:</span></b><span style=3D"font-s=
ize:10.0pt;font-family:&quot;Tahoma&quot;,&quot;sans-serif&quot;"> shao.wei=
xiang@zte.com.cn [mailto:shao.weixiang@zte.com.cn]
<br>
<b>Sent:</b> Tuesday, December 27, 2011 11:00 PM<br>
<b>To:</b> Tina TSOU<br>
<b>Cc:</b> vumip1@gmail.com; dc@ietf.org<br>
<b>Subject:</b> feedback for http://www.ietf.org/mail-archive/web/dc/curren=
t/msg00038.html<o:p></o:p></span></p>
</div>
<p class=3D"MsoNormal"><o:p>&nbsp;</o:p></p>
<table class=3D"MsoNormalTable" border=3D"0" cellpadding=3D"0" width=3D"100=
%" style=3D"width:100.0%;mso-cellspacing:1.5pt;mso-yfti-tbllook:1184">
<tbody>
<tr style=3D"mso-yfti-irow:0;mso-yfti-firstrow:yes;mso-yfti-lastrow:yes">
<td width=3D"100%" style=3D"width:100.0%;padding:.75pt .75pt .75pt .75pt">
<p class=3D"MsoNormal">The following draft on Cloud Service Broker is not=
 related to
<a href=3D"http://tools.ietf.org/id/draft-tsou-vrom-problem-statement-02.tx=
t"><span style=3D"font-size:10.0pt;font-family:&quot;Calibri&quot;,&quot;sa=
ns-serif&quot;">http://tools.ietf.org/id/draft-tsou-vrom-problem-statement-=
02.txt</span></a><span style=3D"font-size:10.0pt;font-family:&quot;Calibri&=
quot;,&quot;sans-serif&quot;;color:#1F497D">
</span><o:p></o:p></p>
</td>
</tr>
</tbody>
</table>
<p class=3D"MsoNormal"><span style=3D"font-size:10.0pt;font-family:&quot;Ca=
libri&quot;,&quot;sans-serif&quot;;mso-fareast-font-family:=CB=CE=CC=E5;mso=
-bidi-font-family:&quot;Times New Roman&quot;"><br>
<br>
</span><span style=3D"font-size:10.0pt;font-family:&quot;Arial&quot;,&quot;=
sans-serif&quot;;mso-fareast-font-family:=CB=CE=CC=E5">http://www.ietf.org/=
staging/draft-shao-opsawg-cloud-service-broker-02.txt<br>
</span><span style=3D"font-size:10.0pt;font-family:&quot;Calibri&quot;,&quo=
t;sans-serif&quot;;mso-fareast-font-family:=CB=CE=CC=E5;mso-bidi-font-famil=
y:&quot;Times New Roman&quot;"><br>
</span><u><span style=3D"font-size:10.0pt;font-family:&quot;Calibri&quot;,&=
quot;sans-serif&quot;;mso-fareast-font-family:=CB=CE=CC=E5;mso-bidi-font-fa=
mily:&quot;Times New Roman&quot;;color:blue">draft-tsou-vrom-problem-statem=
ent-02</span></u><span style=3D"font-size:10.0pt;font-family:&quot;Arial&qu=
ot;,&quot;sans-serif&quot;;mso-fareast-font-family:=CB=CE=CC=E5">
</span><span style=3D"font-family:&quot;Arial&quot;,&quot;sans-serif&quot;;=
mso-fareast-font-family:=CB=CE=CC=E5">is just a problem statement about dat=
a center virtual resources operations and management.
</span><span style=3D"font-size:10.0pt;font-family:&quot;Calibri&quot;,&quo=
t;sans-serif&quot;;mso-fareast-font-family:=CB=CE=CC=E5;mso-bidi-font-famil=
y:&quot;Times New Roman&quot;"><o:p></o:p></span></p>
<p><span style=3D"font-family:&quot;Arial&quot;,&quot;sans-serif&quot;">A c=
loud broker can be in data center, or not. It is a new role in cloud ecosys=
tem. By the way, the draft is a solution.</span>
<o:p></o:p></p>
<table class=3D"MsoNormalTable" border=3D"0" cellpadding=3D"0" style=3D"mso=
-cellspacing:1.5pt;mso-yfti-tbllook:1184">
<tbody>
<tr style=3D"mso-yfti-irow:0;mso-yfti-firstrow:yes;mso-yfti-lastrow:yes">
<td style=3D"padding:.75pt .75pt .75pt .75pt">
<p class=3D"MsoNormal" align=3D"center" style=3D"text-align:center"><img bo=
rder=3D"0" width=3D"137" height=3D"132" id=3D"_x0000_i1033" src=3D"cid:imag=
e001.jpg@01CCCA5F.D8D2F6B0"><o:p></o:p></p>
</td>
<td style=3D"padding:.75pt .75pt .75pt .75pt">
<p class=3D"MsoNormal" style=3D"margin-bottom:12.0pt"><o:p>&nbsp;</o:p></p>
<table class=3D"MsoNormalTable" border=3D"0" cellpadding=3D"0" width=3D"100=
%" style=3D"width:100.0%;mso-cellspacing:1.5pt;mso-yfti-tbllook:1184">
<tbody>
<tr style=3D"mso-yfti-irow:0;mso-yfti-firstrow:yes">
<td colspan=3D"2" style=3D"padding:.75pt .75pt .75pt .75pt">
<p class=3D"MsoNormal"><b><i><span style=3D"font-family:&quot;=BB=AA=CE=C4=
=B7=C2=CB=CE&quot;,&quot;serif&quot;">Shao </span></i></b><b><i><span style=
=3D"font-size:7.5pt;font-family:&quot;Arial&quot;,&quot;sans-serif&quot;">W=
eixiang</span></i></b><b><i><span style=3D"font-family:&quot;=BB=AA=CE=C4=
=B7=C2=CB=CE&quot;,&quot;serif&quot;">
<span lang=3D"ZH-CN">=C9=DB=CE=B0=CF=E8</span></span></i></b> <br>
<span style=3D"font-size:7.5pt;font-family:&quot;Arial&quot;,&quot;sans-ser=
if&quot;">Standard Development And Industry Relations Dept.</span>
<o:p></o:p></p>
</td>
</tr>
<tr style=3D"mso-yfti-irow:1">
<td rowspan=3D"2" style=3D"padding:.75pt .75pt .75pt .75pt">
<p class=3D"MsoNormal"><img border=3D"0" width=3D"100" height=3D"35" id=3D"=
_x0000_i1034" src=3D"cid:image002.jpg@01CCCA5F.D8D2F6B0"><o:p></o:p></p>
</td>
<td style=3D"padding:.75pt .75pt .75pt .75pt">
<p class=3D"MsoNormal"><b><span style=3D"font-size:7.5pt;font-family:&quot;=
Arial&quot;,&quot;sans-serif&quot;;color:#909090">Product R&amp;D System
</span></b><o:p></o:p></p>
</td>
</tr>
<tr style=3D"mso-yfti-irow:2">
<td style=3D"padding:.75pt .75pt .75pt .75pt">
<p class=3D"MsoNormal"><b><span lang=3D"ZH-CN" style=3D"font-size:7.5pt;mso=
-ascii-font-family:Arial;mso-hansi-font-family:Arial;mso-bidi-font-family:A=
rial;color:#909090">=B2=FA=C6=B7=D1=D0=B7=A2=CC=E5=CF=B5</span></b><span la=
ng=3D"ZH-CN">
</span><o:p></o:p></p>
</td>
</tr>
<tr style=3D"mso-yfti-irow:3;mso-yfti-lastrow:yes">
<td colspan=3D"2" style=3D"padding:.75pt .75pt .75pt .75pt">
<p class=3D"MsoNormal"><span style=3D"font-size:7.5pt;font-family:&quot;Ari=
al&quot;,&quot;sans-serif&quot;">E305,No.889,Bibo Rd,Zhangjiang Hi-Tech Par=
k,Pudong,Shanghai
<br>
P.R.China, 201203<br>
Tel:&#43;86-21-68896976<br>
Mobile:&#43;86-13916615817<br>
Email:shao.weixiang@zte.com.cn </span><o:p></o:p></p>
</td>
</tr>
</tbody>
</table>
</td>
</tr>
</tbody>
</table>
<p><br>
<br>
<span style=3D"font-size:7.5pt;font-family:&quot;Arial&quot;,&quot;sans-ser=
if&quot;">&nbsp;</span> <o:p></o:p></p>
</div>
</body>
</html>

--Boundary_(ID_qFWVe1t1TLiqJd/+RfyPrg)--

--Boundary_(ID_+JExYkw9diA3jcAnk4dnZQ)--

From shao.weixiang@zte.com.cn  Tue Jan  3 22:18:25 2012
Return-Path: <shao.weixiang@zte.com.cn>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 511AE21F85E1 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 22:18:25 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -89.499
X-Spam-Level: 
X-Spam-Status: No, score=-89.499 tagged_above=-999 required=5 tests=[AWL=-1.650, BAYES_50=0.001, CHARSET_FARAWAY_HEADER=3.2, CN_BODY_46=0.256, HTML_FONT_FACE_BAD=0.884, HTML_MESSAGE=0.001, J_CHICKENPOX_210=0.6, J_CHICKENPOX_46=0.6, MIME_8BIT_HEADER=0.3, MIME_BASE64_TEXT=1.753, MIME_CHARSET_FARAWAY=2.45, RCVD_DOUBLE_IP_LOOSE=0.76, SARE_SUB_ENC_GB2312=1.345, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id Tj6rwGYZELpN for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 22:18:24 -0800 (PST)
Received: from mx5.zte.com.cn (mx6.zte.com.cn [95.130.199.165]) by ietfa.amsl.com (Postfix) with ESMTP id 83AC821F85E3 for <dc@ietf.org>; Tue,  3 Jan 2012 22:18:23 -0800 (PST)
Received: from [10.30.17.100] by mx5.zte.com.cn with surfront esmtp id 56690122734555; Wed, 4 Jan 2012 13:57:55 +0800 (CST)
Received: from [10.30.3.21] by [192.168.168.16] with StormMail ESMTP id 25201.3098549439; Wed, 4 Jan 2012 14:17:58 +0800 (CST)
Received: from notes_smtp.zte.com.cn ([10.30.1.239]) by mse02.zte.com.cn with ESMTP id q046I07q047070; Wed, 4 Jan 2012 14:18:00 +0800 (GMT-8) (envelope-from shao.weixiang@zte.com.cn)
In-Reply-To: <C0E0A32284495243BDE0AC8A066631A80C243768@szxeml526-mbx.china.huawei.com>
To: Tina TSOU <Tina.Tsou.Zouting@huawei.com>
MIME-Version: 1.0
X-KeepSent: 594C091F:CD18C5A1-4825797B:0020A483; type=4; name=$KeepSent
X-Mailer: Lotus Notes Release 6.5.6 March 06, 2007
Message-ID: <OF594C091F.CD18C5A1-ON4825797B.0020A483-4825797B.0022A40A@zte.com.cn>
From: shao.weixiang@zte.com.cn
Date: Wed, 4 Jan 2012 14:18:09 +0800
X-MIMETrack: Serialize by Router on notes_smtp/zte_ltd(Release 8.5.1FP4|July 25, 2010) at 2012-01-04 14:18:02, Serialize complete at 2012-01-04 14:18:02
Content-Type: multipart/related; boundary="=_related 0022A3F64825797B_="
X-MAIL: mse02.zte.com.cn q046I07q047070
Cc: "dc@ietf.org" <dc@ietf.org>, "vumip1@gmail.com" <vumip1@gmail.com>
Subject: [dc] Reply: RE: feedback for http://www.ietf.org/mail-archive/web/dc/current/msg00038.html
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 04 Jan 2012 06:18:25 -0000

This is a multipart message in MIME format.
--=_related 0022A3F64825797B_=
Content-Type: multipart/alternative; boundary="=_alternative 0022A3F94825797B_="


--=_alternative 0022A3F94825797B_=
Content-Type: text/plain; charset="GB2312"
Content-Transfer-Encoding: base64

SGVsbG8sVGluYQ0KSXQgaXMgcmlnaHQgdGhhdCBkcmFmdCANCmh0dHA6Ly90b29scy5pZXRmLm9y
Zy9pZC9kcmFmdC10c291LXZyb20tcHJvYmxlbS1zdGF0ZW1lbnQtMDAudHh0IA0KbWVudGlvbmVk
IGJyb2tlcmluZyBbR2FydG5lckNCXS4NCk5JU1QgQ2xvdWQgQnJva2VyIGRlZmluaXRpb24gYWxz
byB1c2UgdGhyZWUgVHlwZXMgb2YgQ2xvdWQgQnJva2VyYWdlcyANCihTZXJ2aWNlIEludGVybWVk
aWF0aW9uIC9hZ2dyZWdhdGlvbiAvYXJiaXRyYWdlKWluIFtHYXJ0bmVyQ0JdLg0KU28gd2hhdGV2
ZXIsIHlvdXIgZHJhZnQgYW5kIG91ciBkcmFmdCBib3RoIHJlZmVyIHRvIENsb3VkIA0KQnJva2Vy
YWdlW0dhcnRuZXJDQl0vW05JU1RdLCBpdCBpcyBubyBtZWFuaW5nIHRvIHNheSB3aG9zZSBkcmFm
dCBlYXJseSBvciANCmxhdGVyLg0KWW91ciBkcmFmdCBpcyBmcm9tIERDIG5ldHdvcmsgdmlldyB0
byBnaXZlIHNvbWUgaXNzdWUgZGlzY3Vzc2lvbixvdXIgZHJhZnQgDQppcyBmcm9tIHNlcnZpY2Ug
dmlldyB0byBnaXZlIGEgc29sdXRpb24uIA0KRm9yIG15IHRoaW5raW5nLCBhIERDIGNhbiBiZSBh
IENsb3VkIEJyb2tlciByb2xlIHdoYXRldmVyIHRoZSBEQyBoYXMgDQp2aXJ0dWFsIHJlc291cmNl
IG9yIG5vdC4NCkJ5IHRoZSB3YXksIG91ciBkcmFmdCBhbHNvIHVzZSBPcGVuIENsb3VkIE1hbmlm
ZXN0byxDbG91ZCBTZWN1cml0eSANCkFsbGlhbmNlLElUVS1UIEZvY3VzIEdyb3VwIGRlZmluaXRp
b24gYWJvdXQgQ2xvdWQgQnJva2VyYWdlLg0KDQoNCg0KDQoNCg0KU2hhbyBXZWl4aWFuZyDJ286w
z+gNClN0YW5kYXJkIERldmVsb3BtZW50IEFuZCBJbmR1c3RyeSBSZWxhdGlvbnMgRGVwdC4NCg0K
UHJvZHVjdCBSJkQgU3lzdGVtIA0KsvrGt9HQt6LM5c+1DQpFMzA1LE5vLjg4OSxCaWJvIFJkLFpo
YW5namlhbmcgSGktVGVjaCBQYXJrLFB1ZG9uZyxTaGFuZ2hhaSANClAuUi5DaGluYSwgMjAxMjAz
DQpUZWw6Kzg2LTIxLTY4ODk2OTc2DQpNb2JpbGU6Kzg2LTEzOTE2NjE1ODE3DQpFbWFpbDpzaGFv
LndlaXhpYW5nQHp0ZS5jb20uY24gDQoNCg0KIA0KDQoNCg0KVGluYSBUU09VIDxUaW5hLlRzb3Uu
Wm91dGluZ0BodWF3ZWkuY29tPiANCjIwMTItMDEtMDQgMTM6MzcNCg0KytW8/sjLDQoic2hhby53
ZWl4aWFuZ0B6dGUuY29tLmNuIiA8c2hhby53ZWl4aWFuZ0B6dGUuY29tLmNuPg0Ks63LzQ0KInZ1
bWlwMUBnbWFpbC5jb20iIDx2dW1pcDFAZ21haWwuY29tPiwgImRjQGlldGYub3JnIiA8ZGNAaWV0
Zi5vcmc+DQrW98ziDQpSRTogZmVlZGJhY2sgZm9yIA0KaHR0cDovL3d3dy5pZXRmLm9yZy9tYWls
LWFyY2hpdmUvd2ViL2RjL2N1cnJlbnQvbXNnMDAwMzguaHRtbA0KDQoNCg0KDQoNCg0KRGVhciBX
ZWl4aWFuZywNCkhhcHB5IE5ldyBZZWFyIQ0KIA0KVGhlIGF1dGhvcnMgYWN0dWFsbHkgaGFkIENs
b3VkIEJyb2tlcmFnZSBpbmNsdWRlZCBpbiB2ZXJzaW9uIDAwIG9mIHRoYXQgDQpkcmFmdCBodHRw
Oi8vdG9vbHMuaWV0Zi5vcmcvaWQvZHJhZnQtdHNvdS12cm9tLXByb2JsZW0tc3RhdGVtZW50LTAw
LnR4dCwgDQpidXQgaXQgc2VlbWVkIHRvbyBlYXJseSB0byBpbmNsdWRlIGl0IGluIElFVEYgc28g
aXQgd2FzIHJlbW92ZWQgYnkgdmVyc2lvbiANCjAyLiANClRoZSBDU0IgZG9jdW1lbnQgY2FtZSB1
cCBsaWtlIDEwIGRheXMgYWZ0ZXIgdmVyc2lvbiAwMCBvZiBkcmFmdC10c291Lg0KSXQgc2VlbXMg
ZHJhZnQtc2hhbyBqdXN0IHB1dCBkb3duIHRoZSBOSVNUIHNwZWNzIGludG8gdGhlIGRyYWZ0LCB3
aGljaCBpcyANCm5vdCBxdWl0ZSB0aGUgYXBwcm9wcmlhdGUgbGV2ZWwgZm9yIElFVEYuIEkgZG8g
YmVsaWV2ZSBpdCBjb3VsZCBmaW5kIGEgDQpwbGFjZSBidXQgbm90IGluIHRoYXQgc2hhcGUuIEFs
c28sIGFzc3VtaW5nIHRoZXJlIGlzIGEgQ1NCIGluIHRoZSBwYXRoLCANCnRoZSBvbmx5IHBsYWNl
IGl0IGNhbiBnbyBpcyBpbiBzb21lIGRhdGEgY2VudGVyIHNvbWV3aGVyZSwgd2hlcmUgZWxzZSwg
YSANCmRlc2t0b3A/DQogDQpUaW5hDQogDQpGcm9tOiBzaGFvLndlaXhpYW5nQHp0ZS5jb20uY24g
W21haWx0bzpzaGFvLndlaXhpYW5nQHp0ZS5jb20uY25dIA0KU2VudDogVHVlc2RheSwgRGVjZW1i
ZXIgMjcsIDIwMTEgMTE6MDAgUE0NClRvOiBUaW5hIFRTT1UNCkNjOiB2dW1pcDFAZ21haWwuY29t
OyBkY0BpZXRmLm9yZw0KU3ViamVjdDogZmVlZGJhY2sgZm9yIA0KaHR0cDovL3d3dy5pZXRmLm9y
Zy9tYWlsLWFyY2hpdmUvd2ViL2RjL2N1cnJlbnQvbXNnMDAwMzguaHRtbA0KIA0KDQpUaGUgZm9s
bG93aW5nIGRyYWZ0IG9uIENsb3VkIFNlcnZpY2UgQnJva2VyIGlzIG5vIHJlbGF0ZWQgdG8gDQpo
dHRwOi8vdG9vbHMuaWV0Zi5vcmcvaWQvZHJhZnQtdHNvdS12cm9tLXByb2JsZW0tc3RhdGVtZW50
LTAyLnR4dCANCg0KDQpodHRwOi8vd3d3LmlldGYub3JnL3N0YWdpbmcvZHJhZnQtc2hhby1vcHNh
d2ctY2xvdWQtc2VydmljZS1icm9rZXItMDIudHh0DQoNCmRyYWZ0LXRzb3UtdnJvbS1wcm9ibGVt
LXN0YXRlbWVudC0wMiBpcyBqdXN0IGEgcHJvYmxlbSBzdGF0ZW1lbnQgYWJvdXQgDQpkYXRhIGNl
bnRlciB2aXJ0dWFsIHJlc291cmNlcyBvcGVyYXRpb25zIGFuZCBtYW5hZ2VtZW50LiANCkEgY2xv
dWQgYnJva2VyIGNhbiBiZSBpbiBkYXRhIGNlbnRlciwgb3Igbm90LiBJdCBpcyBhIG5ldyByb2xl
IGluIGNsb3VkIA0KZWNvc3lzdGVtLiBCeSB0aGUgd2F5LCB0aGUgZHJhZnQgaXMgYSBzb2x1dGlv
bi4gDQoNCg0KIA0KDQoNClNoYW8gV2VpeGlhbmcgydvOsM/oIA0KU3RhbmRhcmQgRGV2ZWxvcG1l
bnQgQW5kIEluZHVzdHJ5IFJlbGF0aW9ucyBEZXB0LiANCg0KUHJvZHVjdCBSJkQgU3lzdGVtIA0K
svrGt9HQt6LM5c+1IA0KRTMwNSxOby44ODksQmlibyBSZCxaaGFuZ2ppYW5nIEhpLVRlY2ggUGFy
ayxQdWRvbmcsU2hhbmdoYWkgDQpQLlIuQ2hpbmEsIDIwMTIwMw0KVGVsOis4Ni0yMS02ODg5Njk3
Ng0KTW9iaWxlOis4Ni0xMzkxNjYxNTgxNw0KRW1haWw6c2hhby53ZWl4aWFuZ0B6dGUuY29tLmNu
IA0KDQoNCg0KICANCg0K
--=_alternative 0022A3F94825797B_=
Content-Type: text/html; charset="GB2312"
Content-Transfer-Encoding: base64

DQo8YnI+PGZvbnQgc2l6ZT0yIGZhY2U9InNhbnMtc2VyaWYiPkhlbGxvLFRpbmE8L2ZvbnQ+DQo8
cD48Zm9udCBzaXplPTIgY29sb3I9IzFmNDk3ZCBmYWNlPSJzYW5zLXNlcmlmIj5JdCBpcyByaWdo
dCB0aGF0IGRyYWZ0DQo8L2ZvbnQ+PGEgaHJlZj0iaHR0cDovL3Rvb2xzLmlldGYub3JnL2lkL2Ry
YWZ0LXRzb3UtdnJvbS1wcm9ibGVtLXN0YXRlbWVudC0wMC50eHQiPjxmb250IHNpemU9MiBjb2xv
cj1ibHVlIGZhY2U9InNhbnMtc2VyaWYiPmh0dHA6Ly90b29scy5pZXRmLm9yZy9pZC9kcmFmdC10
c291LXZyb20tcHJvYmxlbS1zdGF0ZW1lbnQtMDAudHh0PC9mb250PjwvYT48Zm9udCBzaXplPTIg
ZmFjZT0ic2Fucy1zZXJpZiI+DQptZW50aW9uZWQgYnJva2VyaW5nIFtHYXJ0bmVyQ0JdLjwvZm9u
dD4NCjxwPjxmb250IHNpemU9MiBmYWNlPSJzYW5zLXNlcmlmIj5OSVNUIENsb3VkIEJyb2tlciBk
ZWZpbml0aW9uIGFsc28gdXNlDQp0aHJlZSBUeXBlcyBvZiBDbG91ZCBCcm9rZXJhZ2VzIChTZXJ2
aWNlIEludGVybWVkaWF0aW9uIC9hZ2dyZWdhdGlvbiAvYXJiaXRyYWdlKWluDQpbR2FydG5lckNC
XS48L2ZvbnQ+DQo8cD48Zm9udCBzaXplPTIgZmFjZT0ic2Fucy1zZXJpZiI+U28gd2hhdGV2ZXIs
IHlvdXIgZHJhZnQgYW5kIG91ciBkcmFmdA0KYm90aCByZWZlciB0byBDbG91ZCBCcm9rZXJhZ2Vb
R2FydG5lckNCXS9bTklTVF0sIGl0IGlzIG5vIG1lYW5pbmcgdG8gc2F5DQp3aG9zZSBkcmFmdCBl
YXJseSBvciBsYXRlci48L2ZvbnQ+DQo8cD48Zm9udCBzaXplPTIgZmFjZT0ic2Fucy1zZXJpZiI+
WW91ciBkcmFmdCBpcyBmcm9tIERDIG5ldHdvcmsgdmlldyB0bw0KZ2l2ZSBzb21lIGlzc3VlIGRp
c2N1c3Npb24sb3VyIGRyYWZ0IGlzIGZyb20gc2VydmljZSB2aWV3IHRvIGdpdmUgYSBzb2x1dGlv
bi4NCjwvZm9udD4NCjxwPjxmb250IHNpemU9MiBmYWNlPSJzYW5zLXNlcmlmIj5Gb3IgbXkgdGhp
bmtpbmcsIGEgREMgY2FuIGJlIGEgQ2xvdWQNCkJyb2tlciByb2xlIHdoYXRldmVyIHRoZSBEQyBo
YXMgdmlydHVhbCByZXNvdXJjZSBvciBub3QuPC9mb250Pg0KPHA+PGZvbnQgc2l6ZT0yIGZhY2U9
InNhbnMtc2VyaWYiPkJ5IHRoZSB3YXksIG91ciBkcmFmdCBhbHNvIHVzZSBPcGVuIENsb3VkDQpN
YW5pZmVzdG8sQ2xvdWQgU2VjdXJpdHkgQWxsaWFuY2UsSVRVLVQgRm9jdXMgR3JvdXAgZGVmaW5p
dGlvbiBhYm91dCBDbG91ZA0KQnJva2VyYWdlLjwvZm9udD4NCjxwPjxmb250IHNpemU9MiBmYWNl
PSJzYW5zLXNlcmlmIj48YnI+DQo8L2ZvbnQ+DQo8dGFibGU+DQo8dHI+DQo8dGQ+DQo8ZGl2IGFs
aWduPWNlbnRlcj48aW1nIHNyYz1jaWQ6XzJfMDQ0OUNDNDQwNDQ5Qzg4ODAwMjJBM0VENDgyNTc5
N0I+PC9kaXY+DQo8dGQ+DQo8YnI+PGZvbnQgc2l6ZT0yPjxicj4NCjwvZm9udD4NCjx0YWJsZSB3
aWR0aD0xMDAlPg0KPHRyPg0KPHRkIGNvbHNwYW49Mj48Zm9udCBzaXplPTMgZmFjZT0iu6rOxLfC
y84iPjxiPjxpPlNoYW8gPC9pPjwvYj48L2ZvbnQ+PGZvbnQgc2l6ZT0xIGZhY2U9IkFyaWFsIj48
Yj48aT5XZWl4aWFuZzwvaT48L2I+PC9mb250Pjxmb250IHNpemU9MyBmYWNlPSK7qs7Et8LLziI+
PGI+PGk+DQrJ286wz+g8L2k+PC9iPjwvZm9udD4NCjxicj48Zm9udCBzaXplPTEgZmFjZT0iQXJp
YWwiPlN0YW5kYXJkIERldmVsb3BtZW50IEFuZCBJbmR1c3RyeSBSZWxhdGlvbnMNCkRlcHQuPC9m
b250Pg0KPHRyPg0KPHRkIHJvd3NwYW49Mj48aW1nIHNyYz1jaWQ6XzJfMDQ0OUVEODAwNDQ5RTlD
NDAwMjJBM0VENDgyNTc5N0I+DQo8dGQ+PGZvbnQgc2l6ZT0xIGNvbG9yPSM5MDkwOTAgZmFjZT0i
QXJpYWwiPjxiPlByb2R1Y3QgUiZhbXA7RCBTeXN0ZW0gPC9iPjwvZm9udD4NCjx0cj4NCjx0ZD48
Zm9udCBzaXplPTEgY29sb3I9IzkwOTA5MCBmYWNlPSJBcmlhbCI+PGI+svrGt9HQt6LM5c+1PC9i
PjwvZm9udD4NCjx0cj4NCjx0ZCBjb2xzcGFuPTI+PGZvbnQgc2l6ZT0xIGZhY2U9IkFyaWFsIj5F
MzA1LE5vLjg4OSxCaWJvIFJkLFpoYW5namlhbmcNCkhpLVRlY2ggUGFyayxQdWRvbmcsU2hhbmdo
YWkgPGJyPg0KUC5SLkNoaW5hLCAyMDEyMDM8YnI+DQpUZWw6Kzg2LTIxLTY4ODk2OTc2PGJyPg0K
TW9iaWxlOis4Ni0xMzkxNjYxNTgxNzxicj4NCkVtYWlsOnNoYW8ud2VpeGlhbmdAenRlLmNvbS5j
biA8L2ZvbnQ+PC90YWJsZT4NCjxicj48L3RhYmxlPg0KPGJyPg0KPGJyPjxmb250IHNpemU9MSBm
YWNlPSJBcmlhbCI+Jm5ic3A7PC9mb250Pg0KPGJyPg0KPGJyPg0KPGJyPg0KPHRhYmxlIHdpZHRo
PTEwMCU+DQo8dHIgdmFsaWduPXRvcD4NCjx0ZCB3aWR0aD0zNiU+PGZvbnQgc2l6ZT0xIGZhY2U9
InNhbnMtc2VyaWYiPjxiPlRpbmEgVFNPVSAmbHQ7VGluYS5Uc291LlpvdXRpbmdAaHVhd2VpLmNv
bSZndDs8L2I+DQo8L2ZvbnQ+DQo8cD48Zm9udCBzaXplPTEgZmFjZT0ic2Fucy1zZXJpZiI+MjAx
Mi0wMS0wNCAxMzozNzwvZm9udD4NCjx0ZCB3aWR0aD02MyU+DQo8dGFibGUgd2lkdGg9MTAwJT4N
Cjx0ciB2YWxpZ249dG9wPg0KPHRkPg0KPGRpdiBhbGlnbj1yaWdodD48Zm9udCBzaXplPTEgZmFj
ZT0ic2Fucy1zZXJpZiI+ytW8/sjLPC9mb250PjwvZGl2Pg0KPHRkPjxmb250IHNpemU9MSBmYWNl
PSJzYW5zLXNlcmlmIj4mcXVvdDtzaGFvLndlaXhpYW5nQHp0ZS5jb20uY24mcXVvdDsNCiZsdDtz
aGFvLndlaXhpYW5nQHp0ZS5jb20uY24mZ3Q7PC9mb250Pg0KPHRyIHZhbGlnbj10b3A+DQo8dGQ+
DQo8ZGl2IGFsaWduPXJpZ2h0Pjxmb250IHNpemU9MSBmYWNlPSJzYW5zLXNlcmlmIj6zrcvNPC9m
b250PjwvZGl2Pg0KPHRkPjxmb250IHNpemU9MSBmYWNlPSJzYW5zLXNlcmlmIj4mcXVvdDt2dW1p
cDFAZ21haWwuY29tJnF1b3Q7ICZsdDt2dW1pcDFAZ21haWwuY29tJmd0OywNCiZxdW90O2RjQGll
dGYub3JnJnF1b3Q7ICZsdDtkY0BpZXRmLm9yZyZndDs8L2ZvbnQ+DQo8dHIgdmFsaWduPXRvcD4N
Cjx0ZD4NCjxkaXYgYWxpZ249cmlnaHQ+PGZvbnQgc2l6ZT0xIGZhY2U9InNhbnMtc2VyaWYiPtb3
zOI8L2ZvbnQ+PC9kaXY+DQo8dGQ+PGZvbnQgc2l6ZT0xIGZhY2U9InNhbnMtc2VyaWYiPlJFOiBm
ZWVkYmFjayBmb3IgaHR0cDovL3d3dy5pZXRmLm9yZy9tYWlsLWFyY2hpdmUvd2ViL2RjL2N1cnJl
bnQvbXNnMDAwMzguaHRtbDwvZm9udD48L3RhYmxlPg0KPGJyPg0KPHRhYmxlPg0KPHRyIHZhbGln
bj10b3A+DQo8dGQ+DQo8dGQ+PC90YWJsZT4NCjxicj48L3RhYmxlPg0KPGJyPg0KPGJyPg0KPGJy
Pjxmb250IHNpemU9MiBjb2xvcj0jMWY0OTdkIGZhY2U9IkNhbGlicmkiPkRlYXIgV2VpeGlhbmcs
PC9mb250Pg0KPHA+PGZvbnQgc2l6ZT0yIGNvbG9yPSMxZjQ5N2QgZmFjZT0iQ2FsaWJyaSI+SGFw
cHkgTmV3IFllYXIhPC9mb250Pg0KPHA+PGZvbnQgc2l6ZT0yIGNvbG9yPSMxZjQ5N2QgZmFjZT0i
Q2FsaWJyaSI+Jm5ic3A7PC9mb250Pg0KPHA+PGZvbnQgc2l6ZT0yIGNvbG9yPSMxZjQ5N2QgZmFj
ZT0iQ2FsaWJyaSI+VGhlIGF1dGhvcnMgYWN0dWFsbHkgaGFkIENsb3VkDQpCcm9rZXJhZ2UgaW5j
bHVkZWQgaW4gdmVyc2lvbiAwMCBvZiB0aGF0IGRyYWZ0IDwvZm9udD48YSBocmVmPSJodHRwOi8v
dG9vbHMuaWV0Zi5vcmcvaWQvZHJhZnQtdHNvdS12cm9tLXByb2JsZW0tc3RhdGVtZW50LTAwLnR4
dCI+PGZvbnQgc2l6ZT0yIGNvbG9yPWJsdWUgZmFjZT0iQ2FsaWJyaSI+PHU+aHR0cDovL3Rvb2xz
LmlldGYub3JnL2lkL2RyYWZ0LXRzb3UtdnJvbS1wcm9ibGVtLXN0YXRlbWVudC0wMC50eHQ8L3U+
PC9mb250PjwvYT48Zm9udCBzaXplPTIgY29sb3I9IzFmNDk3ZCBmYWNlPSJDYWxpYnJpIj4sDQpi
dXQgaXQgc2VlbWVkIHRvbyBlYXJseSB0byBpbmNsdWRlIGl0IGluIElFVEYgc28gaXQgd2FzIHJl
bW92ZWQgYnkgdmVyc2lvbg0KMDIuIDwvZm9udD4NCjxwPjxmb250IHNpemU9MiBjb2xvcj0jMWY0
OTdkIGZhY2U9IkNhbGlicmkiPlRoZSBDU0IgZG9jdW1lbnQgY2FtZSB1cCBsaWtlDQoxMCBkYXlz
IGFmdGVyIHZlcnNpb24gMDAgb2YgZHJhZnQtdHNvdS48L2ZvbnQ+DQo8cD48Zm9udCBzaXplPTIg
Y29sb3I9IzFmNDk3ZCBmYWNlPSJDYWxpYnJpIj5JdCBzZWVtcyBkcmFmdC1zaGFvIGp1c3QgcHV0
DQpkb3duIHRoZSBOSVNUIHNwZWNzIGludG8gdGhlIGRyYWZ0LCB3aGljaCBpcyBub3QgcXVpdGUg
dGhlIGFwcHJvcHJpYXRlDQpsZXZlbCBmb3IgSUVURi4gSSBkbyBiZWxpZXZlIGl0IGNvdWxkIGZp
bmQgYSBwbGFjZSBidXQgbm90IGluIHRoYXQgc2hhcGUuDQpBbHNvLCBhc3N1bWluZyB0aGVyZSBp
cyBhIENTQiBpbiB0aGUgcGF0aCwgdGhlIG9ubHkgcGxhY2UgaXQgY2FuIGdvIGlzDQppbiBzb21l
IGRhdGEgY2VudGVyIHNvbWV3aGVyZSwgd2hlcmUgZWxzZSwgYSBkZXNrdG9wPzwvZm9udD4NCjxw
Pjxmb250IHNpemU9MiBjb2xvcj0jMWY0OTdkIGZhY2U9IkNhbGlicmkiPiZuYnNwOzwvZm9udD4N
CjxwPjxmb250IHNpemU9MyBjb2xvcj0jMWY0OTdkIGZhY2U9IlRpbWVzIE5ldyBSb21hbiI+VGlu
YTwvZm9udD4NCjxwPjxmb250IHNpemU9MiBjb2xvcj0jMWY0OTdkIGZhY2U9IkNhbGlicmkiPiZu
YnNwOzwvZm9udD4NCjxwPjxmb250IHNpemU9MiBmYWNlPSJUYWhvbWEiPjxiPkZyb206PC9iPiBz
aGFvLndlaXhpYW5nQHp0ZS5jb20uY24gW21haWx0bzpzaGFvLndlaXhpYW5nQHp0ZS5jb20uY25d
DQo8Yj48YnI+DQpTZW50OjwvYj4gVHVlc2RheSwgRGVjZW1iZXIgMjcsIDIwMTEgMTE6MDAgUE08
Yj48YnI+DQpUbzo8L2I+IFRpbmEgVFNPVTxiPjxicj4NCkNjOjwvYj4gdnVtaXAxQGdtYWlsLmNv
bTsgZGNAaWV0Zi5vcmc8Yj48YnI+DQpTdWJqZWN0OjwvYj4gZmVlZGJhY2sgZm9yIGh0dHA6Ly93
d3cuaWV0Zi5vcmcvbWFpbC1hcmNoaXZlL3dlYi9kYy9jdXJyZW50L21zZzAwMDM4Lmh0bWw8L2Zv
bnQ+DQo8cD48Zm9udCBzaXplPTMgZmFjZT0iy87M5SI+Jm5ic3A7PC9mb250Pg0KPHA+DQo8dGFi
bGUgd2lkdGg9MTAwJT4NCjx0cj4NCjx0ZCB3aWR0aD0xMDAlPjxmb250IHNpemU9MyBmYWNlPSLL
zszlIj5UaGUgZm9sbG93aW5nIGRyYWZ0IG9uIENsb3VkDQpTZXJ2aWNlIEJyb2tlciBpcyBubyBy
ZWxhdGVkIHRvIDwvZm9udD48YSBocmVmPSJodHRwOi8vdG9vbHMuaWV0Zi5vcmcvaWQvZHJhZnQt
dHNvdS12cm9tLXByb2JsZW0tc3RhdGVtZW50LTAyLnR4dCI+PGZvbnQgc2l6ZT0yIGNvbG9yPWJs
dWUgZmFjZT0iQ2FsaWJyaSI+PHU+aHR0cDovL3Rvb2xzLmlldGYub3JnL2lkL2RyYWZ0LXRzb3Ut
dnJvbS1wcm9ibGVtLXN0YXRlbWVudC0wMi50eHQ8L3U+PC9mb250PjwvYT48Zm9udCBzaXplPTIg
Y29sb3I9IzFmNDk3ZCBmYWNlPSJDYWxpYnJpIj4NCjwvZm9udD48L3RhYmxlPg0KPHA+PGZvbnQg
c2l6ZT0yIGZhY2U9IkNhbGlicmkiPjxicj4NCjwvZm9udD48Zm9udCBzaXplPTIgZmFjZT0iQXJp
YWwiPjxicj4NCmh0dHA6Ly93d3cuaWV0Zi5vcmcvc3RhZ2luZy9kcmFmdC1zaGFvLW9wc2F3Zy1j
bG91ZC1zZXJ2aWNlLWJyb2tlci0wMi50eHQ8L2ZvbnQ+PGZvbnQgc2l6ZT0yIGZhY2U9IkNhbGli
cmkiPjxicj4NCjwvZm9udD48Zm9udCBzaXplPTIgY29sb3I9Ymx1ZSBmYWNlPSJDYWxpYnJpIj48
dT48YnI+DQpkcmFmdC10c291LXZyb20tcHJvYmxlbS1zdGF0ZW1lbnQtMDI8L3U+PC9mb250Pjxm
b250IHNpemU9MiBmYWNlPSJBcmlhbCI+DQo8L2ZvbnQ+PGZvbnQgc2l6ZT0zIGZhY2U9IkFyaWFs
Ij5pcyBqdXN0IGEgcHJvYmxlbSBzdGF0ZW1lbnQgYWJvdXQgZGF0YQ0KY2VudGVyIHZpcnR1YWwg
cmVzb3VyY2VzIG9wZXJhdGlvbnMgYW5kIG1hbmFnZW1lbnQuIDwvZm9udD4NCjxwPjxmb250IHNp
emU9MyBmYWNlPSJBcmlhbCI+QSBjbG91ZCBicm9rZXIgY2FuIGJlIGluIGRhdGEgY2VudGVyLCBv
ciBub3QuDQpJdCBpcyBhIG5ldyByb2xlIGluIGNsb3VkIGVjb3N5c3RlbS4gQnkgdGhlIHdheSwg
dGhlIGRyYWZ0IGlzIGEgc29sdXRpb24uPC9mb250Pjxmb250IHNpemU9MyBmYWNlPSLLzszlIj4N
CjwvZm9udD4NCjxwPg0KPHRhYmxlIHdpZHRoPTEwMCU+DQo8dHI+DQo8dGQgd2lkdGg9MjYlPg0K
PGRpdiBhbGlnbj1jZW50ZXI+PGltZyBzcmM9Y2lkOl8yXzEwMUFCMTg4MTAxQUFEQ0MwMDIyQTNF
RDQ4MjU3OTdCPjwvZGl2Pg0KPHRkIHdpZHRoPTczJT48Zm9udCBzaXplPTMgZmFjZT0iy87M5SI+
Jm5ic3A7PC9mb250Pg0KPHA+DQo8YnI+DQo8dGFibGUgd2lkdGg9MTAwJT4NCjx0cj4NCjx0ZCBj
b2xzcGFuPTI+PGZvbnQgc2l6ZT0zIGZhY2U9IruqzsS3wsvOIj48Yj48aT5TaGFvIDwvaT48L2I+
PC9mb250Pjxmb250IHNpemU9MSBmYWNlPSJBcmlhbCI+PGI+PGk+V2VpeGlhbmc8L2k+PC9iPjwv
Zm9udD48Zm9udCBzaXplPTMgZmFjZT0iu6rOxLfCy84iPjxiPjxpPg0KydvOsM/oPC9pPjwvYj48
L2ZvbnQ+PGZvbnQgc2l6ZT0zIGZhY2U9IsvOzOUiPiA8L2ZvbnQ+PGZvbnQgc2l6ZT0xIGZhY2U9
IkFyaWFsIj48YnI+DQpTdGFuZGFyZCBEZXZlbG9wbWVudCBBbmQgSW5kdXN0cnkgUmVsYXRpb25z
IERlcHQuPC9mb250Pjxmb250IHNpemU9MyBmYWNlPSLLzszlIj4NCjwvZm9udD4NCjx0cj4NCjx0
ZCByb3dzcGFuPTI+PGltZyBzcmM9Y2lkOl8yXzEwMUFEODM4MTAxQUQ0N0MwMDIyQTNFRDQ4MjU3
OTdCPg0KPHRkPjxmb250IHNpemU9MSBjb2xvcj0jOTA5MDkwIGZhY2U9IkFyaWFsIj48Yj5Qcm9k
dWN0IFImYW1wO0QgU3lzdGVtIDwvYj48L2ZvbnQ+DQo8dHI+DQo8dGQ+PGZvbnQgc2l6ZT0xIGNv
bG9yPSM5MDkwOTAgZmFjZT0iy87M5SI+PGI+svrGt9HQt6LM5c+1PC9iPjwvZm9udD48Zm9udCBz
aXplPTMgZmFjZT0iy87M5SI+DQo8L2ZvbnQ+DQo8dHI+DQo8dGQgY29sc3Bhbj0yPjxmb250IHNp
emU9MSBmYWNlPSJBcmlhbCI+RTMwNSxOby44ODksQmlibyBSZCxaaGFuZ2ppYW5nDQpIaS1UZWNo
IFBhcmssUHVkb25nLFNoYW5naGFpIDxicj4NClAuUi5DaGluYSwgMjAxMjAzPGJyPg0KVGVsOis4
Ni0yMS02ODg5Njk3Njxicj4NCk1vYmlsZTorODYtMTM5MTY2MTU4MTc8YnI+DQpFbWFpbDpzaGFv
LndlaXhpYW5nQHp0ZS5jb20uY24gPC9mb250PjwvdGFibGU+DQo8YnI+PC90YWJsZT4NCjxwPjxm
b250IHNpemU9MyBmYWNlPSLLzszlIj48YnI+DQo8L2ZvbnQ+PGZvbnQgc2l6ZT0xIGZhY2U9IkFy
aWFsIj48YnI+DQogPC9mb250Pjxmb250IHNpemU9MyBmYWNlPSLLzszlIj4mbmJzcDs8L2ZvbnQ+
DQo8cD4NCg==
--=_alternative 0022A3F94825797B_=--
--=_related 0022A3F64825797B_=
Content-Type: image/jpeg
Content-ID: <_2_0449CC440449C8880022A3ED4825797B>
Content-Transfer-Encoding: base64

/9j/4AAQSkZJRgABAQEAeAB4AAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0a
HBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIy
MjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjL/wAARCACEAIkDASIA
AhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQA
AAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3
ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWm
p6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEA
AwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSEx
BhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElK
U1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3
uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD3+iii
gBKa8iRxs7sFVRkknAFUdW1ez0Wwe8vZQka9B3Y+gHc14t4q8X6j4lkaIs9tp4PyW6HBcf7Z7/Su
ihhpVXpsZVKqgjY134l6rpniC6g026sr+xDAoxQgqT1XI649am034uald3dvaPokUssziNRFMQSS
fcV51HZvPMlvBE8kjcJHGMk/hXe/Czw6Z9Yn1e5jxHaZiiDD/lp3P4dK9KrRoU6d2tUcMKlWc9Hu
ewISUBYbWwMjrink4oyByTgVBJeW0X+snjX6sK8Wzex6ZIz1Cz00ajYvwLqEn/eFTBYZRlSrD1U5
pSjIV77EHmsv3Wx9act4BxIMe4oltm6o2fY1Ql3xnawwfesZScdRNtGwrq4ypyPWnVgpcPC25G57
jsa07S9S5GPuuOqmqhUUtAUky3RRRWpQUUUUAJVTUNQg0yxlu7lwkUYyff2q3Xl3jTWW1XUDaQt/
otsSOOjv6/hWtGn7SViJy5Vc5rxDrN34hv2ubg7Y1OIYc8Rj+pqLRfDl7r979ntF2ouPNmb7sY/q
farml6NPq2oR2kHDNyzkcIvrXrVpaWHhvSBFGBHBGMk93b19ya9CrXVGPJDc5o0+d80irofhrS/D
FqWhUebjMlxJjc349h7Cqr6zBaB4NJtUVWcsz4wCxPJx3qheahPq82XykAPyRD+Z9TU9taZA4rk5
X8U9WOVTpArzS313zPcSMD2BwPyqo9gDywz9ea6ZNObZkrhfWkaxXptzVKpFaIhwk9zkJtPT+7+l
UytzaNvtriWI/wCwxFdlcWG0ZZMehNZNzZDnitoVE9GYTUobFSy8calYSBL+IXUI6sBtcf412mma
xpuv2pe2lEgH3424ZD7ivOL2zxnisTzLnTbxbuyleGdD95e49CO4onhYVFeOgU8ZKLtM9bvrN7cF
1+eL9VrOEzRsrKcOvQ1L4S8W2/iO2aCUCK/iH72Lsw/vL6inatZfZG3pzA3GP7prwsTQlTfod11J
c8TY07UFvYsHiReGFXq4iC7e1uFnQ/MOo9RXY206XMCTRnKMMiroVudWe5cJ8ysT0UUV0GhjeJNQ
OnaPLIhxLJ+7T6nvXl5hxyfxPcmu18XTGa+htx92JdxHuelZGkacL7V7eFhlA29/oP8A69d9C1On
zMxn7zsdR4S0ZdL0wSyLi5uAHcnqB2H5Vh61qratqJiQ/wCiwthR/ePr/hXQeK9TOnaK4jOJZj5a
Y7ep/KuO0uMEqMcCopRc26kjHEVOX3EbNlB0zXQWtuEUOwxnoO9U9LthJukf7i8n3NaSvvbd0HQD
0FZVJ3diqUNLssAO4G48e1MdWU8GpUbApkhFYHT0IpJhIuyRc+9UZrCOcHyW+YfwnrVmTFRNG8sZ
ePO9O464q07bGUlzaM5u+s2Qsrrhh1Brl9StMAkCvSfNgvl+z3eFk6K49a5XWtOe2laOQe6kdGFd
dCtrZnm4ijZXR559pudI1GK/s3MdxC25T2I7qfUEV7do2qWvinw/FeRj5Jlw6d0buPzrxrVoNu41
tfCzWjZeILjR5G/c3amSMHoJAOfzFdGMoqpT510FgazjLkfU6e7ia1uJIH+8hwT6j1rW8M3+JpLN
zw3zpn9ab4rttphu17/u2/pXO2l2bW/guAcbXGfpmvkXL2FY7XLkmemUtNRgyhh0IzTq9dHYcPqy
mbVLlzz820fhV/wvbBbm4mI5ChRUNzETdTE93J/WtbQE2Qz+pYfyrrnL93YhLU5Lx5eGTXLW0B+W
GLeR7k/4CmaPGzyKqjJIrJ8WTF/Gt4rfwBFH0x/9eup8GvEZnVgN+3IJ7Ct5Lkoqx51+evZm+6i1
torZfvYyx96WNuwpt6yveYXk4Ap6MsOP4n/QV57Z3LctIjEZY7R70rPCnT5jVUyu/wB459u1NLVH
MXzdiZ7k/wAKIPwpkd6yygP908YAxUDGoJDRzIhye4l+Qs7q8SEHkdqZGkWrQNZTkh1GY3PUVPqQ
3QwTf3htNY0jvEd6MVccgjtVxlqY1HyvU4/xJp7WNzJBJ94d/UetcfYXjaZ4i0+8U7TDcISR1xnB
/Q13/iWX7dFHcv8Afb9259CK801D5ZGYdVOR+de1QbnTszynaNZOJ9F+IohcaDcEc7VDg/rXnUj5
U+4/Wu81HUre28PRJcEq1xbBV+UkEleledM+Bj8PWvisyklUVj0cVJXVj1fR5vtGkWkpPLRir9cx
4R1W1m062sFkLXMaZdQpwoye+MV09eph5qdNNHbTkpRTRg3MGLiT/ezVvSV2eavrg1NcQ5lz6022
Xy5/Zhg11uV42KtqeUeNwbbx1c56SxxuPfjH9K3PC1wEFzL3WLiqvxXsWhvNO1VBhSDA59+o/rUP
g0TXkN75S5VYhuNdzfNQTPLknDENHcWjtLZBhgzqvPrilWQVi6XqLRyiRfvZ5HqPSt2eFZoftVr8
yHlkHUHvXmTi07HVCfMroBJ70GSqXm0hmFZ8o/aItmQetRtIDVUy00y+9DiJ1DUuSG0aJu6vj+dY
VycKSa0riYDRYV7tIawb642x7Ryx4A96qKbehnWkhunaVDqtvqLXMhSKJcjno2DzXk92nnXiQp8x
eQIPfLAV2+qXkljDLbJKVaRC0wU8HjgVheDtNOseN7CHbujhfz5PZV6frivYoXhBzZxJKc4xSPdL
rS4b7SFsZl+URgA91IHBry670y8t9VOnNGXuC21AOjj+99K9h5qFrSBrlLlokM6qVVyOQK+cxeEj
XaezPUrYdVLGd4e0SLRbJYwA078yvjqf8K2aQZpa6qcIwiox2NoxUVZCMobrUZi9KloqyjD8U6Iv
iHw9dWB4kZd0TH+Fx0P51434Z1i90i7ntSxhd8wzRnqDyK9/ry74jeEZBOfEOnRFiMG7iTqwH8Y9
x3rswtVa05bM4sXSbXPHdFCwv9p2MeQcE+9dNp2rvbuGQ5B4ZT0YVyOqaro2oQ2VxpC+XOV2zpja
M4H61HBqDpgOCp962nQc1fY89VXTla56Q622pKZbRgk/Vo24yaypmeCQpIpRveuei1MghgxBHQg4
NaSeJHeMR3KJOnTLcEfjXN9XkjV4mE99CyZ6a09VRd2N0wW389JG/gVd3Nbmm+HZZ8S3blYuoUjB
I96iUFHcceab90p39yRHa2yDfIqZ2jk5NZl/KmkQ+bOQ94w/dxddnuav65rljotxLBYx+Zdt9+Zu
QvtXnuo6i88jyOxZ2OSSckmuihh3J3aIr1FF2vdlTUrxm812bczdT3r0b4TaCbPSJtZuEKzXxxHn
qIgePz61xPhLwvN4s1UGRSumQMDPJ0Dn+4P617siJBEkUahUUBVVegGK0xlRRXso/M3wNJ/xGT0U
g6UteaemFFFFABRRRQA3cBTGkjwQzLg9QTWP4qspLzQpxCzrLGPMXaSCcdq8qimkdvmlkP1Y13YX
BfWIuSlax4+YZo8JUUHC9zoPFXw+WSeXUPDuxmb5prMNwx9V9DXExX9xaytb3CEOhw0Uy4K16H4b
12HSkaGa3yrnJlXlvx9a6K+0rw/4rhBnSKZwOHU7ZF/HrXT7WdB8lVXj3Mowp4qPPTdpdjyuHUrM
8yWPP+xIRV+LWNJi5/skyEdpJjitm++FkqMW03Usr2S4XP4bhWRN4A8RwE7beGUDoUmHP4EVtGph
pr4jCWHxFN/D+paTxpcwr5en2Nnaj+8q5IHvWfd+KdSlJ/0+Uk/ebOAfYDsKkj8D+JpV2ixjjB/v
TAVpWnwv1OYg3t/DAncRAufzNJvCw1uhqni56Wf5HE3N8WLOzZLHk9ya3fDngPUfEMiXF6slnp5w
SWGJJR6D0Hua9D0rwToOhMJjF9ouB/y1uDuOfYdq3Hvdw2xDCjjca56uMuuWivmddDAqD5qrv5BY
2Vpo9lHZWcKxQxjCov8AnmrCkkknrVVOeevqatxIT8x6V58lbVnqRd9ETL0paKKzNAooooAKKKKA
GMoZSCOCMV58fAtxJcXjpKsYEpNupGQw68/yr0Okwa2o4idG/I9zkxODpYm3tFex5Pcadeac+y6g
dDnhuoP41JCzIQyMVI6FTivUZIUlQpKgdT1DDIrIufC2nTksiNC3rGePyrujmEZK1RHmvKpU3ekz
nrXWdQhAAuXZR2cA1ox+IL3HzCNvfBFOfwnKh/dXKsPR15po8O3y9DC3vk/4VMpYeWprCGJho7k4
1y7boI1+goOoXUo+aUgHqF4pI9Cu+5jH41ci0SQf6yYfgKzbox2OiMa0tyopJOScnuTzVuCNpD8q
5/lV2HToY8ZBc+rGraqFGFGBWMqq2R0QovdkENqEwW5PpVjtS0Vg23ubpJKyCiiikMKKKKACiiig
AooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKAP/9k=
--=_related 0022A3F64825797B_=
Content-Type: image/jpeg
Content-ID: <_2_0449ED800449E9C40022A3ED4825797B>
Content-Transfer-Encoding: base64

/9j/4AAQSkZJRgABAQEAeAB4AAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0a
HBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIy
MjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjL/wAARCAAjAGQDASIA
AhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQA
AAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3
ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWm
p6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEA
AwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSEx
BhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElK
U1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3
uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwDqvEut
+OLbxHew6ZFeGyVwISlmHUjA77TnnNUfCXjXxFqfiqxsbu+8yCR2EiCFQThWPYeor0Dxrq/9ieFb
y4RsTSL5MPrubj9Bk/hXCfCTR/NvrvWJFysA8iIn+8cFj+AwPxrvhKLoSlKK00MGmppJmXe+MvHG
nEG8muLZXJCGa0Vd2PTK1r6D4k8YXMd7cX7XAtF0+aeKVrYKhcLlSGxz/WrXxj/499H/AN+X+S10
Vv8A8knH/YJP/ounKUXSjLlWoknzNX2PObPxl431EN9iuLm5KY3eTaq+3OeuFr0Pwvrt1b+GTe+L
LtbKU3LRiS92wDHG0c4HrXnngPxbZ+FjfG8guJftATb5IBxt3dcketWfjJrdv4i+EMOoW0ckcTai
ihZAAeA47ZqcWuXRRsu46Tvrct/Fv4lnStCsJvCXiKxe6e5KzfZpIpzs2nqOcc4ruNL8ceHJdLsn
ufEmk/aXgQyBryMHeVGeM8c181+MPh7Z+G/APh/xFBfTzTaosZeJ1AVN0e/jHNS+P/h7Z+C9N8OX
lte3Fw2pqXdZVACYCHjH++a4Tc99v/iLBa/EE+DvsjxXUtsXtrqY4jkmIyq/7p5GfUYx3rH+G3xD
vde8M+ILnX2jF/pEkkkwVQoWPaWAx7FWHc8CuV+M+o6FeeIdEex1q3tNe0u7WKYyI48lPvhiQvO0
gdM/erk/EOs6LpEfjC80TxBb3sniGTyks7eGRfLjZ97uxYAD+JQBnh85oA9r+FHjLVPGvhN9Q1W3
hjminaASRcCXAU529uv/AOqu2vXaOxuJEOGSNip9CAa4L4PX2g/8IVaaPpOoRXVzZRLJe+WjDbJI
Sx5IGecj/gNd5qP/ACDbr/ri/wDI0AfMHjvxJBcXekz60L66vJLDJkhn8oFRPKBwMc8Gis7xlpVn
fHRZbjxBpunuNP2iK6juGYjz5vmHlxOMc4654PHTJWiqNKxyTwVOcnJt6+bPWPi1rBuNWtdIhJK2
6+bIo7u3Qfl/6FXonhLR/wCwvDVnZFcShN8vu7ct/h+Fed+Ifh94jv8AxFfX0HkTRzzGSNzLtYDs
Me3H5VNoPgzxbZeIbC6vJibaKZWlH2wtkfTvXXNU5UVGMlp+ZaclNtotfGP/AI99HP8A00l/ktdD
b/8AJJh/2CT/AOi6qfEXwzqfiSLT001I3MDOX3ybcZAx/I1t2mj3A8Dx6PMVjuTZG3Yg7grbcVk5
x9jBX1TL5Xzs82+Gfh/S9eOpjU7QXHkiPy9zEYzuz0I9BWr8WfCMs3w0GleHdOkk8m7Sf7PDl2x8
24gHk8sKwovhz4vti3keXHng+Xd7c/livSfA+lano+gta6s5a5M7OCZfM+U4xz+dXi7SfMp38hUr
rRo+Y/F2reNJvC2k6P4i0qay02xKx2rS2bRElU2gbj1+Wr2vN8RPGdnpEOoeHL57axUC2aHT3UFS
FGc454Uc17V8ZvB+s+M9A0+z0aCOWaC6MriSQIAuwjv9a77SoJLXR7K3mAEkUEcbgHOCFANcRseF
/G7QfCOjGTVZUuJvEOpSq6wifChRgMxXsMDA9z7GqXj/AOHnhjwx4CttfsdH1Ez3Dw7457j/AI9w
3zEPgf8AAfqfwrvJvhMupfFa68U6rdfabAMktvauxYlwBw2f4FIyB+HQc+iavpNnrWlXWnX0Qltb
mMxyIe4P9R1HoaAOQ+F/hvwrpmjHWfCzTmDUo0LiabftK5+X2ILEGu11H/kG3X/XF/5GuF+GHw9u
fATazFLftcW1xOptlDEAIAfmK9A5zg/7oru71GewuERSWaNgAO5waAPjr4if8fWh/wDYM/8Abiei
r3xL0y4sNR0a2voJLe4TTRujbGRmeY/1orvpZbXqQU42s/MlzSPr+iiiuAoKKKKACiiigAooooAK
KKKACiiigDmfEXw+8L+K9Qjvtb0z7VcxxCFX+0SphASQMKwHVj+dFFFaxr1Yqyk0vVisj//Z
--=_related 0022A3F64825797B_=
Content-Type: image/jpeg
Content-ID: <_2_101AB188101AADCC0022A3ED4825797B>
Content-Transfer-Encoding: base64

/9j/4AAQSkZJRgABAQEAeAB4AAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0a
HBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIy
MjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjL/wAARCACEAIkDASIA
AhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQA
AAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3
ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWm
p6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEA
AwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSEx
BhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElK
U1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3
uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD3+iii
gBKa8iRxs7sFVRkknAFUdW1ez0Wwe8vZQka9B3Y+gHc14t4q8X6j4lkaIs9tp4PyW6HBcf7Z7/Su
ihhpVXpsZVKqgjY134l6rpniC6g026sr+xDAoxQgqT1XI649am034uald3dvaPokUssziNRFMQSS
fcV51HZvPMlvBE8kjcJHGMk/hXe/Czw6Z9Yn1e5jxHaZiiDD/lp3P4dK9KrRoU6d2tUcMKlWc9Hu
ewISUBYbWwMjrink4oyByTgVBJeW0X+snjX6sK8Wzex6ZIz1Cz00ajYvwLqEn/eFTBYZRlSrD1U5
pSjIV77EHmsv3Wx9act4BxIMe4oltm6o2fY1Ql3xnawwfesZScdRNtGwrq4ypyPWnVgpcPC25G57
jsa07S9S5GPuuOqmqhUUtAUky3RRRWpQUUUUAJVTUNQg0yxlu7lwkUYyff2q3Xl3jTWW1XUDaQt/
otsSOOjv6/hWtGn7SViJy5Vc5rxDrN34hv2ubg7Y1OIYc8Rj+pqLRfDl7r979ntF2ouPNmb7sY/q
farml6NPq2oR2kHDNyzkcIvrXrVpaWHhvSBFGBHBGMk93b19ya9CrXVGPJDc5o0+d80irofhrS/D
FqWhUebjMlxJjc349h7Cqr6zBaB4NJtUVWcsz4wCxPJx3qheahPq82XykAPyRD+Z9TU9taZA4rk5
X8U9WOVTpArzS313zPcSMD2BwPyqo9gDywz9ea6ZNObZkrhfWkaxXptzVKpFaIhwk9zkJtPT+7+l
UytzaNvtriWI/wCwxFdlcWG0ZZMehNZNzZDnitoVE9GYTUobFSy8calYSBL+IXUI6sBtcf412mma
xpuv2pe2lEgH3424ZD7ivOL2zxnisTzLnTbxbuyleGdD95e49CO4onhYVFeOgU8ZKLtM9bvrN7cF
1+eL9VrOEzRsrKcOvQ1L4S8W2/iO2aCUCK/iH72Lsw/vL6inatZfZG3pzA3GP7prwsTQlTfod11J
c8TY07UFvYsHiReGFXq4iC7e1uFnQ/MOo9RXY206XMCTRnKMMiroVudWe5cJ8ysT0UUV0GhjeJNQ
OnaPLIhxLJ+7T6nvXl5hxyfxPcmu18XTGa+htx92JdxHuelZGkacL7V7eFhlA29/oP8A69d9C1On
zMxn7zsdR4S0ZdL0wSyLi5uAHcnqB2H5Vh61qratqJiQ/wCiwthR/ePr/hXQeK9TOnaK4jOJZj5a
Y7ep/KuO0uMEqMcCopRc26kjHEVOX3EbNlB0zXQWtuEUOwxnoO9U9LthJukf7i8n3NaSvvbd0HQD
0FZVJ3diqUNLssAO4G48e1MdWU8GpUbApkhFYHT0IpJhIuyRc+9UZrCOcHyW+YfwnrVmTFRNG8sZ
ePO9O464q07bGUlzaM5u+s2Qsrrhh1Brl9StMAkCvSfNgvl+z3eFk6K49a5XWtOe2laOQe6kdGFd
dCtrZnm4ijZXR559pudI1GK/s3MdxC25T2I7qfUEV7do2qWvinw/FeRj5Jlw6d0buPzrxrVoNu41
tfCzWjZeILjR5G/c3amSMHoJAOfzFdGMoqpT510FgazjLkfU6e7ia1uJIH+8hwT6j1rW8M3+JpLN
zw3zpn9ab4rttphu17/u2/pXO2l2bW/guAcbXGfpmvkXL2FY7XLkmemUtNRgyhh0IzTq9dHYcPqy
mbVLlzz820fhV/wvbBbm4mI5ChRUNzETdTE93J/WtbQE2Qz+pYfyrrnL93YhLU5Lx5eGTXLW0B+W
GLeR7k/4CmaPGzyKqjJIrJ8WTF/Gt4rfwBFH0x/9eup8GvEZnVgN+3IJ7Ct5Lkoqx51+evZm+6i1
torZfvYyx96WNuwpt6yveYXk4Ap6MsOP4n/QV57Z3LctIjEZY7R70rPCnT5jVUyu/wB459u1NLVH
MXzdiZ7k/wAKIPwpkd6yygP908YAxUDGoJDRzIhye4l+Qs7q8SEHkdqZGkWrQNZTkh1GY3PUVPqQ
3QwTf3htNY0jvEd6MVccgjtVxlqY1HyvU4/xJp7WNzJBJ94d/UetcfYXjaZ4i0+8U7TDcISR1xnB
/Q13/iWX7dFHcv8Afb9259CK801D5ZGYdVOR+de1QbnTszynaNZOJ9F+IohcaDcEc7VDg/rXnUj5
U+4/Wu81HUre28PRJcEq1xbBV+UkEleledM+Bj8PWvisyklUVj0cVJXVj1fR5vtGkWkpPLRir9cx
4R1W1m062sFkLXMaZdQpwoye+MV09eph5qdNNHbTkpRTRg3MGLiT/ezVvSV2eavrg1NcQ5lz6022
Xy5/Zhg11uV42KtqeUeNwbbx1c56SxxuPfjH9K3PC1wEFzL3WLiqvxXsWhvNO1VBhSDA59+o/rUP
g0TXkN75S5VYhuNdzfNQTPLknDENHcWjtLZBhgzqvPrilWQVi6XqLRyiRfvZ5HqPSt2eFZoftVr8
yHlkHUHvXmTi07HVCfMroBJ70GSqXm0hmFZ8o/aItmQetRtIDVUy00y+9DiJ1DUuSG0aJu6vj+dY
VycKSa0riYDRYV7tIawb642x7Ryx4A96qKbehnWkhunaVDqtvqLXMhSKJcjno2DzXk92nnXiQp8x
eQIPfLAV2+qXkljDLbJKVaRC0wU8HjgVheDtNOseN7CHbujhfz5PZV6frivYoXhBzZxJKc4xSPdL
rS4b7SFsZl+URgA91IHBry670y8t9VOnNGXuC21AOjj+99K9h5qFrSBrlLlokM6qVVyOQK+cxeEj
XaezPUrYdVLGd4e0SLRbJYwA078yvjqf8K2aQZpa6qcIwiox2NoxUVZCMobrUZi9KloqyjD8U6Iv
iHw9dWB4kZd0TH+Fx0P51434Z1i90i7ntSxhd8wzRnqDyK9/ry74jeEZBOfEOnRFiMG7iTqwH8Y9
x3rswtVa05bM4sXSbXPHdFCwv9p2MeQcE+9dNp2rvbuGQ5B4ZT0YVyOqaro2oQ2VxpC+XOV2zpja
M4H61HBqDpgOCp962nQc1fY89VXTla56Q622pKZbRgk/Vo24yaypmeCQpIpRveuei1MghgxBHQg4
NaSeJHeMR3KJOnTLcEfjXN9XkjV4mE99CyZ6a09VRd2N0wW389JG/gVd3Nbmm+HZZ8S3blYuoUjB
I96iUFHcceab90p39yRHa2yDfIqZ2jk5NZl/KmkQ+bOQ94w/dxddnuav65rljotxLBYx+Zdt9+Zu
QvtXnuo6i88jyOxZ2OSSckmuihh3J3aIr1FF2vdlTUrxm812bczdT3r0b4TaCbPSJtZuEKzXxxHn
qIgePz61xPhLwvN4s1UGRSumQMDPJ0Dn+4P617siJBEkUahUUBVVegGK0xlRRXso/M3wNJ/xGT0U
g6UteaemFFFFABRRRQA3cBTGkjwQzLg9QTWP4qspLzQpxCzrLGPMXaSCcdq8qimkdvmlkP1Y13YX
BfWIuSlax4+YZo8JUUHC9zoPFXw+WSeXUPDuxmb5prMNwx9V9DXExX9xaytb3CEOhw0Uy4K16H4b
12HSkaGa3yrnJlXlvx9a6K+0rw/4rhBnSKZwOHU7ZF/HrXT7WdB8lVXj3Mowp4qPPTdpdjyuHUrM
8yWPP+xIRV+LWNJi5/skyEdpJjitm++FkqMW03Usr2S4XP4bhWRN4A8RwE7beGUDoUmHP4EVtGph
pr4jCWHxFN/D+paTxpcwr5en2Nnaj+8q5IHvWfd+KdSlJ/0+Uk/ebOAfYDsKkj8D+JpV2ixjjB/v
TAVpWnwv1OYg3t/DAncRAufzNJvCw1uhqni56Wf5HE3N8WLOzZLHk9ya3fDngPUfEMiXF6slnp5w
SWGJJR6D0Hua9D0rwToOhMJjF9ouB/y1uDuOfYdq3Hvdw2xDCjjca56uMuuWivmddDAqD5qrv5BY
2Vpo9lHZWcKxQxjCov8AnmrCkkknrVVOeevqatxIT8x6V58lbVnqRd9ETL0paKKzNAooooAKKKKA
GMoZSCOCMV58fAtxJcXjpKsYEpNupGQw68/yr0Okwa2o4idG/I9zkxODpYm3tFex5Pcadeac+y6g
dDnhuoP41JCzIQyMVI6FTivUZIUlQpKgdT1DDIrIufC2nTksiNC3rGePyrujmEZK1RHmvKpU3ekz
nrXWdQhAAuXZR2cA1ox+IL3HzCNvfBFOfwnKh/dXKsPR15po8O3y9DC3vk/4VMpYeWprCGJho7k4
1y7boI1+goOoXUo+aUgHqF4pI9Cu+5jH41ci0SQf6yYfgKzbox2OiMa0tyopJOScnuTzVuCNpD8q
5/lV2HToY8ZBc+rGraqFGFGBWMqq2R0QovdkENqEwW5PpVjtS0Vg23ubpJKyCiiikMKKKKACiiig
AooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKAP/9k=
--=_related 0022A3F64825797B_=
Content-Type: image/jpeg
Content-ID: <_2_101AD838101AD47C0022A3ED4825797B>
Content-Transfer-Encoding: base64

/9j/4AAQSkZJRgABAQEAeAB4AAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0a
HBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIy
MjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjL/wAARCAAjAGQDASIA
AhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQA
AAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3
ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWm
p6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEA
AwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSEx
BhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElK
U1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3
uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwDqvEut
+OLbxHew6ZFeGyVwISlmHUjA77TnnNUfCXjXxFqfiqxsbu+8yCR2EiCFQThWPYeor0Dxrq/9ieFb
y4RsTSL5MPrubj9Bk/hXCfCTR/NvrvWJFysA8iIn+8cFj+AwPxrvhKLoSlKK00MGmppJmXe+MvHG
nEG8muLZXJCGa0Vd2PTK1r6D4k8YXMd7cX7XAtF0+aeKVrYKhcLlSGxz/WrXxj/499H/AN+X+S10
Vv8A8knH/YJP/ounKUXSjLlWoknzNX2PObPxl431EN9iuLm5KY3eTaq+3OeuFr0Pwvrt1b+GTe+L
LtbKU3LRiS92wDHG0c4HrXnngPxbZ+FjfG8guJftATb5IBxt3dcketWfjJrdv4i+EMOoW0ckcTai
ihZAAeA47ZqcWuXRRsu46Tvrct/Fv4lnStCsJvCXiKxe6e5KzfZpIpzs2nqOcc4ruNL8ceHJdLsn
ufEmk/aXgQyBryMHeVGeM8c181+MPh7Z+G/APh/xFBfTzTaosZeJ1AVN0e/jHNS+P/h7Z+C9N8OX
lte3Fw2pqXdZVACYCHjH++a4Tc99v/iLBa/EE+DvsjxXUtsXtrqY4jkmIyq/7p5GfUYx3rH+G3xD
vde8M+ILnX2jF/pEkkkwVQoWPaWAx7FWHc8CuV+M+o6FeeIdEex1q3tNe0u7WKYyI48lPvhiQvO0
gdM/erk/EOs6LpEfjC80TxBb3sniGTyks7eGRfLjZ97uxYAD+JQBnh85oA9r+FHjLVPGvhN9Q1W3
hjminaASRcCXAU529uv/AOqu2vXaOxuJEOGSNip9CAa4L4PX2g/8IVaaPpOoRXVzZRLJe+WjDbJI
Sx5IGecj/gNd5qP/ACDbr/ri/wDI0AfMHjvxJBcXekz60L66vJLDJkhn8oFRPKBwMc8Gis7xlpVn
fHRZbjxBpunuNP2iK6juGYjz5vmHlxOMc4654PHTJWiqNKxyTwVOcnJt6+bPWPi1rBuNWtdIhJK2
6+bIo7u3Qfl/6FXonhLR/wCwvDVnZFcShN8vu7ct/h+Fed+Ifh94jv8AxFfX0HkTRzzGSNzLtYDs
Me3H5VNoPgzxbZeIbC6vJibaKZWlH2wtkfTvXXNU5UVGMlp+ZaclNtotfGP/AI99HP8A00l/ktdD
b/8AJJh/2CT/AOi6qfEXwzqfiSLT001I3MDOX3ybcZAx/I1t2mj3A8Dx6PMVjuTZG3Yg7grbcVk5
x9jBX1TL5Xzs82+Gfh/S9eOpjU7QXHkiPy9zEYzuz0I9BWr8WfCMs3w0GleHdOkk8m7Sf7PDl2x8
24gHk8sKwovhz4vti3keXHng+Xd7c/livSfA+lano+gta6s5a5M7OCZfM+U4xz+dXi7SfMp38hUr
rRo+Y/F2reNJvC2k6P4i0qay02xKx2rS2bRElU2gbj1+Wr2vN8RPGdnpEOoeHL57axUC2aHT3UFS
FGc454Uc17V8ZvB+s+M9A0+z0aCOWaC6MriSQIAuwjv9a77SoJLXR7K3mAEkUEcbgHOCFANcRseF
/G7QfCOjGTVZUuJvEOpSq6wifChRgMxXsMDA9z7GqXj/AOHnhjwx4CttfsdH1Ez3Dw7457j/AI9w
3zEPgf8AAfqfwrvJvhMupfFa68U6rdfabAMktvauxYlwBw2f4FIyB+HQc+iavpNnrWlXWnX0Qltb
mMxyIe4P9R1HoaAOQ+F/hvwrpmjHWfCzTmDUo0LiabftK5+X2ILEGu11H/kG3X/XF/5GuF+GHw9u
fATazFLftcW1xOptlDEAIAfmK9A5zg/7oru71GewuERSWaNgAO5waAPjr4if8fWh/wDYM/8Abiei
r3xL0y4sNR0a2voJLe4TTRujbGRmeY/1orvpZbXqQU42s/MlzSPr+iiiuAoKKKKACiiigAooooAK
KKKACiiigDmfEXw+8L+K9Qjvtb0z7VcxxCFX+0SphASQMKwHVj+dFFFaxr1Yqyk0vVisj//Z
--=_related 0022A3F64825797B_=--


From pfrejborg@gmail.com  Tue Jan  3 23:52:21 2012
Return-Path: <pfrejborg@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 4CDF121F8622 for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 23:52:21 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.599
X-Spam-Level: 
X-Spam-Status: No, score=-3.599 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id Z-cSLHqSdWSd for <dc@ietfa.amsl.com>; Tue,  3 Jan 2012 23:52:20 -0800 (PST)
Received: from mail-wi0-f172.google.com (mail-wi0-f172.google.com [209.85.212.172]) by ietfa.amsl.com (Postfix) with ESMTP id C524A21F85FC for <dc@ietf.org>; Tue,  3 Jan 2012 23:52:19 -0800 (PST)
Received: by wibhj6 with SMTP id hj6so12125790wib.31 for <dc@ietf.org>; Tue, 03 Jan 2012 23:52:19 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; bh=HGFZWQto1Fzr194sMgis8rxRZ1Tj19hhBgCECaNYups=; b=rZaEHn6irSgKFVYRfOGVoQAtstAAkf6Fy2FBhdD0GDmB1R106Rcj7fUg77VvIRUNg3 0resAtFV6SnDuJ6c+tMjb8YUQUGwUYcQGBp+pzcdUgRTlvkD9M097aBEZL7lYBbNHtY/ KsK4GFkyQaQpb04Ym0aQfgUNnDcaM2oNTkeO0=
MIME-Version: 1.0
Received: by 10.180.75.7 with SMTP id y7mr23120763wiv.2.1325663538908; Tue, 03 Jan 2012 23:52:18 -0800 (PST)
Received: by 10.227.184.5 with HTTP; Tue, 3 Jan 2012 23:52:18 -0800 (PST)
In-Reply-To: <201201032055.q03KtgnA016017@cichlid.raleigh.ibm.com>
References: <13205C286662DE4387D9AF3AC30EF456D74E7BD952@EMBX01-WF.jnpr.net> <6665BC1FEA04AB47B1F75FA641C43BC0926D106C@FHDP1LUMXC7V41.us.one.verizon.com> <201112211830.pBLIUiZA017188@cichlid.raleigh.ibm.com> <13205C286662DE4387D9AF3AC30EF456D74E91DBFD@EMBX01-WF.jnpr.net> <CANtnpwj3hCD4UbidDzG=4xChJOaQ1T8mLqQkDUWxoRZV1hjuYA@mail.gmail.com> <201112281650.pBSGo7Mn011365@cichlid.raleigh.ibm.com> <CANtnpwgKKh_6emFK2Gx_WfqU929UK3rzQmh1cuWxoJFGH6eHUw@mail.gmail.com> <2E742C02-F621-497D-AE06-6A91EEEBA498@cdl.asgaard.org> <201201032055.q03KtgnA016017@cichlid.raleigh.ibm.com>
Date: Wed, 4 Jan 2012 09:52:18 +0200
Message-ID: <CAHfUk+VEYzCY346A_5fQ+etWskfVNgDbt_qTR0H8eRVcpdsROQ@mail.gmail.com>
From: Patrick Frejborg <pfrejborg@gmail.com>
To: Thomas Narten <narten@us.ibm.com>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable
Cc: Ronald Bonica <rbonica@juniper.net>, Bhumip Khasnabish <vumip1@gmail.com>, dc@ietf.org, Christopher LILJENSTOLPE <ietf@cdl.asgaard.org>, "So, Ning" <ning.so@verizon.com>
Subject: Re: [dc] Elevator Pitch (was: Scoping the Interim meeting)
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 04 Jan 2012 07:52:21 -0000

On Tue, Jan 3, 2012 at 10:55 PM, Thomas Narten <narten@us.ibm.com> wrote:
> +1 to Chris' comments.
>
> Christopher LILJENSTOLPE <ietf@cdl.asgaard.org> writes:
>
>> Secondly, the question we should be asking is "What does the IETF
>> NEED to do" not "What can the IETF do" We can do a great many
>> things, the bulk of which will not be helpful or a good use of
>> resources.
>
> Agree completely. If those advocating that the IETF "do work" cannot
> answer the above succinctly and in a way that fellow IETFers can
> understand, the only conclusion that can be drawn is that the IETF
> cannot (and should not) take any further action at this time.
>

I have two proposals - solving them would be helpful in the design
and implementation phases of data center networks.

1. PVLAN
There are use cases where PVLAN is needed, e.g. implementing a
separate Ethernet network for backup traffic. In this use case the
host has two NICs - one for "production" traffic (clients reaching
the server) and one for backup. Usually the 0/0 route is assigned to
the "production" NIC, so the backup NIC needs a very large subnet,
allowing the backup node to sit in the very same subnet and avoiding
static routing at the server (if you apply static routing at the
server, two teams end up managing the routing domain, and moves, adds
and changes become an operational nightmare). Because the servers
reside in different security zones it is a bad idea to have the
backup NICs in the same L2 domain - this can be solved with PVLAN, so
that the backup NIC can only reach the node providing the backup
service.
The problem with PVLAN is interoperability between switches and switch
vendors; from RFC 5517, section 3:

   When a VLAN spans multiple switches, there is no standard mechanism
   to propagate port-level isolation information to other switches and,
   consequently, the isolation behavior fails in other switches

2. L2 hardening
I'm not aware of a BCP or informational RFC on how to mitigate L2
attacks such as VLAN hopping, ARP spoofing, MAC spoofing, etc. I
always get into discussions with security officers about how much we
can trust the virtualization of network devices. These issues can be
mitigated on some switches, but with the increased use of hypervisors
they should be addressed in virtual switches as well. An RFC listing
the known L2 attacks and their mitigations might be helpful, so that
vendors pay better attention to these threats.

Patrick

From yakov@juniper.net  Wed Jan  4 05:05:45 2012
Return-Path: <yakov@juniper.net>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 8DCA721F8724 for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 05:05:45 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.289
X-Spam-Level: 
X-Spam-Status: No, score=-106.289 tagged_above=-999 required=5 tests=[AWL=-0.005, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, SARE_MILLIONSOF=0.315, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id znDdeiPuNhDr for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 05:05:45 -0800 (PST)
Received: from exprod7og106.obsmtp.com (exprod7og106.obsmtp.com [64.18.2.165]) by ietfa.amsl.com (Postfix) with ESMTP id C761C21F8722 for <dc@ietf.org>; Wed,  4 Jan 2012 05:05:44 -0800 (PST)
Received: from P-EMHUB02-HQ.jnpr.net ([66.129.224.36]) (using TLSv1) by exprod7ob106.postini.com ([64.18.6.12]) with SMTP ID DSNKTwROmmLI5Tyeilkqo9Pi6Ve+Q+Nw8+pO@postini.com; Wed, 04 Jan 2012 05:05:44 PST
Received: from magenta.juniper.net (172.17.27.123) by P-EMHUB02-HQ.jnpr.net (172.24.192.33) with Microsoft SMTP Server (TLS) id 8.3.213.0; Wed, 4 Jan 2012 05:01:58 -0800
Received: from juniper.net (sapphire.juniper.net [172.17.28.108])	by magenta.juniper.net (8.11.3/8.11.3) with ESMTP id q04D1kS47564; Wed, 4 Jan 2012 05:01:56 -0800 (PST)	(envelope-from yakov@juniper.net)
Message-ID: <201201041301.q04D1kS47564@magenta.juniper.net>
To: Xuxiaohu <xuxiaohu@huawei.com>
In-Reply-To: <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE76405C@szxeml525-mbs.china.huawei.com>
References: <13205C286662DE4387D9AF3AC30EF456D74E7BD952@EMBX01-WF.jnpr.net> <6665BC1FEA04AB47B1F75FA641C43BC0926D106C@FHDP1LUMXC7V41.us.one.verizon.com> <201112211830.pBLIUiZA017188@cichlid.raleigh.ibm.com> <13205C286662DE4387D9AF3AC30EF456D74E91DBFD@EMBX01-WF.jnpr.net> <618BE8B40039924EB9AED233D4A09C5102A4881E@XMB-BGL-416.cisco.com> <4EF7B019.3030202@riw.us> <201112281700.pBSH0kB2011575@cichlid.raleigh.ibm.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE76387F@szxeml525-mbs.china.huawei.com> <229A8E99-6EB2-49E5-B530-BA0F6C7C40AC@asgaard.org> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE7639B3@szxeml525-mbs.china.huawei.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE763ACD@szxeml525-mbs.china.huawei.com> <201201031432.q03EWhS44922@magenta.juniper.net> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE76405C@szxeml525-mbs.china.huawei.com>
X-MH-In-Reply-To: Xuxiaohu <xuxiaohu@huawei.com> message dated "Wed, 04 Jan 2012 03:50:38 +0000."
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <86426.1325682072.1@juniper.net>
Date: Wed, 4 Jan 2012 05:01:12 -0800
From: Yakov Rekhter <yakov@juniper.net>
Cc: "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] Elevator Pitch
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 04 Jan 2012 13:05:45 -0000

Xuxiaohu,

> Hi Yakov,
> 
> > Xuxiaohu,
> > 
> > > Hi all,
> > >
> > > Since there are some differences in the problems and requirements
> > > between data center network (DCN) and data center interconnect
> > > (DCI), I try to list several problems and requirements for DCN and
> > > DCI separately as follows. Here the data centers mainly refer to
> > > those multi-tenant data centers which are operated by public cloud
> > > providers to deliver cloud service (i.e., IaaS) to their customers
> > > (i.e., tenants).
> > >
> > > 1. DCN problems and requirements:
> > >
> > > 1) VM mobility across multiple pods -> LAN/subnet extension across pods
> > >
> > > 2) Some cluster applications use non-IP or link-local multicast 
> > >     (optional) -> Layer2 networking
> > >
> > > 3) Multi-tenancy isolation -> VPN/VLAN instance scalability
> > >
> > > 4) Millions of VMs -> MAC/IP forwarding table scalability
> > >
> > > 5) Increasing bandwidth demands for server-to-server connectivity
> > >    (i.e., east-west traffic)-> ECMP and shortest path forwarding 
> > >    capabilities
> > >
> > > 6) Network resiliency -> Fast convergence and multi-homing
> > 
> > Do you need fast routing convergence, or fast connectivity restoration ?
> 
> Both. 

I understand the rationale for fast connectivity restoration. But
given that there is fast connectivity restoration, why would one
also need fast routing convergence ?

> > > 7) Thousands of network devices -> Simplified provisioning and operation
> > >
> > >
> > >
> > > 2. DCI problems and requirements:
> > >
> > > 1) VMs mobility across data centers -> LAN/subnet extension across
> > >    data centers.
> > >
> > > 2) Multi-tenancy isolation -> VLAN/VPN instance scalability
> > >
> > > 3) Millions of VMs -> MAC/IP forwarding table scalability
> > >
> > > 4) Optimal utilization of WAN bandwidth resource -> Unknown unicast
> > >    and ARP broadcast suppression
> > >
> > > 5) Network resiliency -> Fast convergence and multi-homing
> > 
> > Do you need fast routing convergence, or fast connectivity restoration ?
> 
> Both.

The same question, as above.

Yakov.

From lizhong.jin@zte.com.cn  Wed Jan  4 06:58:54 2012
Return-Path: <lizhong.jin@zte.com.cn>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 90CD521F874C for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 06:58:54 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -101.523
X-Spam-Level: 
X-Spam-Status: No, score=-101.523 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, HTML_MESSAGE=0.001, RCVD_DOUBLE_IP_LOOSE=0.76,  SARE_MILLIONSOF=0.315, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id xQJCUq1Zrrwg for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 06:58:53 -0800 (PST)
Received: from mx5.zte.com.cn (mx5.zte.com.cn [63.217.80.70]) by ietfa.amsl.com (Postfix) with ESMTP id 1610221F872A for <dc@ietf.org>; Wed,  4 Jan 2012 06:58:52 -0800 (PST)
Received: from [10.30.17.99] by mx5.zte.com.cn with surfront esmtp id 53829122734555; Wed, 4 Jan 2012 22:56:14 +0800 (CST)
Received: from [10.30.3.20] by [192.168.168.15] with StormMail ESMTP id 4315.4095645156; Wed, 4 Jan 2012 22:58:39 +0800 (CST)
Received: from notes_smtp.zte.com.cn ([10.30.1.239]) by mse01.zte.com.cn with ESMTP id q04Ewbvd046570; Wed, 4 Jan 2012 22:58:37 +0800 (GMT-8) (envelope-from lizhong.jin@zte.com.cn)
In-Reply-To: <mailman.1574.1325608793.3174.dc@ietf.org>
To: adalela@cisco.com
MIME-Version: 1.0
X-Mailer: Lotus Notes Release 6.5.4 March 27, 2005
Message-ID: <OF3321930E.BFE44129-ON4825797B.005178C6-4825797B.00524603@zte.com.cn>
From: Lizhong Jin<lizhong.jin@zte.com.cn>
Date: Wed, 4 Jan 2012 22:58:25 +0800
X-MIMETrack: Serialize by Router on notes_smtp/zte_ltd(Release 8.5.1FP4|July 25, 2010) at 2012-01-04 22:58:39, Serialize complete at 2012-01-04 22:58:39
Content-Type: multipart/alternative; boundary="=_alternative 005246004825797B_="
X-MAIL: mse01.zte.com.cn q04Ewbvd046570
Cc: yakov@juniper.net, aldrin.isaac@gmail.com, dc@ietf.org, robert@raszuk.net
Subject: Re: [dc] new drafts
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 04 Jan 2012 14:58:54 -0000

This is a multipart message in MIME format.
--=_alternative 005246004825797B_=
Content-Type: text/plain; charset="US-ASCII"

Hi Ashish,
If we implement VRF on the access switch (or ToR), I agree there will
be a scalability problem. Cost will also be an issue: the access
switch would become more expensive than before. How about implementing
VRF on the aggregation router? If the aggregation router can solve the
scalability and high availability problems, then we should focus on
how to set up the connection between the VM and the aggregation
router. Hope to see your comments.

Thanks
Lizhong


> 
> -----From "Ashish Dalela (adalela)" <adalela@cisco.com> 
> Tue, 3 Jan 2012 22:09:17 +0530 -----
> 
> Receiver:
> 
> <robert@raszuk.net>
> 
> cc:
> 
> Yakov Rekhter <yakov@juniper.net>, dc@ietf.org, Aldrin Isaac 
> <aldrin.isaac@gmail.com>
> 
> Subject:
> 
> Re: [dc] new drafts
> 
> Robert,
> 
> Here are some things to evaluate scalability against.
> 
> Assume a simple case where under a switch there are 250 VMs, split
> amongst 10 customers. Each customer has a unique VRF. Normally, we
> would have advertised a /24 route for that switch. In this case your
> routes to a single switch are segmented across 10 VRFs, and you will
> very likely have 250 route table entries total, segmented by VRF-ids.
> That's routing table bloat from 1 entry to 250 entries. This happens
> everywhere. I have assumed public IP addressing, but the same thing
> will happen with private addressing as well.
> 
> Then, typically the number of VRFs you can support on a router is about
> 4K. These # of VRFs have to be supported at the access, so you have to
> assume this is the limit from the access viewpoint. 4K is nothing - we
> have 4K VLANs today to segment and that's nothing. Every segmentation
> technique being talked about speaks of a million plus segments. Take
> that to VRFs, you need a million VRFs on the control plane at the access
> switch. Another problem with a VRF is that it will get and store a route
> for a host, even when there is no host talking to it. With dynamic
> learning or learning based on packet arrival you avoid these host routes
> and limit them to active conversations only. That's a huge saving
> because not every host talks to every host.
> 
> Then, at massive scale, the failure rates are also massive. At 5 nines
> reliability, at any instant about one hardware entity out of 100,000 is
> down (roughly 5.25 minutes of downtime per year each). Access switches
> don't have high availability. Software fails even faster - an OS is
> generally 4 9's, which means about one instance out of 10,000 is down
> at any instant. At millions of instances of such entities, there are
> rapid failures happening. You have to only look at massive datacenters
> today run by Web 2.0 companies, and they all echo this view. They
> basically form clusters of the same application. Software moves the
> workload from one cluster to another. The whole cluster can fail over.
> That's not what you do in a consumer cloud, where you have to recover.
> At massive failure rates, and rapid recovery rates, you are moving
> things around and injecting host routes for reachability. It's a
> convergence problem, especially with link-state algorithms.
> 
> If the VM can be moved, then all you need to do is install a temporary
> redirect of packets to the new location. Each host will refresh the MAC
> after 15-30 seconds. If the packets are redirected from old to new
> location for these 30 seconds, the redirect can be aged automatically.
> This happens all the time in mobile networks in what is called a "fast
> handoff" where you redirect the packets until handoff is completed.
> 
> Thanks, Ashish
> 
> 
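[Editor's note: the arithmetic behind the figures in Ashish's message can be
sanity-checked with a short script. The VM counts and availability levels are
taken from the message above; everything else is an illustrative assumption.]

```python
# Back-of-the-envelope checks for the scalability figures quoted above.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes_per_year(availability: float) -> float:
    """Expected downtime per year at a given availability level."""
    return MINUTES_PER_YEAR * (1.0 - availability)

# Route-table growth: one /24 aggregate per switch versus per-host routes
# when 250 VMs under a switch are split across 10 per-customer VRFs.
vms_per_switch = 250
aggregate_routes = 1              # a single /24 covering the whole switch
host_routes = vms_per_switch      # one host route per VM, segmented by VRF
print(host_routes)                # 250 entries instead of 1

# Availability: "5 nines" hardware and "4 nines" software.
print(round(downtime_minutes_per_year(0.99999), 2))  # five nines ~ 5.26 min/year
print(round(downtime_minutes_per_year(0.9999), 1))   # four nines ~ 52.6 min/year
```

Five nines works out to about 5.26 minutes of downtime per entity per year,
matching the ~5.25-minute figure used in the message.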

--------------------------------------------------------
ZTE Information Security Notice: The information contained in this mail is solely property of the sender's organization. This mail communication is confidential. Recipients named above are obligated to maintain secrecy and are not permitted to disclose the contents of this communication to others.
This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the originator of the message. Any views expressed in this message are those of the individual sender.
This message has been scanned for viruses and Spam by ZTE Anti-Spam system.

--=_alternative 005246004825797B_=--


From russw@riw.us  Wed Jan  4 07:14:28 2012
Return-Path: <russw@riw.us>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 65D5921F874C for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 07:14:28 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.829
X-Spam-Level: 
X-Spam-Status: No, score=-2.829 tagged_above=-999 required=5 tests=[AWL=-0.230, BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id JTl+fBEWQjr2 for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 07:14:27 -0800 (PST)
Received: from ecbiz91.inmotionhosting.com (ecbiz91.inmotionhosting.com [173.205.124.250]) by ietfa.amsl.com (Postfix) with ESMTP id 7BE1E21F8746 for <dc@ietf.org>; Wed,  4 Jan 2012 07:14:27 -0800 (PST)
Received: from cpe-065-190-155-146.nc.res.rr.com ([65.190.155.146]:54196 helo=[192.168.100.61]) by ecbiz91.inmotionhosting.com with esmtpsa (TLSv1:AES256-SHA:256) (Exim 4.69) (envelope-from <russw@riw.us>) id 1RiSXl-00047c-LN; Wed, 04 Jan 2012 10:14:21 -0500
Message-ID: <4F046CCE.40203@riw.us>
Date: Wed, 04 Jan 2012 10:14:22 -0500
From: Russ White <russw@riw.us>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:8.0) Gecko/20111105 Thunderbird/8.0
MIME-Version: 1.0
To: Yakov Rekhter <yakov@juniper.net>
References: <13205C286662DE4387D9AF3AC30EF456D74E7BD952@EMBX01-WF.jnpr.net><6665BC1FEA04AB47B1F75FA641C43BC0926D106C@FHDP1LUMXC7V41.us.one.verizon.com><201112211830.pBLIUiZA017188@cichlid.raleigh.ibm.com> <13205C286662DE4387D9AF3AC30EF456D74E91DBFD@EMBX01-WF.jnpr.net> <618BE8B40039924EB9AED233D4A09C5102A4881E@XMB-BGL-416.cisco.com> <4EF7B019.3030202@riw.us> <201112281700.pBSH0kB2011575@cichlid.raleigh.ibm.com> <4EFC826C.80708@riw.us> <682C5C0D-10FD-49D7-BF48-28EB6EFBA72B@asgaard.org> <4EFF0DCA.5090707@riw.us> <201201031450.q03EoKS53547@magenta.juniper.net>
In-Reply-To: <201201031450.q03EoKS53547@magenta.juniper.net>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
X-AntiAbuse: Primary Hostname - ecbiz91.inmotionhosting.com
X-AntiAbuse: Original Domain - ietf.org
X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12]
X-AntiAbuse: Sender Address Domain - riw.us
Cc: dc@ietf.org, Christopher LILJENSTOLPE <cdl@asgaard.org>
Subject: Re: [dc] Elevator Pitch
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 04 Jan 2012 15:14:28 -0000

> I think we need to distinguish between routing convergence and
> connectivity restoration time, as the two are not always the same.
> With this in mind let me paraphrase what you said above as follows:

Yes and no... How's that for equivocation? :-)

Fast reroute generally means two things --routing around the problem
quickly, and then adjusting the main control plane. IMHO, the first
stage needs to be in milliseconds, and the second stage in seconds (5
seconds or less, if possible). My reasoning is that the time between the
first and second stage is a vulnerable moment in the network --if a
second failure occurs just at that moment, really bad things can happen.

So the first stage needs to be faster than people (or, increasingly,
computers) will notice, and the second stage needs to happen before a
second failure is likely to occur (we can't prevent that every time,
but we can make it rare to the point of being out of mind).

:-)

Russ
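
[Editor's note: Russ's point about the vulnerable window between the two
stages can be made quantitative with a small Poisson sketch. All figures
here are illustrative assumptions, not numbers from the thread.]

```python
import math

# Probability that an independent second failure occurs while the network
# is still converging (the "vulnerable moment" described above), using a
# Poisson approximation over the aggregate failure rate.

def p_second_failure(n_components: int, mtbf_hours: float, window_s: float) -> float:
    """P(at least one more failure within window_s seconds)."""
    rate_per_s = n_components / (mtbf_hours * 3600.0)  # aggregate failures/sec
    return 1.0 - math.exp(-rate_per_s * window_s)

# Hypothetical fabric: 10,000 devices, 100,000-hour MTBF each.
print(p_second_failure(10_000, 100_000, 5.0))   # ~5-second second stage
print(p_second_failure(10_000, 100_000, 60.0))  # ~one-minute second stage
```

Under these assumptions a 5-second second stage leaves roughly a 0.014%
chance of overlapping with another failure, versus about 0.17% at one
minute, which is why shrinking the window pays off.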

From robert@raszuk.net  Wed Jan  4 07:17:20 2012
Return-Path: <robert@raszuk.net>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id C460621F85B0 for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 07:17:20 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.548
X-Spam-Level: 
X-Spam-Status: No, score=-2.548 tagged_above=-999 required=5 tests=[AWL=0.051,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id W+L3OO5Fe2KQ for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 07:17:20 -0800 (PST)
Received: from mail1310.opentransfer.com (mail1310.opentransfer.com [76.162.254.103]) by ietfa.amsl.com (Postfix) with ESMTP id F1F0F21F85AD for <dc@ietf.org>; Wed,  4 Jan 2012 07:17:19 -0800 (PST)
Received: (qmail 13618 invoked by uid 399); 4 Jan 2012 15:17:19 -0000
Received: from unknown (HELO ?192.168.1.91?) (83.31.237.31) by mail1310.opentransfer.com with ESMTP; 4 Jan 2012 15:17:19 -0000
X-Originating-IP: 83.31.237.31
Message-ID: <4F046D7D.7040705@raszuk.net>
Date: Wed, 04 Jan 2012 16:17:17 +0100
From: Robert Raszuk <robert@raszuk.net>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:9.0) Gecko/20111222 Thunderbird/9.0.1
MIME-Version: 1.0
To: Lizhong Jin <lizhong.jin@zte.com.cn>
References: <OF3321930E.BFE44129-ON4825797B.005178C6-4825797B.00524603@zte.com.cn>
In-Reply-To: <OF3321930E.BFE44129-ON4825797B.005178C6-4825797B.00524603@zte.com.cn>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: yakov@juniper.net, dc@ietf.org, adalela@cisco.com, aldrin.isaac@gmail.com
Subject: Re: [dc] new drafts
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
Reply-To: robert@raszuk.net
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 04 Jan 2012 15:17:20 -0000

Hi Lizhong,

How about neither ?

How about implementing VRF for control-plane separation on an x86
controller, outside the data plane, and simply instructing the host
hosting the VMs, the ToR, or the access switch to forward/encapsulate
the packets correctly?

Cheers,
R.


> Hi Ashish,
 >
> If we implement VRF on access switch (or ToR), I agree there will be
> scalability problem. Also the cost will also be an issue, the access
> switch would be more expensive than before. How about implement VRF on
> aggregation router? If the aggregation router could solve scalability
> and high availability problem, then we should focus on how to setup
> connection between VM and aggregation router. Hope to see your comments.
>
> Thanks
> Lizhong

From russw@riw.us  Wed Jan  4 07:24:28 2012
Return-Path: <russw@riw.us>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 12B9321F875C for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 07:24:28 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.804
X-Spam-Level: 
X-Spam-Status: No, score=-2.804 tagged_above=-999 required=5 tests=[AWL=-0.205, BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 3Bu5CdopKagN for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 07:24:27 -0800 (PST)
Received: from ecbiz91.inmotionhosting.com (ecbiz91.inmotionhosting.com [173.205.124.250]) by ietfa.amsl.com (Postfix) with ESMTP id 42F0721F8756 for <dc@ietf.org>; Wed,  4 Jan 2012 07:24:27 -0800 (PST)
Received: from cpe-065-190-155-146.nc.res.rr.com ([65.190.155.146]:54230 helo=[192.168.100.61]) by ecbiz91.inmotionhosting.com with esmtpsa (TLSv1:AES256-SHA:256) (Exim 4.69) (envelope-from <russw@riw.us>) id 1RiShW-0005iK-1e; Wed, 04 Jan 2012 10:24:26 -0500
Message-ID: <4F046F2A.4060707@riw.us>
Date: Wed, 04 Jan 2012 10:24:26 -0500
From: Russ White <russw@riw.us>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:8.0) Gecko/20111105 Thunderbird/8.0
MIME-Version: 1.0
To: Christopher LILJENSTOLPE <cdl@asgaard.org>
References: <13205C286662DE4387D9AF3AC30EF456D74E7BD952@EMBX01-WF.jnpr.net><6665BC1FEA04AB47B1F75FA641C43BC0926D106C@FHDP1LUMXC7V41.us.one.verizon.com><201112211830.pBLIUiZA017188@cichlid.raleigh.ibm.com> <13205C286662DE4387D9AF3AC30EF456D74E91DBFD@EMBX01-WF.jnpr.net> <618BE8B40039924EB9AED233D4A09C5102A4881E@XMB-BGL-416.cisco.com> <4EF7B019.3030202@riw.us> <201112281700.pBSH0kB2011575@cichlid.raleigh.ibm.com> <4EFC826C.80708@riw.us> <682C5C0D-10FD-49D7-BF48-28EB6EFBA72B@asgaard.org> <4EFF0DCA.5090707@riw.us> <D7F34AF6-E93C-44F5-8C60-B3E9E8C2E38C@asgaard.org>
In-Reply-To: <D7F34AF6-E93C-44F5-8C60-B3E9E8C2E38C@asgaard.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
X-AntiAbuse: Primary Hostname - ecbiz91.inmotionhosting.com
X-AntiAbuse: Original Domain - ietf.org
X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12]
X-AntiAbuse: Sender Address Domain - riw.us
Cc: dc@ietf.org
Subject: Re: [dc] Elevator Pitch
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 04 Jan 2012 15:24:28 -0000

> If that caching modality is not monitored, I agree.  However, I
> don't believe that this needs to be an unbounded asynchronous cache. 
> However if we can adjust the cache system based on ratios, etc - and 
> monitor it, it's no longer hidden, nor necessarily a cliff.  I'm not 
> saying caching is the only way, but it may be one approach.

So long as the cache isn't "hidden from the hosts" in the network, but
rather is something the host knows about and can interact with
(understands the state of), caching can be useful. When application
state becomes unbundled from cache state, I think things tend to slow
down beyond "useful," because of the "working room" needed between the
cache and the application.
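One way to picture a cache that is not hidden from the host — a toy
sketch only, with invented names and no claim about any real
implementation: the host can query occupancy and hit ratio and react
before the cache falls off a cliff.

```python
# Toy sketch of a forwarding cache whose state is visible to the host:
# the host can inspect occupancy and hit ratio instead of discovering
# a hidden cliff the hard way. Names and structure are illustrative.
from collections import OrderedDict

class VisibleCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # kept in LRU order
        self.hits = self.misses = 0

    def lookup(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)   # refresh LRU position
            self.hits += 1
            return self.entries[key]
        self.misses += 1
        return None

    def insert(self, key, value):
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict LRU entry
        self.entries[key] = value

    def stats(self):
        # What the host can see: no hidden state, no surprise cliff.
        total = self.hits + self.misses
        return {
            "occupancy": len(self.entries) / self.capacity,
            "hit_ratio": self.hits / total if total else 1.0,
        }

c = VisibleCache(capacity=2)
c.insert("10.0.0.1", "tor-1")
c.lookup("10.0.0.1")               # hit
c.lookup("10.0.0.2")               # miss
print(c.stats())
```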

>> I'm not certain I understand this... I think you mean like a DFZ, a
>> control plane that knows every possible destination. But you have 
>> to separate knowing every possible destination from knowing every 
>> possible route to that destination. Even the DFZ in the 'net is 
>> really an aggregated suboptimal subset. I don't know of any
>> network on this scale that has an optimal route to every
>> destination, and I don't think it's really possible to build one
>> unless you want to make processing power and control plane
>> bandwidth usage unbounded.
> 
> For an unbounded network, I agree.  However, if it is within a 
> bounded subset (i.e. a dc or collection of dc's) I believe it is 
> possible.

Except that we are already talking about relatively unbounded sizes, I
think --100 million routes (well, not really unbounded, but bounded at a
point far beyond what we have today).

> However, if that set of optimal paths is computed for every 
> source/dest pair (or at least for every unique best path) once,
> based on a global topological/demand view, a "global" set of best
> paths may be accomplished within that constrained universe.

Assuming you're doing the computation on a very high end set of hardware
--which implies "off line," I think. But this is a different model from
what we deal with today, and I'm not certain how willing the group would
be to go in that direction.

Fast reroute (see the other thread with Yakov) has made it possible to
precompute fast backup paths locally, potentially solving one of the
various problems with off-line computation. So long as you have the
second stage react fast enough not to leave you exposed to a double
failure...
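The two-stage model can be sketched as follows — a minimal illustration
only (real fast-reroute schemes such as LFA add loop-freedom conditions
that are omitted here): each FIB entry carries a precomputed backup next
hop, the first stage is a purely local switch to that backup, and the
second stage is the control plane installing a fresh primary *and* a
fresh backup, closing the window in which a second failure is fatal.

```python
# Toy sketch of two-stage repair: a precomputed backup next hop gives
# millisecond local repair (stage 1) while full reconvergence (stage 2)
# runs in the background. Loop-freedom checks are deliberately omitted.

class FibEntry:
    def __init__(self, primary, backup):
        self.primary = primary
        self.backup = backup       # precomputed before any failure
        self.failed = set()

    def next_hop(self):
        # Stage 1: purely local decision, no control-plane round trip.
        if self.primary not in self.failed:
            return self.primary
        if self.backup not in self.failed:
            return self.backup
        raise RuntimeError("double failure before reconvergence")

    def link_down(self, nh):
        self.failed.add(nh)

    def reconverge(self, new_primary, new_backup):
        # Stage 2: control plane installs fresh primary and backup,
        # restoring protection against the next failure.
        self.primary, self.backup = new_primary, new_backup
        self.failed = set()

e = FibEntry(primary="link-A", backup="link-B")
e.link_down("link-A")
print(e.next_hop())        # repaired locally via the backup
e.reconverge("link-B", "link-C")
print(e.next_hop())
```

The vulnerable window Russ describes is exactly the time between
`link_down` and `reconverge`: during it, a second `link_down` raises
the error.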

> Agreed, if there are destinations hidden by aggregation - however, a 
> "global" view may not have that problem.

The key to a "global view," done off line (again, assuming this is what
you're driving at), is that you can actually optimize aggregation
--there are heuristics in existence that can do so, and I'm certain
research would yield more. The tradeoff is in the details of getting
forwarding state fast enough, building a separate control plane network,
and things like that.

:-)

Russ

From robert@raszuk.net  Wed Jan  4 07:26:30 2012
Return-Path: <robert@raszuk.net>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 9243321F876B for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 07:26:30 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.553
X-Spam-Level: 
X-Spam-Status: No, score=-2.553 tagged_above=-999 required=5 tests=[AWL=0.046,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id DoQbepDqM7tb for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 07:26:29 -0800 (PST)
Received: from mail1310.opentransfer.com (mail1310.opentransfer.com [76.162.254.103]) by ietfa.amsl.com (Postfix) with ESMTP id 8283421F875C for <dc@ietf.org>; Wed,  4 Jan 2012 07:26:29 -0800 (PST)
Received: (qmail 27382 invoked by uid 399); 4 Jan 2012 15:26:28 -0000
Received: from unknown (HELO ?192.168.1.91?) (83.31.237.31) by mail1310.opentransfer.com with ESMTP; 4 Jan 2012 15:26:28 -0000
X-Originating-IP: 83.31.237.31
Message-ID: <4F046FA3.5050707@raszuk.net>
Date: Wed, 04 Jan 2012 16:26:27 +0100
From: Robert Raszuk <robert@raszuk.net>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:9.0) Gecko/20111222 Thunderbird/9.0.1
MIME-Version: 1.0
To: Russ White <russw@riw.us>
References: <13205C286662DE4387D9AF3AC30EF456D74E7BD952@EMBX01-WF.jnpr.net><6665BC1FEA04AB47B1F75FA641C43BC0926D106C@FHDP1LUMXC7V41.us.one.verizon.com><201112211830.pBLIUiZA017188@cichlid.raleigh.ibm.com> <13205C286662DE4387D9AF3AC30EF456D74E91DBFD@EMBX01-WF.jnpr.net> <618BE8B40039924EB9AED233D4A09C5102A4881E@XMB-BGL-416.cisco.com> <4EF7B019.3030202@riw.us> <201112281700.pBSH0kB2011575@cichlid.raleigh.ibm.com> <4EFC826C.80708@riw.us> <682C5C0D-10FD-49D7-BF48-28EB6EFBA72B@asgaard.org> <4EFF0DCA.5090707@riw.us> <201201031450.q03EoKS53547@magenta.juniper.net> <4F046CCE.40203@riw.us>
In-Reply-To: <4F046CCE.40203@riw.us>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: Yakov Rekhter <yakov@juniper.net>, dc@ietf.org, Christopher LILJENSTOLPE <cdl@asgaard.org>
Subject: Re: [dc] Elevator Pitch
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
Reply-To: robert@raszuk.net
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 04 Jan 2012 15:26:30 -0000

Hi Russ,

+1

And just to add to that: fast connectivity restoration is possible only 
if you already have sufficient, relatively static information about 
backup paths in the system ahead of the failure.

As an example: if you are in very mobile situations, where it is not the 
DCs that are moving but their clients, in a stochastic pattern, you 
can't predict the backup path ahead of the failure. In such cases, where 
the backup path cannot be precomputed, fast connectivity restoration 
becomes quite a challenge, and you are usually stuck with transient 
suboptimal data paths until re-convergence.

R.

>> I think we need to distinguish between routing convergence and
>> connectivity restoration time, as the two are not always the same.
>> With this in mind let me paraphrase what you said above as follows:
>
> Yes and no... How's that for equivocation? :-)
>
> Fast reroute generally means two things --routing around the problem
> quickly, and then adjusting the main control plane. IMHO, the first
> stage needs to be in millisconds, and the second stage in seconds (5
> seconds or less, if possible). My reasoning is that the time between the
> first and second stage is a vulnerable moment in the network --if a
> second failure occurs just at that moment, really bad things can happen.
>
> So the first stage needs to be faster than people (or increasingly
> computers) will notice, the second stage needs to happen faster than a
> second failure is likely to happen (can't stop it all the time, but we
> can make it rare to the point of being out of mind).
>
> :-)
>
> Russ
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc
>
>


From lufang@cisco.com  Wed Jan  4 07:53:11 2012
Return-Path: <lufang@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id D12AE21F8784 for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 07:53:11 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -6.599
X-Spam-Level: 
X-Spam-Status: No, score=-6.599 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 8TtQFsVgLcJD for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 07:53:11 -0800 (PST)
Received: from rcdn-iport-7.cisco.com (rcdn-iport-7.cisco.com [173.37.86.78]) by ietfa.amsl.com (Postfix) with ESMTP id 288B921F876D for <dc@ietf.org>; Wed,  4 Jan 2012 07:53:11 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=lufang@cisco.com; l=1374; q=dns/txt; s=iport; t=1325692391; x=1326901991; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=vQCBOvkqhA4kVbfXn23zXV/kZKB5Fbi/Fyvvlbmte9Q=; b=hUoHlQQWJL8r/XLROe5ljpXjD+A4GBg4FvixdT+TKWDjBrix6ZuBJW5L IfY0DrBssXbALMnRo0mYufWH3RGr5UB8Jgn7n4QYgLxQhWgMl7pMenYDh hk+Ujca1yk4v/bkD0uzmBySOotcOq6o2slDQ9c2Mh0bfjNAVWcoM/iSAL U=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AhcFALB1BE+tJV2Y/2dsb2JhbABDggWqYoEFgXIBAQEEAQEBDwEdCjQLDAQCAQgRBAEBCwYXAQYBIAYfCQgBAQQBEggah2CXPAGeJQSLLGMEiDiXO4dk
X-IronPort-AV: E=Sophos;i="4.71,456,1320624000"; d="scan'208";a="48638498"
Received: from rcdn-core-1.cisco.com ([173.37.93.152]) by rcdn-iport-7.cisco.com with ESMTP; 04 Jan 2012 15:53:10 +0000
Received: from xbh-rcd-302.cisco.com (xbh-rcd-302.cisco.com [72.163.63.9]) by rcdn-core-1.cisco.com (8.14.3/8.14.3) with ESMTP id q04FrAut013981;  Wed, 4 Jan 2012 15:53:10 GMT
Received: from xmb-rcd-201.cisco.com ([72.163.62.208]) by xbh-rcd-302.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Wed, 4 Jan 2012 09:53:10 -0600
X-Mimeole: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
Date: Wed, 4 Jan 2012 09:53:07 -0600
Message-ID: <238542D917511A45B6B8AA806E875E25079A0310@XMB-RCD-201.cisco.com>
In-Reply-To: <4F046D7D.7040705@raszuk.net>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] new drafts
Thread-Index: AczK8/WBfbspFuIARa2xRc1gPA8pVAAAyH6A
References: <OF3321930E.BFE44129-ON4825797B.005178C6-4825797B.00524603@zte.com.cn> <4F046D7D.7040705@raszuk.net>
From: "Luyuan Fang (lufang)" <lufang@cisco.com>
To: <robert@raszuk.net>, "Lizhong Jin" <lizhong.jin@zte.com.cn>
X-OriginalArrivalTime: 04 Jan 2012 15:53:10.0512 (UTC) FILETIME=[F56C5700:01CCCAF8]
Cc: yakov@juniper.net, aldrin.isaac@gmail.com, "Ashish Dalela \(adalela\)" <adalela@cisco.com>, dc@ietf.org
Subject: Re: [dc] new drafts
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 04 Jan 2012 15:53:11 -0000

This is an idea well worth investigating, in addition to comparing the
"VRF on X" options under discussion.
Luyuan

> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> Robert Raszuk
> Sent: Wednesday, January 04, 2012 10:17 AM
> To: Lizhong Jin
> Cc: yakov@juniper.net; dc@ietf.org; Ashish Dalela (adalela);
> aldrin.isaac@gmail.com
> Subject: Re: [dc] new drafts
>
> Hi Lizhong,
>
> How about neither ?
>
> How about implementing VRF for control plane separation on the x86
> controller out of data plane and simply instructing either host hosting
> VMs or TOR or Access Switch to forward/encapsulate the packets
> correctly ?
>
> Cheers,
> R.
>
>
> > Hi Ashish,
> >
> > If we implement VRF on access switch (or ToR), I agree there will be
> > scalability problem. Also the cost will also be an issue, the access
> > switch would be more expensive than before. How about implement VRF
> > on aggregation router? If the aggregation router could solve
> > scalability and high availability problem, then we should focus on
> > how to setup connection between VM and aggregation router. Hope to
> > see your comments.
> >
> > Thanks
> > Lizhong
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc

From diego@tid.es  Wed Jan  4 08:16:26 2012
Return-Path: <diego@tid.es>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id E4CCA21F87A1 for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 08:16:26 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -5.099
X-Spam-Level: 
X-Spam-Status: No, score=-5.099 tagged_above=-999 required=5 tests=[AWL=1.500,  BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id BJ-d8ybRKcBl for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 08:16:26 -0800 (PST)
Received: from correo-bck.tid.es (correo-bck.tid.es [195.235.93.200]) by ietfa.amsl.com (Postfix) with ESMTP id A538321F879F for <dc@ietf.org>; Wed,  4 Jan 2012 08:16:25 -0800 (PST)
Received: from sbrightmailg02.hi.inet (Sbrightmailg02.hi.inet [10.95.78.105]) by tid.hi.inet (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LXA000ZT7VARJ@tid.hi.inet> for dc@ietf.org; Wed, 04 Jan 2012 17:16:22 +0100 (MET)
Received: from vanvan (vanvan.hi.inet [10.95.78.49])	by sbrightmailg02.hi.inet (Symantec Messaging Gateway) with SMTP id 00.F0.02643.65B740F4; Wed, 04 Jan 2012 17:16:22 +0100 (CET)
Received: from correo.tid.es (mailhost.hi.inet [10.95.64.100]) by tid.hi.inet (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTPS id <0LXA000ZL7VARJ@tid.hi.inet> for dc@ietf.org; Wed, 04 Jan 2012 17:16:22 +0100 (MET)
Received: from EXCLU2K7.hi.inet ([10.95.67.65]) by htcasmad2.hi.inet ([192.168.0.2]) with mapi; Wed, 04 Jan 2012 17:16:21 +0100
Date: Wed, 04 Jan 2012 17:16:20 +0100
From: DIEGO LOPEZ GARCIA <diego@tid.es>
In-reply-to: <4F046FA3.5050707@raszuk.net>
To: "robert@raszuk.net" <robert@raszuk.net>
Message-id: <B2F96067-3C88-41CD-8A5F-599DF6CE9F4E@tid.es>
MIME-version: 1.0
Content-type: text/plain; charset=utf-8
Content-language: en-US
Content-transfer-encoding: base64
Accept-Language: en-US
Thread-topic: [dc] Elevator Pitch
Thread-index: AczK/DJVnOwkYPMRQgO3Sjm+riWrSQ==
acceptlanguage: en-US
X-AuditID: 0a5f4e69-b7f6b6d000000a53-dc-4f047b56cb80
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
X-Brightmail-Tracker: H4sIAAAAAAAAA+NgFprKKsWRmVeSWpSXmKPExsXCFe9nqBtWzeJvsPqcokXL+busDoweS5b8 ZApgjOKySUnNySxLLdK3S+DK2HPnKEvBPb6K+VP+MjcwruHrYuTgkBAwkfh1xLOLkRPIFJO4 cG89WxcjF4eQwDZGiWlHuxghnK+MEk9/NkNlGhkl3nY8ZwZpYRFQlXi16TcLiM0moC7RcvQb mC0soCDxqOszK4jNKaAlsab7DhOILSKgLbFz0WKwQcwC3YwSdxfPByviFbCUOLJuCZQtKPFj 8j0WkPOYgYZOmZILEmYWEJdobr3JAmErSkxb1MAIYjMCnf391Bqo+YoSK6b1sEPYehL9R7ez QNSIStxpX88I8aaAxJI955khbFGJl4//sUI89odFYvfbbpYJjOKzkJwxC+GMWUjOmIXkjAWM LKsYxYqTijLTM0pyEzNz0g2M9DIy9TLzUks2MULiKHMH4/KdKocYBTgYlXh4Pd4x+guxJpYV V+YeYpTkYFIS5X1dyeIvxJeUn1KZkVicEV9UmpNafIhRgoNZSYR3qSqzvxBvSmJlVWpRPkxK hoNDSYI3sQqoTbAoNT21Ii0zB5gsYNJMHJwg7TxA7SEgNbzFBYm5xZnpEPlTjJJS4ryFIAkB kERGaR5c7ytGcaAjhXnrQLI8wLQG1/UKaCAT0MAoEbCBJYkIKakGxj0GThUfzlbPybq5Xuhp YsPcnxzPTdZ4d6QGRh+LK5KY2m6g2t/bcij42/4Ov6nLw48uWdv+yL721IP+Y0e2uCg77fnw fPrEpY29cW4Sf5OXNu0Q8XTbbiX5tXU7f4q+5G39qHz37pkqM/LFKw1XKrBPfnk6NWy+WctE 9m3db5c9Vdw17+7Tz0osxRmJhlrMRcWJAHAQolQoAwAA
References: <13205C286662DE4387D9AF3AC30EF456D74E7BD952@EMBX01-WF.jnpr.net> <6665BC1FEA04AB47B1F75FA641C43BC0926D106C@FHDP1LUMXC7V41.us.one.verizon.com> <201112211830.pBLIUiZA017188@cichlid.raleigh.ibm.com> <13205C286662DE4387D9AF3AC30EF456D74E91DBFD@EMBX01-WF.jnpr.net> <618BE8B40039924EB9AED233D4A09C5102A4881E@XMB-BGL-416.cisco.com> <4EF7B019.3030202@riw.us> <201112281700.pBSH0kB2011575@cichlid.raleigh.ibm.com> <4EFC826C.80708@riw.us> <682C5C0D-10FD-49D7-BF48-28EB6EFBA72B@asgaard.org> <4EFF0DCA.5090707@riw.us> <201201031450.q03EoKS53547@magenta.juniper.net> <4F046CCE.40203@riw.us> <4F046FA3.5050707@raszuk.net>
Cc: Russ White <russw@riw.us>, Yakov Rekhter <yakov@juniper.net>, "dc@ietf.org" <dc@ietf.org>, Christopher LILJENSTOLPE <cdl@asgaard.org>
Subject: Re: [dc] Elevator Pitch
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 04 Jan 2012 16:16:27 -0000

Hi Robert,

On 4 Jan 2012, at 16:26 , Robert Raszuk wrote:
> And just to add to that fast connectivity restoration is possible only
> if you have sufficient relatively static information about backup paths
> ahead of failure already in the system.
>
> As example .. if you are in very mobile situations where not DCs are
> moving but their clients are moving in a stochastic pattern you can't
> predict the backup path ahead of failure. In such cases where backup
> path can not be precomputed - fast connectivity restoration becomes
> quite a challenge. In those cases you are usually stuck with transient
> suboptimal data paths till re-convergence.


Definitely yes, and it is there where solutions a-la-Mobile-IP come into
play. Suboptimal but functional and reasonably simple to deal with.

Be goode,

--
"Esta vez no fallaremos, Doctor Infierno"

Dr Diego R. Lopez
Telefonica I+D

e-mail: diego@tid.es
Tel:      +34 913 129 041
Mobile: +34 682 051 091
-----------------------------------------

This message is intended exclusively for its addressee. We only send and
receive email on the basis of the terms set out at
http://www.tid.es/ES/PAGINAS/disclaimer.aspx

From adalela@cisco.com  Wed Jan  4 08:16:43 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 33F6021F87B2 for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 08:16:43 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.226
X-Spam-Level: 
X-Spam-Status: No, score=-2.226 tagged_above=-999 required=5 tests=[AWL=0.057,  BAYES_00=-2.599, HTML_MESSAGE=0.001, SARE_MILLIONSOF=0.315]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id At-3ua4O0v6B for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 08:16:40 -0800 (PST)
Received: from bgl-iport-2.cisco.com (bgl-iport-2.cisco.com [72.163.197.26]) by ietfa.amsl.com (Postfix) with ESMTP id 6B79121F879F for <dc@ietf.org>; Wed,  4 Jan 2012 08:16:38 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=28481; q=dns/txt; s=iport; t=1325693798; x=1326903398; h=mime-version:subject:date:message-id:in-reply-to: references:from:to:cc; bh=sQVm+47lYXmFbfNQRzhRli6AYe+lg9/BFknqlgGdaCI=; b=k2Olcs8ujFWYEaTryA+oTgP4azoeH4QIhsCdw1B1wfNVmC0OHc5PGqHT +xZu28ODpuo44qa+Qgwz11q/mb06X7628qMYagecojip8ohWZ9QoiiV6a ex6K7RTOYwMeB4Z8cd10AXZysMNaYifLulgeGkZPl94Bz1Huedx7gjxMw M=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: ArUEACJ7BE9Io8UY/2dsb2JhbABDggVJqx6BcgEBAQMBEgEJEQM9DAULAgEIEQQBAQsGEAcBBgEgJQkIAQEECwgIGodYlz0BnieLLGMEiDaXPYdN
X-IronPort-AV: E=Sophos;i="4.71,456,1320624000"; d="scan'208,217";a="2773793"
Received: from vla196-nat.cisco.com (HELO bgl-core-1.cisco.com) ([72.163.197.24]) by bgl-iport-2.cisco.com with ESMTP; 04 Jan 2012 16:16:36 +0000
Received: from xbh-bgl-411.cisco.com (xbh-bgl-411.cisco.com [72.163.129.201]) by bgl-core-1.cisco.com (8.14.3/8.14.3) with ESMTP id q04GGaKO013448; Wed, 4 Jan 2012 16:16:36 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-411.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Wed, 4 Jan 2012 21:46:36 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: multipart/alternative; boundary="----_=_NextPart_001_01CCCAFC.3B35B5DF"
Date: Wed, 4 Jan 2012 21:46:34 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B25961@XMB-BGL-416.cisco.com>
In-Reply-To: <OF3321930E.BFE44129-ON4825797B.005178C6-4825797B.00524603@zte.com.cn>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] new drafts
Thread-Index: AczK8WLz6HPfI26/RPGH0mCJTdKG7AABiGIw
References: <mailman.1574.1325608793.3174.dc@ietf.org> <OF3321930E.BFE44129-ON4825797B.005178C6-4825797B.00524603@zte.com.cn>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Lizhong Jin" <lizhong.jin@zte.com.cn>
X-OriginalArrivalTime: 04 Jan 2012 16:16:36.0014 (UTC) FILETIME=[3B2AECE0:01CCCAFC]
Cc: yakov@juniper.net, aldrin.isaac@gmail.com, dc@ietf.org, robert@raszuk.net
Subject: Re: [dc] new drafts
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 04 Jan 2012 16:16:43 -0000

Hi Lizhong,

When you do things at the aggregation you need two sets of encaps -
Agg-to-Agg and Agg-to-Access.

If there are N hosts under an access, and M accesses under an Agg, and P
Aggs, then you need -

N * M for access to agg
(P-1) * N * M for agg to agg
Total = N * M + (P-1) * N * M = N * M * P

This is the worst case.

For a better case, assume each VM talks to 25 VMs outside its Agg.
The total = N * M * 25

Let's plug some numbers into this.

For a 48 port access, 50 VM per port, each with 2 VIF,
N = 48 * 50 * 2 = 4800

For an agg with 100 ports (50 down and 50 up), M = 50

A million VM require 1,000,000 / (48 * 50), i.e. about 417 accesses.

Each access connects to 4 Aggs, so you need 417 * 4 / 50, about 33
Aggs = P.

Worst case total entries needed at each Agg = 4800 * 50 * 33 = 7,920,000
Better case total entries needed at each Agg = 4800 * 50 * 25 = 6,000,000

You may be wondering why, for 1M VM, we need 6-8M host routes. It's
because of multi-pathing. A destination can be reached through many
paths, so you put in routes for all paths.

When you have 6-8M routes, you can imagine the corresponding control
plane load.

The additional complexity is that to send a packet from A to B, you need
3 encaps - access-to-agg, agg-to-agg and agg-to-access. When a VM moves,
you have to coordinate all these entries. That's another not-so-easy
problem.
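The arithmetic above can be replayed mechanically. The values below are
the ones used in this mail (N, M, P and the 25-remote-VMs assumption);
the script is only a sanity check of the entry counts, not a sizing
model.

```python
# Re-running the entry-count arithmetic from the mail above.
N = 48 * 50 * 2        # hosts per access: 48 ports * 50 VM * 2 VIF
M = 50                 # accesses under one Agg
P = 33                 # number of Aggs

access_to_agg = N * M                 # access-to-agg entries
agg_to_agg = (P - 1) * N * M          # agg-to-agg entries
worst = access_to_agg + agg_to_agg    # algebraically equal to N * M * P
better = N * M * 25                   # each VM talks to 25 remote VMs

print(N, worst, better)               # 4800 7920000 6000000
```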

Thanks, Ashish

From: Lizhong Jin [mailto:lizhong.jin@zte.com.cn]
Sent: Wednesday, January 04, 2012 8:28 PM
To: Ashish Dalela (adalela)
Cc: robert@raszuk.net; dc@ietf.org; yakov@juniper.net;
aldrin.isaac@gmail.com
Subject: Re: [dc] new drafts


Hi Ashish,
If we implement VRF on access switch (or ToR), I agree there will be
scalability problem. Also the cost will also be an issue, the access
switch would be more expensive than before. How about implement VRF on
aggregation router? If the aggregation router could solve scalability
and high availability problem, then we should focus on how to setup
connection between VM and aggregation router. Hope to see your comments.

Thanks
Lizhong


>
> -----From "Ashish Dalela (adalela)" <adalela@cisco.com>
> Tue, 3 Jan 2012 22:09:17 +0530 -----
>
> Receiver:
>
> <robert@raszuk.net>
>
> cc:
>
> Yakov Rekhter <yakov@juniper.net>, dc@ietf.org, Aldrin Isaac
> <aldrin.isaac@gmail.com>
>
> Subject:
>
> Re: [dc] new drafts
>
> Robert,
>
> Here are some things to evaluate scalability against.
>
> Assume a simple case that under a switch there are 250 VM, split
> amongst 10 customers. Each customer has a unique VRF. Normally, we
> would have advertized a /24 route for that switch. In this case your
> routes to a single switch are segmented and there are 10 VRFs, and
> you will very likely have 250 route table entries total segmented by
> VRF-ids. That's a routing table bloat from 1 entry to 250 entry. This
> happens everywhere. I have assumed a public IP addressing, but the
> same thing will happen for the private addressing as well.
>
> Then, typically the number of VRFs you can support on a router is
> about 4K. These # of VRFs have to be supported at the access, so you
> have to assume this is the limit from the access viewpoint. 4K is
> nothing - we have 4K VLANs today to segment and that's nothing. Every
> segmentation technique being talked about speaks of a million plus
> segments. Take that to VRFs, you need a million VRFs on the control
> plane at the access switch. Another problem with a VRF is that it
> will get and store a route for a host, even when there is no host
> talking to it. With dynamic learning or learning based on packet
> arrival you avoid these host routes and limit them to active
> conversations only. That's a huge saving because not every host talks
> to every host.
>
> Then, at massive scale, the failure rates are also massive. At 5
> nines reliability, a hardware entity out of 100,000 will fail every
> 5.25 minutes. Access switches don't have high availability. Software
> fails even faster - OS is generally 4 9's, which means one out of
> 10,000 fails every 5.25 minutes. At millions of instances of such
> entities, there are rapid failures happening. You have to only look
> at massive datacenters today run by Web 2.0 companies, and they all
> echo this view. They basically form clusters of the same application.
> Software moves the workload from one cluster to another. The whole
> cluster can fail over. That's not what you do in a consumer cloud,
> where you have to recover. At massive failure rates, and rapid
> recovery rates, you are moving things around and injecting host
> routes for reachability. It's a convergence problem, especially with
> link-state algorithms.
>
> If the VM can be moved, then all you need to do is install a
> temporary redirect of packets to the new location. Each host will
> refresh the MAC after 15-30 seconds. If the packets are redirected
> from old to new location for these 30 seconds, the redirect can be
> aged automatically. This happens all the time in mobile networks in
> what is called a "fast handoff" where you redirect the packets until
> handoff is completed.
>
> Thanks, Ashish
>
>
--------------------------------------------------------
ZTE Information Security Notice: The information contained in this mail
is solely property of the sender's organization. This mail communication
is confidential. Recipients named above are obligated to maintain
secrecy and are not permitted to disclose the contents of this
communication to others.
This email and any files transmitted with it are confidential and
intended solely for the use of the individual or entity to whom they are
addressed. If you have received this email in error please notify the
originator of the message. Any views expressed in this message are those
of the individual sender.
This message has been scanned for viruses and Spam by ZTE Anti-Spam
system.

------_=_NextPart_001_01CCCAFC.3B35B5DF
Content-Type: text/plain;
	charset="US-ASCII"

Hi Lizhong,

When you do things at the aggregation, you need two sets of encaps:
Agg-to-Agg and Agg-to-Access.

If there are N hosts under an access, M accesses under an Agg, and P
Aggs, then you need:

N * M for access to agg
(P-1) * N * M for agg to agg
Total = N * M + (P-1) * N * M = N * M * P

This is the worst case.

For a better case, assume each VM talks to 25 VMs outside its Agg.
The total = N * M * 25

Let's plug some numbers into this.

For a 48-port access, 50 VMs per port, each with 2 VIFs,
N = 48 * 50 * 2 = 4800

For an agg with 100 ports (50 down and 50 up), M = 50

A million VMs require 1,000,000 / (48 * 50) = 417 accesses.

Each access connects to 4 Aggs, so you need 417 * 4 / 50 = 34 Aggs = P.

Worst case total entries needed at each Agg = 4800 * 50 * 34 = 8,160,000
Better case total entries needed at each Agg = 4800 * 50 * 25 = 6,000,000

You may be wondering why, for 1M VMs, we need 6-8M host routes. It's
because of multi-pathing: a destination can be reached through many
paths, so you install routes for all paths.

When you have 6-8M routes, you can imagine the corresponding
control-plane load.

The additional complexity is that to send a packet from A to B, you
need 3 encaps: access-to-agg, agg-to-agg, and agg-to-access. When a VM
moves, you have to coordinate all these entries. That's another
not-so-easy problem.

Thanks, Ashish


From: Lizhong Jin [mailto:lizhong.jin@zte.com.cn]
Sent: Wednesday, January 04, 2012 8:28 PM
To: Ashish Dalela (adalela)
Cc: robert@raszuk.net; dc@ietf.org; yakov@juniper.net;
aldrin.isaac@gmail.com
Subject: Re: [dc] new drafts

Hi Ashish,

If we implement VRF on the access switch (or ToR), I agree there will
be a scalability problem. Cost will also be an issue: the access
switch would become more expensive than before. How about implementing
VRF on the aggregation router? If the aggregation router can solve the
scalability and high-availability problems, then we should focus on
how to set up the connection between the VM and the aggregation
router. Hope to see your comments.

Thanks
Lizhong


> -----From "Ashish Dalela (adalela)" <adalela@cisco.com>
> Tue, 3 Jan 2012 22:09:17 +0530 -----
>
> To: <robert@raszuk.net>
> Cc: Yakov Rekhter <yakov@juniper.net>, dc@ietf.org,
> Aldrin Isaac <aldrin.isaac@gmail.com>
> Subject: Re: [dc] new drafts
>
> Robert,
>
> Here are some things to evaluate scalability against.
>
> Assume a simple case: under a switch there are 250 VMs, split among
> 10 customers. Each customer has a unique VRF. Normally, we would have
> advertised a /24 route for that switch. In this case your routes to a
> single switch are segmented, there are 10 VRFs, and you will very
> likely have 250 route-table entries in total, segmented by VRF-ids.
> That's routing-table bloat from 1 entry to 250 entries, and it
> happens everywhere. I have assumed public IP addressing, but the same
> thing will happen with private addressing as well.
>
> Then, the number of VRFs you can typically support on a router is
> about 4K. This number of VRFs has to be supported at the access, so
> you have to assume this is the limit from the access viewpoint. 4K is
> nothing - we have 4K VLANs to segment with today, and that's nothing.
> Every segmentation technique being talked about speaks of a
> million-plus segments. Take that to VRFs and you need a million VRFs
> in the control plane at the access switch. Another problem with a VRF
> is that it will get and store a route for a host even when no host is
> talking to it. With dynamic learning, or learning based on packet
> arrival, you avoid these host routes and limit them to active
> conversations only. That's a huge saving, because not every host
> talks to every host.
>
> Then, at massive scale, the failure rates are also massive. At
> five-nines reliability, a hardware entity out of 100,000 will fail
> every 5.25 minutes. Access switches don't have high availability.
> Software fails even faster - an OS is generally four nines, which
> means one out of 10,000 fails every 5.25 minutes. With millions of
> instances of such entities, failures happen rapidly. You only have to
> look at the massive datacenters run by Web 2.0 companies today, and
> they all echo this view. They basically form clusters of the same
> application. Software moves the workload from one cluster to another.
> The whole cluster can fail over. That's not what you do in a consumer
> cloud, where you have to recover.
>
> At massive failure rates and rapid recovery rates, you are moving
> things around and injecting host routes for reachability. It's a
> convergence problem, especially with link-state algorithms.
>
> If the VM can be moved, then all you need to do is install a
> temporary redirect of packets to the new location. Each host will
> refresh the MAC after 15-30 seconds. If the packets are redirected
> from the old to the new location for these 30 seconds, the redirect
> can be aged out automatically. This happens all the time in mobile
> networks, in what is called a "fast handoff", where you redirect the
> packets until the handoff is completed.
>
> Thanks, Ashish
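The entry-count arithmetic in Ashish's message can be checked with a
quick back-of-the-envelope script. The port counts, VMs per port, and
VIFs per VM are the assumptions stated in the message; everything else
follows from them (exact ceilings give 417 accesses and 34 Aggs):

```python
import math

# Assumptions from the message: 48-port access switches, 50 VMs per
# port, 2 VIFs per VM, each access connected to 4 Aggs, 50
# access-facing ports per Agg, 1M VMs in total.
PORTS, VMS_PER_PORT, VIFS_PER_VM = 48, 50, 2

N = PORTS * VMS_PER_PORT * VIFS_PER_VM          # VIFs under one access
M = 50                                          # accesses under one Agg
accesses = math.ceil(1_000_000 / (PORTS * VMS_PER_PORT))
P = math.ceil(accesses * 4 / M)                 # Aggs needed

# Worst case: N*M (access-to-agg) + (P-1)*N*M (agg-to-agg) = N*M*P.
worst = N * M * P
# Better case: each VM talks to only 25 VMs outside its Agg.
better = N * M * 25
print(N, accesses, P, worst, better)   # 4800 417 34 8160000 6000000
```

The point of the exercise survives any rounding choice: for 1M VMs the
per-Agg state lands in the 6-8M range, roughly N * M entries multiplied
by the effective fan-out.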
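The failure-interval claim in the quoted text ("a hardware entity out of
100,000 will fail every 5.25 minutes" at five nines) follows from the
downtime budget, under the assumption, which is mine rather than the
message's, of roughly one outage per entity per year:

```python
# Five-nines availability allows (1 - 0.99999) of a year as downtime.
MIN_PER_YEAR = 365 * 24 * 60                         # 525,600 minutes
downtime_per_entity = (1 - 0.99999) * MIN_PER_YEAR   # ~5.26 min/entity/year
# With 100,000 entities each failing about once a year, the fleet as a
# whole sees one failure roughly every:
fleet_interval = MIN_PER_YEAR / 100_000              # ~5.26 minutes
print(round(downtime_per_entity, 2), round(fleet_interval, 2))
```

The message rounds this to 5.25 minutes; the conclusion is the same:
at this scale, something is always failing somewhere.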
------_=_NextPart_001_01CCCAFC.3B35B5DF--

From linda.dunbar@huawei.com  Wed Jan  4 09:44:59 2012
Return-Path: <linda.dunbar@huawei.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 9609D21F87E7 for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 09:44:59 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.52
X-Spam-Level: 
X-Spam-Status: No, score=-2.52 tagged_above=-999 required=5 tests=[AWL=0.079,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id wJCgx33MbiRr for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 09:44:58 -0800 (PST)
Received: from dfwrgout.huawei.com (dfwrgout.huawei.com [206.16.17.72]) by ietfa.amsl.com (Postfix) with ESMTP id B828521F87E5 for <dc@ietf.org>; Wed,  4 Jan 2012 09:44:58 -0800 (PST)
Received: from 172.18.9.243 (EHLO dfweml202-edg.china.huawei.com) ([172.18.9.243]) by dfwrg01-dlp.huawei.com (MOS 4.2.3-GA FastPath) with ESMTP id ACI42736; Wed, 04 Jan 2012 12:44:58 -0500 (EST)
Received: from DFWEML403-HUB.china.huawei.com (10.193.5.151) by dfweml202-edg.china.huawei.com (172.18.9.108) with Microsoft SMTP Server (TLS) id 14.1.323.3; Wed, 4 Jan 2012 09:44:28 -0800
Received: from DFWEML505-MBX.china.huawei.com ([10.124.31.100]) by dfweml403-hub.china.huawei.com ([10.193.5.151]) with mapi id 14.01.0323.003; Wed, 4 Jan 2012 09:44:18 -0800
From: Linda Dunbar <linda.dunbar@huawei.com>
To: Patrick Frejborg <pfrejborg@gmail.com>, Thomas Narten <narten@us.ibm.com>
Thread-Topic: [dc] Elevator Pitch (was: Scoping the Interim meeting)
Thread-Index: AQHMwPcEWcMlJrcEJ0umMoVdOchowZXr19OAgAYt3YCAAJRKAIAASFEAgAjV/oCAALd1AIAAGGfw
Date: Wed, 4 Jan 2012 17:44:17 +0000
Message-ID: <4A95BA014132FF49AE685FAB4B9F17F62A4E5712@dfweml505-mbx>
References: <13205C286662DE4387D9AF3AC30EF456D74E7BD952@EMBX01-WF.jnpr.net> <6665BC1FEA04AB47B1F75FA641C43BC0926D106C@FHDP1LUMXC7V41.us.one.verizon.com> <201112211830.pBLIUiZA017188@cichlid.raleigh.ibm.com> <13205C286662DE4387D9AF3AC30EF456D74E91DBFD@EMBX01-WF.jnpr.net> <CANtnpwj3hCD4UbidDzG=4xChJOaQ1T8mLqQkDUWxoRZV1hjuYA@mail.gmail.com> <201112281650.pBSGo7Mn011365@cichlid.raleigh.ibm.com> <CANtnpwgKKh_6emFK2Gx_WfqU929UK3rzQmh1cuWxoJFGH6eHUw@mail.gmail.com> <2E742C02-F621-497D-AE06-6A91EEEBA498@cdl.asgaard.org> <201201032055.q03KtgnA016017@cichlid.raleigh.ibm.com> <CAHfUk+VEYzCY346A_5fQ+etWskfVNgDbt_qTR0H8eRVcpdsROQ@mail.gmail.com>
In-Reply-To: <CAHfUk+VEYzCY346A_5fQ+etWskfVNgDbt_qTR0H8eRVcpdsROQ@mail.gmail.com>
Accept-Language: en-US, zh-CN
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.192.11.97]
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-CFilter-Loop: Reflected
Cc: "So, Ning" <ning.so@verizon.com>, Ronald Bonica <rbonica@juniper.net>, Christopher LILJENSTOLPE <ietf@cdl.asgaard.org>, "dc@ietf.org" <dc@ietf.org>, Bhumip Khasnabish <vumip1@gmail.com>
Subject: Re: [dc] Elevator Pitch (was: Scoping the Interim meeting)
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 04 Jan 2012 17:44:59 -0000

Patrick,

The PVLAN suggested by RFC 5517 is only within one switch. Those port
properties can't be carried from one switch to another. Are you
suggesting that the IETF should have a new work item to allow VLAN
port properties to be carried between switches? Then the port property
would have to be carried in the data frame, which is almost like
extending the VLAN with some extra bits. Is this work in the domain of
IEEE 802.1?

My understanding is that the 3 different port types (isolated,
community, and promiscuous) are really for establishing E-Tree
services or different forwarding behavior for different hosts, not for
using fewer VIDs to provide client isolation.
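For reference, the single-switch forwarding behavior of the three RFC
5517 port types can be modeled in a few lines (a minimal sketch of the
rules only, not an implementation; the function name and port encoding
are invented for illustration):

```python
# Minimal model of RFC 5517 PVLAN port-type forwarding within one
# switch. A port is (type, community_id); community_id matters only
# for community ports.
def pvlan_allows(src, dst):
    s_type, s_comm = src
    d_type, d_comm = dst
    if "promiscuous" in (s_type, d_type):
        return True                   # promiscuous ports talk to everyone
    if s_type == d_type == "community":
        return s_comm == d_comm       # same community only
    return False                      # isolated ports reach only promiscuous

print(pvlan_allows(("isolated", None), ("promiscuous", None)))   # True
print(pvlan_allows(("isolated", None), ("isolated", None)))      # False
```

The quoted Section 3 text below is precisely about the limitation that
this per-port state is enforced locally, with no standard way to carry
it to a neighboring switch.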


Other comments are inserted below:

Linda

> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> Patrick Frejborg
> Sent: Wednesday, January 04, 2012 1:52 AM
> To: Thomas Narten
> Cc: Ronald Bonica; Bhumip Khasnabish; dc@ietf.org; Christopher
> LILJENSTOLPE; So, Ning
> Subject: Re: [dc] Elevator Pitch (was: Scoping the Interim meeting)
>
[Snip]
> I have two proposals - having them solved would be helpful in the
> design and implementation phases of data center networks.
>
> 1. PVLAN
> There are use cases where PVLAN is needed, e.g. implementing a
> separate Ethernet network for backup traffic. In this use case the
> host will have two NICs - one for "production" traffic (clients
> reaching the server) and one for backup traffic.

[Linda] Is the "backup traffic" for carrying traffic when the
production NIC fails, or for communicating with a back-end
server/storage?

> Usually you assign the 0/0
> route to the "production" NIC; the outcome is that the backup NIC
> should have a very large subnet assigned, so that the backup node can
> be located in the very same subnet -

[Linda] Is the "backup node" providing services to a very large number
of servers?

> and static routing is avoided at
> the server (if you apply static routing at the server, there are two
> teams taking care of the routing domain - moves, adds, and changes
> become an operational nightmare). Because the servers reside in
> different security zones, it is a bad idea to have the backup NICs in
> the same L2 domain - this can be solved by implementing PVLAN, so the
> backup NIC can only reach the node which is providing the backup
> service.

[Linda] Do you mean that the "backup NIC" can only reach the "backup
(or back-end) service"?

> The problem with PVLAN is interoperability between switches and
> switch vendors; from RFC 5517, Section 3:
>
>    When a VLAN spans multiple switches, there is no standard mechanism
>    to propagate port-level isolation information to other switches and,
>    consequently, the isolation behavior fails in other switches
>
> 2. L2 hardening
> I'm not aware of a BCP or informational RFC on how to mitigate L2
> attacks such as VLAN hopping, ARP spoofing, MAC spoofing, etc. I
> always get into discussions with security officers about how much we
> can trust the virtualization of network devices. These issues can be
> mitigated on some switches, but with the increased use of hypervisors
> they should be taken into account at the virtual switches as well.
> Maybe it would be helpful to have an RFC listing all the L2 attacks
> and how they can be mitigated, so that vendors pay better attention
> to these threats.

[Linda] That is a very good point. One of the ARMD objectives is to
deliver an informational RFC with ARP/ND-related security
recommendations. I know many companies have implemented those
features, but it is difficult to motivate people to invest time in
writing up those practices.

>
> Patrick
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc

From narten@us.ibm.com  Wed Jan  4 10:00:26 2012
Return-Path: <narten@us.ibm.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 06AAA21F87F1 for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 10:00:26 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -105.11
X-Spam-Level: 
X-Spam-Status: No, score=-105.11 tagged_above=-999 required=5 tests=[BAYES_05=-1.11, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id ehkm-cyhcvQ2 for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 10:00:25 -0800 (PST)
Received: from e7.ny.us.ibm.com (e7.ny.us.ibm.com [32.97.182.137]) by ietfa.amsl.com (Postfix) with ESMTP id D7DBF21F87EF for <dc@ietf.org>; Wed,  4 Jan 2012 10:00:24 -0800 (PST)
Received: from /spool/local by e7.ny.us.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for <dc@ietf.org> from <narten@us.ibm.com>; Wed, 4 Jan 2012 13:00:16 -0500
Received: from d01relay04.pok.ibm.com (9.56.227.236) by e7.ny.us.ibm.com (192.168.1.107) with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted;  Wed, 4 Jan 2012 12:59:09 -0500
Received: from d01av04.pok.ibm.com (d01av04.pok.ibm.com [9.56.224.64]) by d01relay04.pok.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id q04Hx8GI333824 for <dc@ietf.org>; Wed, 4 Jan 2012 12:59:08 -0500
Received: from d01av04.pok.ibm.com (loopback [127.0.0.1]) by d01av04.pok.ibm.com (8.14.4/8.13.1/NCO v10.0 AVout) with ESMTP id q04Hx8N5027917 for <dc@ietf.org>; Wed, 4 Jan 2012 12:59:08 -0500
Received: from cichlid.raleigh.ibm.com (sig-9-48-42-19.mts.ibm.com [9.48.42.19]) by d01av04.pok.ibm.com (8.14.4/8.13.1/NCO v10.0 AVin) with ESMTP id q04Hx7O2027838 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Wed, 4 Jan 2012 12:59:08 -0500
Received: from cichlid.raleigh.ibm.com (localhost [127.0.0.1]) by cichlid.raleigh.ibm.com (8.14.5/8.12.5) with ESMTP id q04Hx68f009409; Wed, 4 Jan 2012 12:59:06 -0500
Message-Id: <201201041759.q04Hx68f009409@cichlid.raleigh.ibm.com>
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
Date: Wed, 04 Jan 2012 12:59:05 -0500
From: Thomas Narten <narten@us.ibm.com>
x-cbid: 12010417-5806-0000-0000-0000110FC9C7
Cc: dc@ietf.org
Subject: [dc] draft-dalela-dc-requirements-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 04 Jan 2012 18:00:26 -0000

Hi Ashish.

I had a look at this document, as it is focused on requirements.
Thanks for doing this.

One starting comment, as the document says:

>    Scalability hasn't generally been a standards consideration and the
>    problems of scaling are left to implementation. But, in the case of
>    cloud datacenters, scaling is the basic requirement, and all problems
>    of cloud datacenters arise due to scaling. The solution development
>    can't therefore ignore the scaling and optimality problem.

I disagree with the above. Scalability has always been one (of many)
factors that go into the development of a standard. Let's just take it
as a given that any solution has to scale adequately for the
environment in which it is to be deployed. Saying more than that (in
general terms) is probably not a useful discussion. To talk about
scalability, one has to talk about a specific technology and where it
is or will be deployed.

Looking at Section 5, where the main requirements are listed:

>    5.1. The Basic Forwarding Problem
> 
>    Traditionally, datacenter networks have used L2 or L3 technologies.
>    The need to massively scale virtualized hosts breaks both these
>    approaches. L2 networks can't be made to scale because of high number
>    of broadcasts. L3 networks can't support host mobility, since routing
>    uses subnets and an IP cannot be moved out of that subnet. Moving IP
>    in a natively L3 network requires installing host routes at one or
>    more points in the path and that is an approach that can't be
> scaled.

I suspect there is general agreement that the above is a general
"problem". Having one big flat L2 in a data center is great for VM
migration and placement of services "any place, anytime", but can
raise scaling and other concerns. Pushing L3 all the way out to the
edges (e.g., ToR or hypervisor) makes it hard to place (or move)
services/VMs arbitrarily.

The above is one of the motivations behind the NVO3 work.
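[As a rough illustration of the overlay model behind NVO3 (an editorial sketch, not from the draft or this thread): the tenant's L2 frame is encapsulated with a virtual-network identifier and carried over the routed underlay, decoupling VM placement from the physical topology. The header layout below follows the VXLAN format of RFC 7348; the VNI value and frame bytes are made up.]

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """8-byte VXLAN header (RFC 7348): flags with the I bit set,
    24 reserved bits, a 24-bit VNI, and 8 more reserved bits."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!I", 0x08000000) + struct.pack("!I", vni << 8)

# A tenant's Ethernet frame rides inside UDP/IP across the routed fabric,
# so a VM can move anywhere the underlay reaches without renumbering.
inner_frame = b"\x00" * 14                  # placeholder L2 frame (illustrative)
packet = vxlan_header(5001) + inner_frame   # 5001 is a made-up VNI
assert int.from_bytes(packet[4:7], "big") == 5001
```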

> 5.2. The Datacenter Inter-Connectivity Problem
> 
>    There are limits to how much a datacenter would be scaled. Workloads
>    need to be placed closer to the clients to reduce latency and
>    bandwidth. Hence, datacenters need to be split into geographical
>    locations and connected over the Internet. Some of these datacenters
>    may be owned by different administrators, as in the case of private
>    and public cloud interconnectivity. Workloads can move between these
>    datacenters, similar to how they move within the datacenter.

In this section, my takeaway is that there will be multiple,
geographically separated data centers. And that they will need to be
connected together. I suspect everyone agrees with that.

But I don't see how this implies there is any specific IETF work that
needs doing. We already have geographically separated data centers, and
there are, e.g., plenty of VPN technologies available for connecting
them together.

What specifically is missing that prevents the above from being done
today? What is it that you think needs doing that can't be done with
existing standards?

> 5.3. The Multi-Tenancy Problem
> 
>    Datacenters thus far have been wholly used by single tenant. To
>    separate departments within a tenant, VLANs have been used. This
>    seemed sufficient for the number of segments an enterprise would
>    need. But, this approach can't be extended to cloud datacenters.

I suspect you'll get a lot of agreement on this. And one of the key
aims of NVO3 is to address this.

Is the existing NVO3 approach not adequate for the above? If so why
not?
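[For a sense of scale (simple arithmetic, not taken from the draft): the 12-bit 802.1Q VLAN ID caps a network at about 4K segments, while the 24-bit identifiers used by newer overlay encapsulations allow roughly 16 million, which is why VLANs alone don't extend to large multi-tenant clouds.]

```python
# Segment-ID arithmetic: the identifier width is the hard ceiling on
# how many tenant networks can be isolated.
vlan_ids = 2**12 - 2      # usable 802.1Q VLAN IDs (0 and 4095 are reserved)
vni_ids = 2**24           # 24-bit virtual network identifier space

print(vlan_ids)           # 4094
print(vni_ids)            # 16777216
```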

> 5.4. The Technology-Topology Separation Problem
> 
>    While large datacenters are becoming common, medium and small
>    datacenters will continue to exist. These may include a branch office
>    connected to a central office, or a small enterprise datacenter that
>    is connected to a huge public cloud. To move workloads across these
>    networks, the technologies used in the datacenter must be agnostic of
>    the topology employed in the various sized datacenters.

>    A small datacenter may use a mesh topology. A medium datacenter may
>    use a three-tier topology. And a large datacenter may use a two-tier
>    multi-path architecture. It has to be recognized that all these
>    datacenters of various sizes need to interoperate. In particular, it
>    should be possible to use a common technology to connect large and
>    small datacenters, two large datacenters, or two small datacenters.

Isn't this already possible, and indeed, happening today?

What IETF work is needed? What standards gap needs filling?

>    5.5. The Network Convergence Problem
> 
>    Cloud datacenters will be characterized by elasticity. That means
>    that virtual resources are constantly created and destroyed. Typical
>    hardware and software reliabilities of today mean that failures at
>    scale will be fairly common, and automated recovery mechanisms will
>    need to be put in place. When combined with workload mobility for the
>    sake of resource optimization and improving utilization, the churn in
>    the network forwarding tables can be very significant.

What work does the above imply that the IETF needs to do?

>    Mobility also affects virtualized network devices, such as virtual
>    switches, firewalls, load-balancers, etc. For instance, when a server
>    fails and all the VMs are relocated, the associated virtual switch
>    and firewall must also be relocated. This means that any assumption
>    in mobility that the network is a static firmament on which hosts are
>    dynamically attached becomes false. We have to assume that the
>    network is as dynamic as the hosts themselves.

This is interesting. The implication is that when moving a VM,
either

a) a FW or LB (or both) may also have to be moved, or

b) some sort of path enforcement is needed that ensures traffic from
the (now moved) VM continues to go through the same LB or FW as
before.

Do I understand that correctly? And if so, what is the IETF work that
needs to be done to make all this happen?

>  5.6. The East-West Traffic Problem

Is this section saying anything more than that there is a need for
multipathing for east-west traffic?
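[For context on what multipathing typically means here (an illustrative sketch, not from the draft): ECMP hashes each flow's 5-tuple onto one of several equal-cost paths, so a single flow stays on one path (avoiding reordering) while distinct flows spread across the links. The hash function and field choice below are illustrative; real switches do this in hardware.]

```python
import hashlib

def pick_path(src_ip, dst_ip, src_port, dst_port, n_paths):
    """Hash a flow's identifying fields onto one of n_paths equal-cost links."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % n_paths

# Every packet of a given flow maps to the same path...
p1 = pick_path("10.0.0.1", "10.0.1.1", 5000, 80, 4)
p2 = pick_path("10.0.0.1", "10.0.1.1", 5000, 80, 4)
assert p1 == p2
# ...while other flows may land on different links.
print(p1)
```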

> 5.7. The Network SLA Problem
> 
>    Multi-tenant networks need to protect all tenants from overusing
>    network resources. For example, high-traffic load from one tenant
>    should not starve another tenant of bandwidth. Note that in a multi-
>    tenant environment, no tenant has full control or visibility of what
>    other tenants are doing, and how problems can be fixed. A real-time
>    debugging of such problems is very hard for a provider.

...

>    Second, mechanisms to measure and guarantee network SLAs will have to
>    employ active flow management to guarantee bandwidth to all tenants
>    and keep the network provisioned only to the level required. Flow
>    management can be integrated as part of existing forwarding
>    techniques or may need new techniques. Network SLAs can play an
>    important role in determining if sufficient bandwidth is available
>    before a VM is moved to a new location.

Can this not be done today? What specific IETF work would be needed to
support the enforcement of SLAs?

Thomas


From linda.dunbar@huawei.com  Wed Jan  4 10:42:24 2012
Return-Path: <linda.dunbar@huawei.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 2644721F860E for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 10:42:24 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.56
X-Spam-Level: 
X-Spam-Status: No, score=-2.56 tagged_above=-999 required=5 tests=[AWL=0.039,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id bjO7qRU9keFF for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 10:42:23 -0800 (PST)
Received: from dfwrgout.huawei.com (dfwrgout.huawei.com [206.16.17.72]) by ietfa.amsl.com (Postfix) with ESMTP id 91B4E21F8603 for <dc@ietf.org>; Wed,  4 Jan 2012 10:42:23 -0800 (PST)
Received: from 172.18.9.243 (EHLO dfweml202-edg.china.huawei.com) ([172.18.9.243]) by dfwrg02-dlp.huawei.com (MOS 4.2.3-GA FastPath) with ESMTP id ACB43137; Wed, 04 Jan 2012 13:42:23 -0500 (EST)
Received: from DFWEML403-HUB.china.huawei.com (10.193.5.151) by dfweml202-edg.china.huawei.com (172.18.9.108) with Microsoft SMTP Server (TLS) id 14.1.323.3; Wed, 4 Jan 2012 10:41:21 -0800
Received: from DFWEML505-MBX.china.huawei.com ([10.124.31.100]) by dfweml403-hub.china.huawei.com ([10.193.5.151]) with mapi id 14.01.0323.003; Wed, 4 Jan 2012 10:41:01 -0800
From: Linda Dunbar <linda.dunbar@huawei.com>
To: "dc@ietf.org" <dc@ietf.org>
Thread-Topic: draft on BCP for ARP/ND Scaling for Large Data Centers
Thread-Index: AQHMyw5jXYSPvkkPXkS5E8+zC4qNmZX8idxA
Date: Wed, 4 Jan 2012 18:41:00 +0000
Message-ID: <4A95BA014132FF49AE685FAB4B9F17F62A4E57F9@dfweml505-mbx>
Accept-Language: en-US, zh-CN
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.192.11.97]
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-CFilter-Loop: Reflected
Subject: [dc] FW: draft on BCP for ARP/ND Scaling for Large Data Centers
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 04 Jan 2012 18:42:24 -0000

Let's make it clear first that this new informational draft in ARMD is NOT
intended for the DC interim meeting. However, we'd love to hear your comments
and suggestions. Since there are already too many emails on this dc@ietf.org
list, please send your comments and suggestions on this draft to
armd@ietf.org.

Thanks, Linda Dunbar

-----Original Message-----
From: armd-bounces@ietf.org [mailto:armd-bounces@ietf.org] On Behalf Of Linda Dunbar
Sent: Wednesday, January 04, 2012 12:27 PM
To: armd@ietf.org
Subject: [armd] draft on BCP for ARP/ND Scaling for Large Data Centers

Here is an informational draft on BCP for ARP/ND Scaling for large Data Centers.

Your comments and suggestions are appreciated. We will also circulate this
BCP draft at the NANOG BCP group to get more data center operators' feedback.

Linda

-----Original Message-----
From: internet-drafts@ietf.org [mailto:internet-drafts@ietf.org]
Sent: Tuesday, January 03, 2012 10:54 AM
To: Linda Dunbar
Cc: Linda Dunbar; warren@kumari.net; igor@yahoo-inc.com
Subject: New Version Notification for draft-dunbar-armd-arp-nd-scaling-bcp-00.txt

A new version of I-D, draft-dunbar-armd-arp-nd-scaling-bcp-00.txt has been
successfully submitted by Linda Dunbar and posted to the IETF repository.

Filename:	 draft-dunbar-armd-arp-nd-scaling-bcp
Revision:	 00
Title:		 BCP for ARP-ND Scaling for Large Data Centers
Creation date:	 2012-01-03
WG ID:		 Individual Submission
Number of pages: 12

Abstract:
   This draft is intended to document some simple, well-established
   practices which can scale ARP/ND in a data center environment.
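[As a hedged sketch of the kind of practice such a BCP describes (an editorial illustration, not taken from the draft): an edge device can answer ARP requests out of a locally learned IP-to-MAC cache, suppressing the broadcast that would otherwise flood the whole fabric. The class and addresses below are made up.]

```python
class ArpProxy:
    """Toy model of ARP suppression at an edge switch."""

    def __init__(self):
        self.cache = {}                      # ip -> mac, learned from traffic

    def learn(self, ip, mac):
        self.cache[ip] = mac

    def handle_request(self, target_ip):
        """Answer from the cache when possible; otherwise fall back to flooding."""
        mac = self.cache.get(target_ip)
        return ("reply", mac) if mac else ("flood", None)

proxy = ArpProxy()
proxy.learn("10.0.0.5", "00:11:22:33:44:55")
print(proxy.handle_request("10.0.0.5"))   # ('reply', '00:11:22:33:44:55')
print(proxy.handle_request("10.0.0.9"))   # ('flood', None)
```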




The IETF Secretariat
_______________________________________________
armd mailing list
armd@ietf.org
https://www.ietf.org/mailman/listinfo/armd

From david.i.allan@ericsson.com  Wed Jan  4 10:56:39 2012
Return-Path: <david.i.allan@ericsson.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 96EC111E8087 for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 10:56:39 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -6.598
X-Spam-Level: 
X-Spam-Status: No, score=-6.598 tagged_above=-999 required=5 tests=[AWL=0.001,  BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id vKjrQVhM0VlR for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 10:56:38 -0800 (PST)
Received: from imr4.ericy.com (imr4.ericy.com [198.24.6.9]) by ietfa.amsl.com (Postfix) with ESMTP id D56BE11E8086 for <dc@ietf.org>; Wed,  4 Jan 2012 10:56:34 -0800 (PST)
Received: from eusaamw0706.eamcs.ericsson.se ([147.117.20.31]) by imr4.ericy.com (8.14.3/8.14.3/Debian-9.1ubuntu1) with ESMTP id q04IuKmM016910; Wed, 4 Jan 2012 12:56:34 -0600
Received: from EUSAACMS0703.eamcs.ericsson.se ([169.254.1.43]) by eusaamw0706.eamcs.ericsson.se ([147.117.20.31]) with mapi; Wed, 4 Jan 2012 13:56:27 -0500
From: David Allan I <david.i.allan@ericsson.com>
To: Thomas Narten <narten@us.ibm.com>, "Ashish Dalela (adalela)" <adalela@cisco.com>
Date: Wed, 4 Jan 2012 13:56:26 -0500
Thread-Topic: [dc] draft-dalela-dc-requirements-00.txt
Thread-Index: AczLCsm+jZpuzuroSy6ift2TqX44qAABEINw
Message-ID: <60C093A41B5E45409A19D42CF7786DFD5228FC5031@EUSAACMS0703.eamcs.ericsson.se>
References: <201201041759.q04Hx68f009409@cichlid.raleigh.ibm.com>
In-Reply-To: <201201041759.q04Hx68f009409@cichlid.raleigh.ibm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Cc: "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] draft-dalela-dc-requirements-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 04 Jan 2012 18:56:39 -0000

If the goal is to describe the generalized characteristics of what is needed:
An absolutely flat broadcast domain does not scale...duh!
An absolutely flat L2 network does not scale...duh!
Partitioning the network into a large number of virtual broadcast domains or
L2VPNs/VLANs is what works for many adopters, as it supports the PMO. This is
what numerous existing standardized and proprietary solutions offer, with
various shades of grey attribute-wise (e.g. scaling, ordering guarantees,
properties when failures occur, broadcast containment, etc.). The one
observation is that a 24-bit VLAN tag seems to be the current gold standard,
both with the IEEE and with proprietary or proposed approaches.

It would be doing the group a service if the issues with Ethernet were not
presented based on a view stuck in perhaps the 2004-2005 timeframe, or
perhaps even before the standardization of the original 12-bit VLAN tag, let
alone what has come since.

;-)
Dave

-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Thomas Narten
Sent: Wednesday, January 04, 2012 9:59 AM
To: Ashish Dalela (adalela)
Cc: dc@ietf.org
Subject: [dc] draft-dalela-dc-requirements-00.txt

_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From xuxiaohu@huawei.com  Wed Jan  4 17:34:45 2012
Return-Path: <xuxiaohu@huawei.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 3EAF01F0C44 for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 17:34:45 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.388
X-Spam-Level: 
X-Spam-Status: No, score=-2.388 tagged_above=-999 required=5 tests=[AWL=-0.646, BAYES_00=-2.599, CN_BODY_35=0.339, MIME_BASE64_TEXT=1.753, MIME_CHARSET_FARAWAY=2.45, RCVD_IN_DNSWL_MED=-4, SARE_MILLIONSOF=0.315]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id tQLN4CqoMtId for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 17:34:44 -0800 (PST)
Received: from szxga03-in.huawei.com (szxga03-in.huawei.com [119.145.14.66]) by ietfa.amsl.com (Postfix) with ESMTP id 403261F0C43 for <dc@ietf.org>; Wed,  4 Jan 2012 17:34:44 -0800 (PST)
Received: from huawei.com (szxga03-in [172.24.2.9]) by szxga03-in.huawei.com (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LXA006FIXO98D@szxga03-in.huawei.com> for dc@ietf.org; Thu, 05 Jan 2012 09:33:45 +0800 (CST)
Received: from szxrg01-dlp.huawei.com ([172.24.2.119]) by szxga03-in.huawei.com (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LXA00BNAXO9QA@szxga03-in.huawei.com> for dc@ietf.org; Thu, 05 Jan 2012 09:33:45 +0800 (CST)
Received: from szxeml203-edg.china.huawei.com ([172.24.2.119]) by szxrg01-dlp.huawei.com (MOS 4.1.9-GA)	with ESMTP id AGE71436; Thu, 05 Jan 2012 09:33:44 +0800
Received: from SZXEML422-HUB.china.huawei.com (10.82.67.161) by szxeml203-edg.china.huawei.com (172.24.2.55) with Microsoft SMTP Server (TLS) id 14.1.323.3; Thu, 05 Jan 2012 09:33:43 +0800
Received: from SZXEML525-MBS.china.huawei.com ([169.254.8.55]) by szxeml422-hub.china.huawei.com ([10.82.67.161]) with mapi id 14.01.0323.003; Thu, 05 Jan 2012 09:33:35 +0800
Date: Thu, 05 Jan 2012 01:33:35 +0000
From: Xuxiaohu <xuxiaohu@huawei.com>
In-reply-to: <201201041301.q04D1kS47564@magenta.juniper.net>
X-Originating-IP: [10.108.4.80]
To: Yakov Rekhter <yakov@juniper.net>
Message-id: <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE7642D0@szxeml525-mbs.china.huawei.com>
MIME-version: 1.0
Content-type: text/plain; charset=gb2312
Content-language: zh-CN
Content-transfer-encoding: base64
Accept-Language: zh-CN, en-US
Thread-topic: [dc] Elevator Pitch
Thread-index: AQHMyuGgTYUT1g/AcEmLauaSqFQ87pX9AOoA
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
X-CFilter-Loop: Reflected
References: <13205C286662DE4387D9AF3AC30EF456D74E7BD952@EMBX01-WF.jnpr.net> <6665BC1FEA04AB47B1F75FA641C43BC0926D106C@FHDP1LUMXC7V41.us.one.verizon.com> <201112211830.pBLIUiZA017188@cichlid.raleigh.ibm.com> <13205C286662DE4387D9AF3AC30EF456D74E91DBFD@EMBX01-WF.jnpr.net> <618BE8B40039924EB9AED233D4A09C5102A4881E@XMB-BGL-416.cisco.com> <4EF7B019.3030202@riw.us> <201112281700.pBSH0kB2011575@cichlid.raleigh.ibm.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE76387F@szxeml525-mbs.china.huawei.com> <229A8E99-6EB2-49E5-B530-BA0F6C7C40AC@asgaard.org> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE7639B3@szxeml525-mbs.china.huawei.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE763ACD@szxeml525-mbs.china.huawei.com> <201201031432.q03EWhS44922@magenta.juniper.net> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE76405C@szxeml525-mbs.china.huawei.com> <201201041301.q04D1kS47564@magenta.juniper.net>
Cc: "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] Elevator Pitch
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 01:34:45 -0000

> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Yakov
> Rekhter
> Sent: January 4, 2012 21:01
> To: Xuxiaohu
> Cc: dc@ietf.org
> Subject: Re: [dc] Elevator Pitch
> 
> Xuxiaohu,
> 
> > Hi Yakov,
> >
> > > Xuxiaohu,
> > >
> > > > Hi all,
> > > >
> > > > Since there are some differences in the problems and requirements
> > > > between data center network (DCN) and data center interconnect
> > > > (DCI), I try to list several problems and requirements for DCN and
> > > > DCI separately as follows. Here the data centers mainly refer to
> > > > those multi-tenant data centers which are operated by public cloud
> > > > providers to deliver cloud service (i.e., IaaS) to their customers
> > > > (i.e., tenants).
> > > >
> > > > 1. DCN problems and requirements:
> > > >
> > > > 1) VM mobility across multiple pods -> LAN/subnet extension across pods
> > > >
> > > > 2) Some cluster applications use non-IP or link-local multicast
> > > >    (optional) -> Layer2 networking
> > > >
> > > > 3) Multi-tenancy isolation -> VPN/VLAN instance scalability
> > > >
> > > > 4) Millions of VMs -> MAC/IP forwarding table scalability
> > > >
> > > > 5) Increasing bandwidth demands for server-to-server connectivity
> > > >    (i.e., east-west traffic) -> ECMP and shortest path forwarding
> > > >    capabilities
> > > >
> > > > 6) Network resiliency -> Fast convergence and multi-homing
> > >
> > > Do you need fast routing convergence, or fast connectivity restoration ?
> >
> > Both.
> 
> I understand the rationale for fast connectivity restoration. But
> given that there is fast connectivity restoration, why would one
> also need fast routing convergence ?

Hi Yakov,

I have a similar question for you: given that there is fast
connectivity restoration, why did we also need fast routing
convergence for the Internet routing system?

Best regards,
Xiaohu

> > > > 7) Thousands of network devices -> Simplified provisioning and operation
> > > >
> > > >
> > > > 2. DCI problems and requirements:
> > > >
> > > > 1) VMs mobility across data centers -> LAN/subnet extension across
> > > >    data centers.
> > > >
> > > > 2) Multi-tenancy isolation -> VLAN/VPN instance scalability
> > > >
> > > > 3) Millions of VMs -> MAC/IP forwarding table scalability
> > > >
> > > > 4) Optimal utilization of WAN bandwidth resource -> Unknown unicast
> > > >    and ARP broadcast suppression
> > > >
> > > > 5) Network resiliency -> Fast convergence and multi-homing
> > >
> > > Do you need fast routing convergence, or fast connectivity restoration ?
> >
> > Both.
> 
> The same question, as above.
> 
> Yakov.
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc

From xuxiaohu@huawei.com  Wed Jan  4 18:49:08 2012
Return-Path: <xuxiaohu@huawei.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id EE96521F84F7 for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 18:49:08 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.352
X-Spam-Level: 
X-Spam-Status: No, score=-2.352 tagged_above=-999 required=5 tests=[AWL=-0.610, BAYES_00=-2.599, CN_BODY_35=0.339, MIME_BASE64_TEXT=1.753, MIME_CHARSET_FARAWAY=2.45, RCVD_IN_DNSWL_MED=-4, SARE_MILLIONSOF=0.315]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id fvNvnyrnodbB for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 18:49:08 -0800 (PST)
Received: from szxga03-in.huawei.com (szxga03-in.huawei.com [119.145.14.66]) by ietfa.amsl.com (Postfix) with ESMTP id E3D2621F84E2 for <dc@ietf.org>; Wed,  4 Jan 2012 18:49:07 -0800 (PST)
Received: from huawei.com (szxga03-in [172.24.2.9]) by szxga03-in.huawei.com (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LXB0003015BAL@szxga03-in.huawei.com> for dc@ietf.org; Thu, 05 Jan 2012 10:48:48 +0800 (CST)
Received: from szxrg02-dlp.huawei.com ([172.24.2.119]) by szxga03-in.huawei.com (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LXB00BIP14TZ9@szxga03-in.huawei.com> for dc@ietf.org; Thu, 05 Jan 2012 10:48:47 +0800 (CST)
Received: from szxeml207-edg.china.huawei.com ([172.24.2.119]) by szxrg02-dlp.huawei.com (MOS 4.1.9-GA)	with ESMTP id AGC62894; Thu, 05 Jan 2012 10:48:46 +0800
Received: from SZXEML420-HUB.china.huawei.com (10.82.67.159) by szxeml207-edg.china.huawei.com (172.24.2.59) with Microsoft SMTP Server (TLS) id 14.1.323.3; Thu, 05 Jan 2012 10:48:46 +0800
Received: from SZXEML525-MBS.china.huawei.com ([169.254.8.55]) by szxeml420-hub.china.huawei.com ([10.82.67.159]) with mapi id 14.01.0323.003; Thu, 05 Jan 2012 10:48:41 +0800
Date: Thu, 05 Jan 2012 02:48:40 +0000
From: Xuxiaohu <xuxiaohu@huawei.com>
In-reply-to: <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE7642D0@szxeml525-mbs.china.huawei.com>
X-Originating-IP: [10.108.4.80]
To: Yakov Rekhter <yakov@juniper.net>
Message-id: <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE76432A@szxeml525-mbs.china.huawei.com>
MIME-version: 1.0
Content-type: text/plain; charset=gb2312
Content-language: zh-CN
Content-transfer-encoding: base64
Accept-Language: zh-CN, en-US
Thread-topic: [dc] Elevator Pitch
Thread-index: AQHMyuGgTYUT1g/AcEmLauaSqFQ87pX9AOoAgAAUayA=
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
X-CFilter-Loop: Reflected
References: <13205C286662DE4387D9AF3AC30EF456D74E7BD952@EMBX01-WF.jnpr.net> <6665BC1FEA04AB47B1F75FA641C43BC0926D106C@FHDP1LUMXC7V41.us.one.verizon.com> <201112211830.pBLIUiZA017188@cichlid.raleigh.ibm.com> <13205C286662DE4387D9AF3AC30EF456D74E91DBFD@EMBX01-WF.jnpr.net> <618BE8B40039924EB9AED233D4A09C5102A4881E@XMB-BGL-416.cisco.com> <4EF7B019.3030202@riw.us> <201112281700.pBSH0kB2011575@cichlid.raleigh.ibm.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE76387F@szxeml525-mbs.china.huawei.com> <229A8E99-6EB2-49E5-B530-BA0F6C7C40AC@asgaard.org> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE7639B3@szxeml525-mbs.china.huawei.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE763ACD@szxeml525-mbs.china.huawei.com> <201201031432.q03EWhS44922@magenta.juniper.net> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE76405C@szxeml525-mbs.china.huawei.com> <201201041301.q04D1kS47564@magenta.juniper.net> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE7642D0@szxeml525-mbs.china.huawei.com>
Cc: "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] Elevator Pitch
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 02:49:09 -0000

DQo+IC0tLS0t08q8/tStvP4tLS0tLQ0KPiC3orz+yMs6IGRjLWJvdW5jZXNAaWV0Zi5vcmcgW21h
aWx0bzpkYy1ib3VuY2VzQGlldGYub3JnXSC0+rHtIFh1eGlhb2h1DQo+ILeiy83KsbzkOiAyMDEy
xOox1MI1yNUgOTozNA0KPiDK1bz+yMs6IFlha292IFJla2h0ZXINCj4gs63LzTogZGNAaWV0Zi5v
cmcNCj4g1vfM4jogUmU6IFtkY10gRWxldmF0b3IgUGl0Y2gNCj4gDQo+IA0KPiA+IC0tLS0t08q8
/tStvP4tLS0tLQ0KPiA+ILeivP7IyzogZGMtYm91bmNlc0BpZXRmLm9yZyBbbWFpbHRvOmRjLWJv
dW5jZXNAaWV0Zi5vcmddILT6se0gWWFrb3YNCj4gPiBSZWtodGVyDQo+ID4gt6LLzcqxvOQ6IDIw
MTLE6jHUwjTI1SAyMTowMQ0KPiA+IMrVvP7IyzogWHV4aWFvaHUNCj4gPiCzrcvNOiBkY0BpZXRm
Lm9yZw0KPiA+INb3zOI6IFJlOiBbZGNdIEVsZXZhdG9yIFBpdGNoDQo+ID4NCj4gPiBYdXhpYW9o
dSwNCj4gPg0KPiA+ID4gSGkgWWFrb3YsDQo+ID4gPg0KPiA+ID4gPiBYdXhpYW9odSwNCj4gPiA+
ID4NCj4gPiA+ID4gPiBIaSBhbGwsDQo+ID4gPiA+ID4NCj4gPiA+ID4gPiBTaW5jZSB0aGVyZSBh
cmUgc29tZSBkaWZmZXJlbmNlcyBpbiB0aGUgcHJvYmxlbXMgYW5kIHJlcXVpcmVtZW50cw0KPiA+
ID4gPiA+IGJldHdlZW4gZGF0YSBjZW50ZXIgbmV0d29yayAoRENOKSBhbmQgZGF0YSBjZW50ZXIg
aW50ZXJjb25uZWN0DQo+ID4gPiA+ID4gKERDSSksIEkgdHJ5IHRvIGxpc3Qgc2V2ZXJhbCBwcm9i
bGVtcyBhbmQgcmVxdWlyZW1lbnRzIGZvciBEQ04gYW5kDQo+ID4gPiA+ID4gRENJIHNlcGFyYXRl
bHkgYXMgZm9sbG93cy4gSGVyZSB0aGUgZGF0IGEgY2VudGVycyBtYWlubHkgcmVmZXIgdG8NCj4g
PiA+ID4gPiB0aG9zZSBtdWx0aS10ZW5hbnQgZGF0YSBjZW50ZXJzIHdoaWNoIGFyZSBvcGVyYXRl
ZCBieSBwdWJsaWMgY2xvdWQNCj4gPiA+ID4gPiBwcm92aWRlcnMgdG8gZGVsaXZlciBjbG91ZCBz
ZXJ2aWNlIChpLmUuLCBJYWFTKSB0byB0aGVpciBjdXN0b21lcnMNCj4gPiA+ID4gPiAoaS5lLiwg
dGVuYW50cykuDQo+ID4gPiA+ID4NCj4gPiA+ID4gPiAxLiBEQ04gcHJvYmxlbXMgYW5kIHJlcXVp
cmVtZW50czoNCj4gPiA+ID4gPg0KPiA+ID4gPiA+IDEpIFZNIG1vYmlsaXR5IGFjcm9zcyBtdWx0
aXBsZSBwb2RzIC0+IExBTi9zdWJuZXQgZXh0ZW5zaW9uIGFjcm9zcw0KPiBwb2RzDQo+ID4gPiA+
ID4NCj4gPiA+ID4gPiAyKSBTb21lIGNsdXN0ZXIgYXBwbGljYXRpb25zIHVzZSBub24tSVAgb3Ig
bGluay1sb2NhbCBtdWx0aWNhc3QNCj4gPiA+ID4gPiAgICAgKG9wdGlvbmFsKSAtPiBMYXllcjIg
bmV0d29ya2luZw0KPiA+ID4gPiA+DQo+ID4gPiA+ID4gMykgTXVsdGktdGVuYW5jeSBpc29sYXRp
b24gLT4gVlBOL1ZMQU4gaW5zdGFuY2Ugc2NhbGFiaWxpdHkNCj4gPiA+ID4gPg0KPiA+ID4gPiA+
IDQpIE1pbGxpb25zIG9mIFZNcyAtPiBNQUMvSVAgZm9yd2FyZGluZyB0YWJsZSBzY2FsYWJpbGl0
eQ0KPiA+ID4gPiA+DQo+ID4gPiA+ID4gNSkgSW5jcmVhc2luZyBiYW5kd2lkdGggZGVtYW5kcyBm
b3Igc2VydmVyLXRvLXNlcnZlciBjb25uZWN0aXZpdHkNCj4gPiA+ID4gPiAgICAoaS5lLiwgZWFz
dC13ZXN0IHRyYWZmaWMpLT4gRUNNUCBhbmQgc2hvcnRlc3QgcGF0aCBmb3J3YXJkaW5nDQo+ID4g
PiA+ID4gICAgY2FwYWJpbGl0aWVzDQo+ID4gPiA+ID4NCj4gPiA+ID4gPiA2KSBOZXR3b3JrIHJl
c2lsaWVuY3kgLT4gRmFzdCBjb252ZXJnZW5jZSBhbmQgbXVsdGktaG9taW5nDQo+ID4gPiA+DQo+
ID4gPiA+IERvIHlvdSBuZWVkIGZhc3Qgcm91dGluZyBjb252ZXJnZW5jZSwgb3IgZmFzdCBjb25u
ZWN0aXZpdHkgcmVzdG9yYXRpb24gPw0KPiA+ID4NCj4gPiA+IEJvdGguDQo+ID4NCj4gPiBJIHVu
ZGVyc3RhbmQgdGhlIHJhdGlvbmFsZSBmb3IgZmFzdCBjb25uZWN0aXZpdHkgcmVzdG9yYXRpb24u
IEJ1dA0KPiA+IGdpdmVuIHRoYXQgdGhlcmUgaXMgZmFzdCBjb25uZWN0aXZpdHkgcmVzdG9yYXRp
b24sIHdoeSB3b3VsZCBvbmUNCj4gPiBhbHNvIG5lZWQgZmFzdCByb3V0aW5nIGNvbnZlcmdlbmNl
ID8NCj4gDQo+IEhpIFlha292LA0KPiANCj4gSSBoYXZlIGEgbXVjaCBzaW1pbGFyIHF1ZXN0aW9u
IHRvIHlvdTogZ2l2ZW4gdGhhdCB0aGVyZSBpcyBmYXN0IGNvbm5lY3Rpdml0eQ0KPiByZXN0b3Jh
dGlvbiwgd2h5IGRpZCB3ZSBhbHNvIG5lZWQgZmFzdCByb3V0aW5nIGNvbnZlcmdlbmNlIGZvciB0
aGUgSW50ZXJuZXQNCj4gcm91dGluZyBzeXN0ZW0/DQoNCkhpIFlha292LA0KDQpCeSB0aGUgd2F5
LCB0aGUgImZhc3QgY29udmVyZ2VuY2UiIGhlcmUgaXMgbWFpbmx5IGluIGNvbnRyYXN0IHRvIHRo
ZSAic2xvdyBjb252ZXJnZW5jZSIgb2YgdGhlIHNwYW5uaW5nLXRyZWUgcHJvdG9jb2wuDQoNCkJl
c3QgcmVnYXJkcywNClhpYW9odQ0KDQo+IEJlc3QgcmVnYXJkcywNCj4gWGlhb2h1DQo+IA0KPiA+
ID4gPiA+IDcpIFRob3VzYW5kcyBvZiBuZXR3b3JrIGRldmljZXMgLT4gU2ltcGxpZmllZCBwcm92
aXNpb25pbmcgYW5kIG9wZXJhdGlvbg0KPiA+ID4gPiA+DQo+ID4gPiA+ID4NCj4gPiA+ID4gPg0K
PiA+ID4gPiA+IDIuIERDSSBwcm9ibGVtcyBhbmQgcmVxdWlyZW1lbnRzOg0KPiA+ID4gPiA+DQo+
ID4gPiA+ID4gMSkgVk1zIG1vYmlsaXR5IGFjcm9zcyBkYXRhIGNlbnRlcnMgLT4gTEFOL3N1Ym5l
dCBleHRlbnNpb24gYWNyb3NzDQo+ID4gPiA+ID4gICAgZGF0YSBjZW50ZXJzLg0KPiA+ID4gPiA+
DQo+ID4gPiA+ID4gMikgTXVsdGktdGVuYW5jeSBpc29sYXRpb24gLT4gVkxBTi9WUE4gaW5zdGFu
Y2Ugc2NhbGFiaWxpdHkNCj4gPiA+ID4gPg0KPiA+ID4gPiA+IDMpIE1pbGxpb25zIG9mIFZNcyAt
PiBNQUMvSVAgZm9yd2FyZGluZyB0YWJsZSBzY2FsYWJpbGl0eQ0KPiA+ID4gPiA+DQo+ID4gPiA+
ID4gNCkgT3B0aW1hbCB1dGlsaXphdGlvbiBvZiBXQU4gYmFuZHdpZHRoIHJlc291cmNlIC0+IFVu
a25vd24gdW5pY2FzdA0KPiA+ID4gPiA+ICAgIGFuZCBBUlAgYnJvYWRjYXN0IHN1cHByZXNzaW9u
DQo+ID4gPiA+ID4NCj4gPiA+ID4gPiA1KSBOZXR3b3JrIHJlc2lsaWVuY3kgLT4gRmFzdCBjb252
ZXJnZW5jZSBhbmQgbXVsdGktaG9taW5nDQo+ID4gPiA+DQo+ID4gPiA+IERvIHlvdSBuZWVkIGZh
c3Qgcm91dGluZyBjb252ZXJnZW5jZSwgb3IgZmFzdCBjb25uZWN0aXZpdHkgcmVzdG9yYXRpb24g
Pw0KPiA+ID4NCj4gPiA+IEJvdGguDQo+ID4NCj4gPiBUaGUgc2FtZSBxdWVzdGlvbiwgYXMgYWJv
dmUuDQo+ID4NCj4gPiBZYWtvdi4NCj4gPiBfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fXw0KPiA+IGRjIG1haWxpbmcgbGlzdA0KPiA+IGRjQGlldGYub3JnDQo+
ID4gaHR0cHM6Ly93d3cuaWV0Zi5vcmcvbWFpbG1hbi9saXN0aW5mby9kYw0KPiBfX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXw0KPiBkYyBtYWlsaW5nIGxpc3QN
Cj4gZGNAaWV0Zi5vcmcNCj4gaHR0cHM6Ly93d3cuaWV0Zi5vcmcvbWFpbG1hbi9saXN0aW5mby9k
Yw0K

From adalela@cisco.com  Wed Jan  4 22:48:55 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 35AE41F0C36 for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 22:48:55 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.386
X-Spam-Level: 
X-Spam-Status: No, score=-2.386 tagged_above=-999 required=5 tests=[AWL=0.213,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id vUL9aPG54TK0 for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 22:48:54 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id 9F69D21F865E for <dc@ietf.org>; Wed,  4 Jan 2012 22:48:52 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=13341; q=dns/txt; s=iport; t=1325746132; x=1326955732; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=Zk6FYx0MEJk3f2b+zCkFAjnVi1w5StEQjkGR8joHnv0=; b=IHkm6237v+dk9KY7I8cAofIPnzJqM/MBksPcYOpeWbDiYXf9wzdrc6Jv PmZq7h05GNXBQqUxRioZ+w70s4h1P9b9onA5jSmDZb7h/5vJMhMPnEP2j ITfNI9cQm1j7+B1tE/crw/IppQpL7mXXFr4KOUKuSZwVeTKhy1sNHbQaK Y=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AqAEAKRGBU9Io8UY/2dsb2JhbAA5Cq14gXIBAQEDAQEBAQ8BHQorCQIJBQcEAgEIEQEDAQELBgUSAQYBJh8DBggBAQQLCAgTB4dYCJcoAZ15iFYVgkNjBIg3nwo
X-IronPort-AV: E=Sophos;i="4.71,460,1320624000";  d="scan'208";a="2818325"
Received: from vla196-nat.cisco.com (HELO bgl-core-2.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 05 Jan 2012 06:48:50 +0000
Received: from xbh-bgl-411.cisco.com (xbh-bgl-411.cisco.com [72.163.129.201]) by bgl-core-2.cisco.com (8.14.3/8.14.3) with ESMTP id q056mo8D009410; Thu, 5 Jan 2012 06:48:50 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-411.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Thu, 5 Jan 2012 12:18:50 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Thu, 5 Jan 2012 12:18:45 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B25AA7@XMB-BGL-416.cisco.com>
In-Reply-To: <201201041759.q04Hx68f009409@cichlid.raleigh.ibm.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] draft-dalela-dc-requirements-00.txt
Thread-Index: AczLCsQ7N3QuxAmFS6q3RBxCp/3ocAAZzt0A
References: <201201041759.q04Hx68f009409@cichlid.raleigh.ibm.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Thomas Narten" <narten@us.ibm.com>
X-OriginalArrivalTime: 05 Jan 2012 06:48:50.0748 (UTC) FILETIME=[151997C0:01CCCB76]
Cc: dc@ietf.org
Subject: Re: [dc] draft-dalela-dc-requirements-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 06:48:55 -0000

>> I disagree with the above. Scalability has always been one (of many)
factors that goes into development of a standard. Let's just take it
as a given that any solution has to scale adequately for the
environment in which it is to be deployed. Saying more than that (in
general terms) is probably not a useful discussion. To talk about
scalability, one has to talk about a specific technology and where it
is or will be deployed.

Going by the number of map-encap approaches out there that don't scale
at the network boundaries (access, interconnect, etc.), it would *seem*
that we did not give adequate attention to scale. It might be that
better alternatives weren't available. I'm OK to modify this statement
as follows: "Scalability is a primary consideration to be kept in mind,
because without the intended scale as a requirement, almost any solution
will appear to work". Does that sound right?

>> I suspect there is general agreement that the above is a general
"problem".

Yes, so we have restated the obvious to a lot of people in the IETF.
When collecting the problem statement, I believe the starting point
should be what is very widely deployed today, and that is STP and OSPF.

>> What specifically is missing that prevents the above from being done
today? What is it that you think needs doing that can't be done with
existing standards?

You perhaps missed the following problem statement:

<snip>
   treating inter and intra datacenter
   as entirely independent leads to new issues at the edge that arise
   from trying to map one forwarding approach within datacenter to
   another forwarding approach between datacenters. In some cases, both
   L2 and L3 approaches may be needed to connect two datacenters.
   Further, ideally, customer segmentation in the internet needs to be
   done similar to the segmentation in the datacenter. This simplifies
   the identification of a customer's packets in the Internet as in the
   datacenter. Common QoS and Security policies can be applied, in both
   the domains if there is a common way to identify packets
</snip>

The key problem is still that datacenter inter-connectivity is not
necessarily about connecting DCs of the same provider. It is also
about connecting them between private and public domains. We could use
an approach that makes the provider DC edge scale very high, but you
can't do that for a customer who has a smaller datacenter - you will be
pushing the complexity to the customer edge, and their devices aren't
designed to scale. In other words, if I have a large public cloud
connected to a small private cloud, should the small cloud bear the
burden of the large cloud? That needs better stating, I agree.


>> I suspect you'll get a lot of agreement on this. And one of the key
aims of NVO3 is to address this. Is the existing NVO3 approach not
adequate for the above? If so why not?

There are many approaches out there, and the discussion of approaches is
in the separate draft.
tools.ietf.org/html/draft-dalela-dc-approaches-00. We understand that
many of these problems may have been stated in other places. But, we
can't avoid that.

>> Isn't this already possible, and indeed, happening today? What IETF
work is needed? What standards gap needs filling?

Not necessarily. Take the example of two architectures - scale-up vs.
scale-out and compare them for map-encap. The scale-up model requires
less switch-to-switch map-encaps but a lot more internal mapping. So,
from a technology perspective, there is no problem at all if you look at
this from the outside. I can claim that I have one huge switch in which
everything is connected. The problem is just abstracted from view, and
it may be inside the huge switch. Contrast this with the scale-out model
where there are many smaller switches and the map-encap is externalized.
The problem is more visible. The technology you devise has to be such
that I can do both scale-up and scale-out.

>> What work does the above imply that the IETF needs to do?

If you have looked at the other thread, which talked about the number of
routes, a 1M VM datacenter can require several million host routes. That
implies slow convergence. There have been other discussions on
convergence as well, which talked about pulling out a route on demand
when packets arrive.
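
To put rough numbers on the host-route point (a back-of-envelope sketch; the 1M VM count is from the thread, the churn and update rates are assumptions for illustration only):

```python
# Sketch of why per-VM host routes strain convergence: every VM move or
# failure touches routing state. All rate figures below are assumed.
VMS = 1_000_000                     # datacenter size discussed in the thread
ROUTES_PER_VM = 2                   # hypothetical: one IPv4 + one IPv6 host route
CHURN_FRACTION = 0.01               # assumed: 1% of VMs move/restart per hour
UPDATES_PER_SEC = 1_000             # assumed route-update processing rate

host_routes = VMS * ROUTES_PER_VM
updates_per_hour = VMS * CHURN_FRACTION * ROUTES_PER_VM
reconverge_seconds = host_routes / UPDATES_PER_SEC  # full-table relearn time

print(f"host routes in RIB/FIB:        {host_routes:,}")
print(f"route updates/hour from churn: {updates_per_hour:,.0f}")
print(f"full-table reconvergence:      {reconverge_seconds:,.0f} s")
```

Even with generous assumed update rates, a full relearn of millions of host routes takes on the order of half an hour, which is the "slow convergence" being pointed at.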

>> Is this section saying anything more than there is a need for
multipathing for East West traffic?

No, not overtly. Internally, this is also tied to the flow-management
problem.

>> Can this not be done today? What specific IETF work would be needed
to support the enforcement of SLAs?

No, there is no IETF work done to define SLAs. For service provider
environments, you can define an SLA at the access for a given user. But
when you have 10 VMs talking to each other and you want to guarantee a
bandwidth SLA on a VLAN, there is nothing out there. The other fact is
that in a multi-tenant environment, there is no guarantee that you will
get 1G of bandwidth just because you have a 1G interface on the VM.
Typical network planning takes into account the "whole" network design,
including what applications you are going to run and what bandwidths
they need. That isn't true for the cloud, at least.
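
As a rough illustration of the 1G-interface point (all numbers here are assumptions, not from the draft): a shared fabric is typically oversubscribed, so the VM's NIC speed says little about the bandwidth it actually gets.

```python
# Illustrative oversubscription arithmetic for a shared uplink.
# The capacity and VM counts are assumed values for the sketch.
uplink_gbps = 40        # assumed aggregate uplink capacity
vms = 200               # assumed VMs sharing that uplink
vm_nic_gbps = 1         # each VM sees a 1G virtual NIC

offered_load = vms * vm_nic_gbps           # worst case: all VMs burst at once
oversubscription = offered_load / uplink_gbps
fair_share = uplink_gbps / vms             # Gbps per VM if all are active

print(f"oversubscription ratio: {oversubscription:.1f}:1")
print(f"fair share under full load: {fair_share * 1000:.0f} Mbps per 1G NIC")
```

Under these assumed numbers a "1G" VM NIC gets 200 Mbps when everyone is busy, which is exactly the gap an enforceable network SLA would have to manage.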

After we are done discussing these, we can discuss what specific
modifications we need to clarify the problem statement further, since it
is clear from this email that not all things are obvious to everyone.

Thanks, Ashish


-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
Thomas Narten
Sent: Wednesday, January 04, 2012 11:29 PM
To: Ashish Dalela (adalela)
Cc: dc@ietf.org
Subject: [dc] draft-dalela-dc-requirements-00.txt

Hi Ashish.

I had a look at this document as it is focused on requirements. Thanks
for doing this.

One starting comment, as the document says:

>    Scalability hasn't generally been a standards consideration and the
>    problems of scaling are left to implementation. But, in the case of
>    cloud datacenters, scaling is the basic requirement, and all problems
>    of cloud datacenters arise due to scaling. The solution development
>    can't therefore ignore the scaling and optimality problem.

I disagree with the above. Scalability has always been one (of many)
factors that goes into development of a standard. Let's just take it
as a given that any solution has to scale adequately for the
environment in which it is to be deployed. Saying more than that (in
general terms) is probably not a useful discussion. To talk about
scalability, one has to talk about a specific technology and where it
is or will be deployed.

Looking at Section 5, where the main requirements are listed:

>    5.1. The Basic Forwarding Problem
>
>    Traditionally, datacenter networks have used L2 or L3 technologies.
>    The need to massively scale virtualized hosts breaks both these
>    approaches. L2 networks can't be made to scale because of high number
>    of broadcasts. L3 networks can't support host mobility, since routing
>    uses subnets and an IP cannot be moved out of that subnet. Moving IP
>    in a natively L3 network requires installing host routes at one or
>    more points in the path and that is an approach that can't be scaled.

I suspect there is general agreement that the above is a general
"problem". Having one big flat L2 in a data center is great for VM
migration and placement of services "any place, anytime", but can
raise scaling and other concerns. Pushing L3 all the way out to the
edges (e.g., ToR or Hypervisor) makes it hard to place (or move)
services/VMs arbitrarily.

The above is one of the motivations behind the NVO3 work.

> 5.2. The Datacenter Inter-Connectivity Problem
>
>    There are limits to how much a datacenter would be scaled. Workloads
>    need to be placed closer to the clients to reduce latency and
>    bandwidth. Hence, datacenters need to be split into geographical
>    locations and connected over the Internet. Some of these datacenters
>    may be owned by different administrators, as in the case of private
>    and public cloud interconnectivity. Workloads can move between these
>    datacenters, similar to how they move within the datacenter.

In this section, my take away is that there will be multiple,
geographically separated data centers. And that they will need to be
connected together. I suspect everyone agrees with that.

But I don't see how this implies there is any specific IETF work that
needs doing. We already have geographically separated data centers, and
there are, e.g., plenty of VPN technologies available for connecting
them together.

What specifically is missing that prevents the above from being done
today? What is it that you think needs doing that can't be done with
existing standards?

> 5.3. The Multi-Tenancy Problem
>
>    Datacenters thus far have been wholly used by single tenant. To
>    separate departments within a tenant, VLANs have been used. This
>    seemed sufficient for the number of segments an enterprise would
>    need. But, this approach can't be extended to cloud datacenters.

I suspect you'll get a lot of agreement on this. And one of the key
aims of NVO3 is to address this.

Is the existing NVO3 approach not adequate for the above? If so why
not?

> 5.4. The Technology-Topology Separation Problem
>
>    While large datacenters are becoming common, medium and small
>    datacenters will continue to exist. These may include a branch office
>    connected to a central office, or a small enterprise datacenter that
>    is connected to a huge public cloud. To move workloads across these
>    networks, the technologies used in the datacenter must be agnostic of
>    the topology employed in the various sized datacenters.

>    A small datacenter may use a mesh topology. A medium datacenter may
>    use a three-tier topology. And a large datacenter may use a two-tier
>    multi-path architecture. It has to be recognized that all these
>    datacenters of various sizes need to interoperate. In particular, it
>    should be possible to use a common technology to connect large and
>    small datacenters, two large datacenters, or two small datacenters.

Isn't this already possible, and indeed, happening today?

What IETF work is needed? What standards gap needs filling?

>    5.5. The Network Convergence Problem
>
>    Cloud datacenters will be characterized by elasticity. That means
>    that virtual resources are constantly created and destroyed. Typical
>    hardware and software reliabilities of today mean that failures at
>    scale will be fairly common, and automated recovery mechanisms will
>    need to be put in place. When combined with workload mobility for the
>    sake of resource optimization and improving utilization, the churn in
>    the network forwarding tables can be very significant.

What work does the above imply that the IETF needs to do?

>    Mobility also affects virtualized network devices, such as virtual
>    switches, firewalls, load-balancers, etc. For instance, when a server
>    fails and all the VMs are relocated, the associated virtual switch
>    and firewall must also be relocated. This means that any assumption
>    in mobility that the network is a static firmament on which hosts are
>    dynamically attached becomes false. We have to assume that the
>    network is as dynamic as the hosts themselves.

This here is interesting. The implication is that when moving a VM,
either

a) a FW or LB (or both) may also have to be moved, or

b) some sort of path enforcement is needed that ensures traffic from
the (now moved) VM continues to go through the same LB or FW as
before.

Do I understand that correctly? And if so, what is the IETF work that
needs to be done to make all this happen?

>  5.6. The East-West Traffic Problem

Is this section saying anything more than there is a need for
multipathing for East West traffic?

> 5.7. The Network SLA Problem
>
>    Multi-tenant networks need to protect all tenants from overusing
>    network resources. For example, high-traffic load from one tenant
>    should not starve another tenant of bandwidth. Note that in a multi-
>    tenant environment, no tenant has full control or visibility of what
>    other tenants are doing, and how problems can be fixed. A real-time
>    debugging of such problems is very hard for a provider.

...

>    Second, mechanisms to measure and guarantee network SLAs will have to
>    employ active flow management to guarantee bandwidth to all tenants
>    and keep the network provisioned only to the level required. Flow
>    management can be integrated as part of existing forwarding
>    techniques or may need new techniques. Network SLAs can play an
>    important role in determining if sufficient bandwidth is available
>    before a VM is moved to a new location.

Can this not be done today? What specific IETF work would be needed to
support the enforcement of SLAs?

Thomas

_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From adalela@cisco.com  Wed Jan  4 22:58:07 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 4B0BC1F0C47 for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 22:58:07 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.393
X-Spam-Level: 
X-Spam-Status: No, score=-2.393 tagged_above=-999 required=5 tests=[AWL=0.206,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id MEz5TNg2YiUH for <dc@ietfa.amsl.com>; Wed,  4 Jan 2012 22:58:06 -0800 (PST)
Received: from bgl-iport-2.cisco.com (bgl-iport-2.cisco.com [72.163.197.26]) by ietfa.amsl.com (Postfix) with ESMTP id 149AE1F0C38 for <dc@ietf.org>; Wed,  4 Jan 2012 22:58:04 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=10055; q=dns/txt; s=iport; t=1325746685; x=1326956285; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=fNTHHFKZRJPjOAgdp4EtHhi8QwoZWbShhXCRHew6HHI=; b=gaODLpXJ5WsXxjQJeI38NZTyhOav2L7n3FF+19AG5/hG7Ae/1FaAJ45z 2HebUnfIaVsYqcEi5B8dkN0w3Wo1GM2VaQJsfNaKbTj35pN46wh8kMkLC 1Yiqqm6mjk/KUKpArfapQArAH6oe06vGfeM+MwmuHjzO3Q3ar7iija6C2 g=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AqAEAF9JBU9Io8UY/2dsb2JhbABDrXiBcgEBAQMBAQEBDwEdCisJAgkFBwQCAQgRAQMBAQsGFwEGASYfAwYIAQEEAQoICBMHh1gIlyYBnXcEiy5jBIg3nwo
X-IronPort-AV: E=Sophos;i="4.71,460,1320624000";  d="scan'208";a="2813388"
Received: from vla196-nat.cisco.com (HELO bgl-core-3.cisco.com) ([72.163.197.24]) by bgl-iport-2.cisco.com with ESMTP; 05 Jan 2012 06:58:03 +0000
Received: from xbh-bgl-412.cisco.com (xbh-bgl-412.cisco.com [72.163.129.202]) by bgl-core-3.cisco.com (8.14.3/8.14.3) with ESMTP id q056w3Uv000818; Thu, 5 Jan 2012 06:58:03 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-412.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Thu, 5 Jan 2012 12:28:03 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Thu, 5 Jan 2012 12:28:01 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B25ABD@XMB-BGL-416.cisco.com>
In-Reply-To: <60C093A41B5E45409A19D42CF7786DFD5228FC5031@EUSAACMS0703.eamcs.ericsson.se>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] draft-dalela-dc-requirements-00.txt
Thread-Index: AczLCsm+jZpuzuroSy6ift2TqX44qAABEINwABnOOyA=
References: <201201041759.q04Hx68f009409@cichlid.raleigh.ibm.com> <60C093A41B5E45409A19D42CF7786DFD5228FC5031@EUSAACMS0703.eamcs.ericsson.se>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "David Allan I" <david.i.allan@ericsson.com>, "Thomas Narten" <narten@us.ibm.com>
X-OriginalArrivalTime: 05 Jan 2012 06:58:03.0557 (UTC) FILETIME=[5E998D50:01CCCB77]
Cc: dc@ietf.org
Subject: Re: [dc] draft-dalela-dc-requirements-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 06:58:07 -0000

David,

A total of one sentence was dedicated to the L2 problem: "L2 networks
can't be made to scale because of high number of broadcasts." And Linda
has commented on this that the scaling issue is also due to MAC
summarization; that modification is to be made. There is absolutely no
discussion of pre-VLAN days, and I would like to know where you see it.
Broadcast is contained by VLANs, and that is what we are talking about.

L2VPN is another story - because there was VPLS, and then there are lots
of new things. There are problems to be solved, and then problems to be
solved given some solution. So, the problem boundary shifts from the
time you take something as a "given". People generally take OSPF (or
some L3 routing protocol) and VLAN as givens in the datacenter. The rest
is not a "given". So, we have to start from what is given.

I'm fine if we want to change the given to something else. Let me know
what you think the given is today.

Thanks, Ashish

-----Original Message-----
From: David Allan I [mailto:david.i.allan@ericsson.com]
Sent: Thursday, January 05, 2012 12:26 AM
To: Thomas Narten; Ashish Dalela (adalela)
Cc: dc@ietf.org
Subject: RE: [dc] draft-dalela-dc-requirements-00.txt


If the goal is to describe the generalized characteristics of what is
needed:
An absolutely flat broadcast domain does not scale...duh!
An absolutely flat L2 network does not scale...duh!
Partitioning the network into a large number of virtual broadcast
domains or L2VPNs/VLANs is what works for many adopters as it supports
PMO. This is what numerous existing standardized and proprietary
solutions offer, with various shades of grey attribute-wise (e.g.
scaling, ordering guarantees, properties when failures occur, broadcast
containment, etc.). The one observation is that a 24-bit tag seems
to be the current gold standard, both with the IEEE and with proprietary
or proposed approaches.
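
The arithmetic behind the 12-bit versus 24-bit tag comparison is simple enough to sketch (my illustration; the reserved-value accounting follows the 802.1Q convention, and the 24-bit figure matches I-SID/VNID-style identifiers):

```python
# Segment counts offered by the original 12-bit VLAN ID versus a
# 24-bit tenant identifier (e.g. an I-SID- or VNID-style tag).
vlan_12bit = 2 ** 12 - 2   # 802.1Q VLAN IDs, minus reserved 0x000 and 0xFFF
tag_24bit = 2 ** 24        # 24-bit identifier space

print(f"12-bit VLAN IDs: {vlan_12bit:,}")   # 4,094 usable segments
print(f"24-bit tags:     {tag_24bit:,}")    # ~16.8M segments
print(f"ratio: ~{tag_24bit // vlan_12bit}x more tenant segments")
```

Roughly a 4,000-fold increase in segment count, which is why the 24-bit tag keeps reappearing in both standardized and proprietary proposals.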

It would be doing the group a service if the issues with Ethernet were
not presented based on a view stuck in perhaps the 2004-2005 timeframe,
or perhaps even before the standardization of the original 12-bit VLAN
tag, let alone what has come since.

;-)
Dave

-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
Thomas Narten
Sent: Wednesday, January 04, 2012 9:59 AM
To: Ashish Dalela (adalela)
Cc: dc@ietf.org
Subject: [dc] draft-dalela-dc-requirements-00.txt

Hi Ashish.

I had a look at this document as it is focused on requirements. Thanks
for doing this.

One starting comment, as the document says:

>    Scalability hasn't generally been a standards consideration and the
>    problems of scaling are left to implementation. But, in the case of
>    cloud datacenters, scaling is the basic requirement, and all problems
>    of cloud datacenters arise due to scaling. The solution development
>    can't therefore ignore the scaling and optimality problem.

I disagree with the above. Scalability has always been one (of many)
factors that goes into development of a standard. Let's just take it as
a given that any solution has to scale adequately for the environment in
which it is to be deployed. Saying more than that (in general terms) is
probably not a useful discussion. To talk about scalability, one has to
talk about a specific technology and where it is or will be deployed.

Looking at Section 5, where the main requirements are listed:

>    5.1. The Basic Forwarding Problem
>
>    Traditionally, datacenter networks have used L2 or L3 technologies.
>    The need to massively scale virtualized hosts breaks both these
>    approaches. L2 networks can't be made to scale because of high number
>    of broadcasts. L3 networks can't support host mobility, since routing
>    uses subnets and an IP cannot be moved out of that subnet. Moving IP
>    in a natively L3 network requires installing host routes at one or
>    more points in the path and that is an approach that can't be scaled.

I suspect there is general agreement that the above is a general
"problem". Having one big flat L2 in a data center is great for VM
migration and placement of services "any place, anytime", but can raise
scaling and other concerns. Pushing L3 all the way out to the edges
(e.g., ToR or Hypervisor) makes it hard to place (or move) services/VMs
arbitrarily.

The above is one of the motivations behind the NVO3 work.

> 5.2. The Datacenter Inter-Connectivity Problem
>
>    There are limits to how much a datacenter would be scaled. Workloads
>    need to be placed closer to the clients to reduce latency and
>    bandwidth. Hence, datacenters need to be split into geographical
>    locations and connected over the Internet. Some of these datacenters
>    may be owned by different administrators, as in the case of private
>    and public cloud interconnectivity. Workloads can move between these
>    datacenters, similar to how they move within the datacenter.

In this section, my take away is that there will be multiple,
geographically separated data centers. And that they will need to be
connected together. I suspect everyone agrees with that.

But I don't see how this implies there is any specific IETF work that
needs doing. We already have geographically separated data centers, and
there are, e.g., plenty of VPN technologies available for connecting
them together.

What specifically is missing that prevents the above from being done
today? What is it that you think needs doing that can't be done with
existing standards?

> 5.3. The Multi-Tenancy Problem
>
>    Datacenters thus far have been wholly used by single tenant. To
>    separate departments within a tenant, VLANs have been used. This
>    seemed sufficient for the number of segments an enterprise would
>    need. But, this approach can't be extended to cloud datacenters.

I suspect you'll get a lot of agreement on this. And one of the key aims
of NVO3 is to address this.

Is the existing NVO3 approach not adequate for the above? If so why not?

> 5.4. The Technology-Topology Separation Problem
>
>    While large datacenters are becoming common, medium and small
>    datacenters will continue to exist. These may include a branch office
>    connected to a central office, or a small enterprise datacenter that
>    is connected to a huge public cloud. To move workloads across these
>    networks, the technologies used in the datacenter must be agnostic of
>    the topology employed in the various sized datacenters.

>    A small datacenter may use a mesh topology. A medium datacenter may
>    use a three-tier topology. And a large datacenter may use a two-tier
>    multi-path architecture. It has to be recognized that all these
>    datacenters of various sizes need to interoperate. In particular, it
>    should be possible to use a common technology to connect large and
>    small datacenters, two large datacenters, or two small datacenters.

Isn't this already possible, and indeed, happening today?

What IETF work is needed? What standards gap needs filling?

>    5.5. The Network Convergence Problem
>
>    Cloud datacenters will be characterized by elasticity. That means
>    that virtual resources are constantly created and destroyed. Typical
>    hardware and software reliabilities of today mean that failures at
>    scale will be fairly common, and automated recovery mechanisms will
>    need to be put in place. When combined with workload mobility for the
>    sake of resource optimization and improving utilization, the churn in
>    the network forwarding tables can be very significant.

What work does the above imply that the IETF needs to do?

>    Mobility also affects virtualized network devices, such as virtual
>    switches, firewalls, load-balancers, etc. For instance, when a server
>    fails and all the VMs are relocated, the associated virtual switch
>    and firewall must also be relocated. This means that any assumption
>    in mobility that the network is a static firmament on which hosts are
>    dynamically attached becomes false. We have to assume that the
>    network is as dynamic as the hosts themselves.

This here is interesting. The implication is that when moving a VM,
either

a) a FW or LB (or both) may also have to be moved, or

b) some sort of path enforcement is needed that ensures traffic from the
(now moved) VM continues to go through the same LB or FW as before.

Do I understand that correctly? And if so, what is the IETF work that
needs to be done to make all this happen?

>  5.6. The East-West Traffic Problem

Is this section saying anything more than that there is a need for
multipathing for East-West traffic?

> 5.7. The Network SLA Problem
>
>    Multi-tenant networks need to protect all tenants from overusing
>    network resources. For example, high-traffic load from one tenant
>    should not starve another tenant of bandwidth. Note that in a multi-
>    tenant environment, no tenant has full control or visibility of what
>    other tenants are doing, and how problems can be fixed. A real-time
>    debugging of such problems is very hard for a provider.

...

>    Second, mechanisms to measure and guarantee network SLAs will have to
>    employ active flow management to guarantee bandwidth to all tenants
>    and keep the network provisioned only to the level required. Flow
>    management can be integrated as part of existing forwarding
>    techniques or may need new techniques. Network SLAs can play an
>    important role in determining if sufficient bandwidth is available
>    before a VM is moved to a new location.

Can this not be done today? What specific IETF work would be needed to
support the enforcement of SLAs?

Thomas

_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From pfrejborg@gmail.com  Thu Jan  5 00:20:37 2012
Return-Path: <pfrejborg@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 8D67621F86DF for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 00:20:37 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.599
X-Spam-Level: 
X-Spam-Status: No, score=-3.599 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id K9MmnGGHvYV9 for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 00:20:35 -0800 (PST)
Received: from mail-we0-f172.google.com (mail-we0-f172.google.com [74.125.82.172]) by ietfa.amsl.com (Postfix) with ESMTP id 11E9821F8681 for <dc@ietf.org>; Thu,  5 Jan 2012 00:20:33 -0800 (PST)
Received: by werb14 with SMTP id b14so210043wer.31 for <dc@ietf.org>; Thu, 05 Jan 2012 00:20:32 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; bh=SWB+hd3O0x/2h78hPDgPHT3Sp8+8lQn0wsJzyXQ/RTg=; b=GHZSa3jEMgYUF0slpY7FdRNDomKQqNBXoQuUEABdUu6FkWfMDAoCDENB3j5YWh2Xlp 3nAS4HlDsazYgY9iqIxTH2yDi5+K0Zc96rcVfd0oQdxnSrGHrb46pu+FhKohnqRwJH0Z bc7fcnZoiZV0tEezRoBzEbXUJmPrR8PhsJSB8=
MIME-Version: 1.0
Received: by 10.216.139.15 with SMTP id b15mr496366wej.15.1325751632191; Thu, 05 Jan 2012 00:20:32 -0800 (PST)
Received: by 10.227.184.5 with HTTP; Thu, 5 Jan 2012 00:20:32 -0800 (PST)
In-Reply-To: <4A95BA014132FF49AE685FAB4B9F17F62A4E5712@dfweml505-mbx>
References: <13205C286662DE4387D9AF3AC30EF456D74E7BD952@EMBX01-WF.jnpr.net> <6665BC1FEA04AB47B1F75FA641C43BC0926D106C@FHDP1LUMXC7V41.us.one.verizon.com> <201112211830.pBLIUiZA017188@cichlid.raleigh.ibm.com> <13205C286662DE4387D9AF3AC30EF456D74E91DBFD@EMBX01-WF.jnpr.net> <CANtnpwj3hCD4UbidDzG=4xChJOaQ1T8mLqQkDUWxoRZV1hjuYA@mail.gmail.com> <201112281650.pBSGo7Mn011365@cichlid.raleigh.ibm.com> <CANtnpwgKKh_6emFK2Gx_WfqU929UK3rzQmh1cuWxoJFGH6eHUw@mail.gmail.com> <2E742C02-F621-497D-AE06-6A91EEEBA498@cdl.asgaard.org> <201201032055.q03KtgnA016017@cichlid.raleigh.ibm.com> <CAHfUk+VEYzCY346A_5fQ+etWskfVNgDbt_qTR0H8eRVcpdsROQ@mail.gmail.com> <4A95BA014132FF49AE685FAB4B9F17F62A4E5712@dfweml505-mbx>
Date: Thu, 5 Jan 2012 10:20:32 +0200
Message-ID: <CAHfUk+X7nhx0dFBxTR3eBUOi8noqq6nRuh_ywVgA471=-o3fAQ@mail.gmail.com>
From: Patrick Frejborg <pfrejborg@gmail.com>
To: Linda Dunbar <linda.dunbar@huawei.com>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable
Cc: Thomas Narten <narten@us.ibm.com>, "So, Ning" <ning.so@verizon.com>, "dc@ietf.org" <dc@ietf.org>, Bhumip Khasnabish <vumip1@gmail.com>, Ronald Bonica <rbonica@juniper.net>, Christopher LILJENSTOLPE <ietf@cdl.asgaard.org>
Subject: Re: [dc] Elevator Pitch (was: Scoping the Interim meeting)
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 08:20:37 -0000

On Wed, Jan 4, 2012 at 7:44 PM, Linda Dunbar <linda.dunbar@huawei.com> wrote:
> Patrick,
>
> The PVLAN suggested by RFC 5517 is only within one switch. Those port
> properties can't be carried from one switch to another switch. Are you
> suggesting that IETF should have a new work item to allow the VLAN port
> property to be carried from one switch to another? Then the Port
> property has to be carried in the data frame. It is almost like
> extending some bits for the VLAN. Is this work in the domain of IEEE802.1?

[Patrick] Hmm, can we avoid the data plane and instead use the control
plane to carry port properties, e.g. by applying extensions to LLDP?
It should be quite straightforward, since LLDP makes use of TLVs.
It might be that this topic is better suited for IEEE802.1 than IETF -
anyway, it is a pain point for me; I have seen quite a few data centers
using PVLAN.
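Since LLDP carries everything as TLVs, a port-property extension like the one suggested above could ride in an organizationally specific TLV (type 127). A minimal sketch in Python, with an entirely hypothetical OUI, subtype, and value layout (role byte plus primary VLAN ID) purely for illustration - this is not a standardized TLV:

```python
import struct

def lldp_org_tlv(oui: bytes, subtype: int, value: bytes) -> bytes:
    """Build an organizationally specific LLDP TLV (type 127).

    The 16-bit header packs a 7-bit TLV type and a 9-bit payload
    length; the payload is a 3-byte OUI, 1-byte subtype, and value.
    """
    payload = oui + bytes([subtype]) + value
    header = (127 << 9) | len(payload)
    return struct.pack("!H", header) + payload

# Hypothetical value layout: 1-byte PVLAN port role + 2-byte primary VID.
PORT_ROLES = {"isolated": 1, "community": 2, "promiscuous": 3}

def pvlan_tlv(role: str, primary_vid: int,
              oui: bytes = b"\x00\x00\x00") -> bytes:
    value = struct.pack("!BH", PORT_ROLES[role], primary_vid)
    return lldp_org_tlv(oui, subtype=1, value=value)

tlv = pvlan_tlv("isolated", 100)
assert tlv[0] >> 1 == 127                  # TLV type field
assert ((tlv[0] & 1) << 8) | tlv[1] == 7   # payload length: 3 + 1 + 3
```

A receiving switch would parse the same TLV and apply the advertised role to the peer-facing port; whether such an extension belongs in IEEE 802.1 or the IETF is exactly the open question.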

>
> My understanding is that the 3 different port types (isolated,
> community, and promiscuous) are really for establishing e-tree services
> or different forwarding behavior for different hosts, but not for using
> a smaller number of VIDs in providing client isolation.
>

[Patrick] The main benefit is to preserve global IP addresses by
avoiding a lot of subnets and static routes on the servers, i.e.
create a lot of isolated VLANs but span one large IP address range over
those VLANs. You can't use RFC1918 addresses, because when you move
customers' backend servers into your data center they have already been
allocated RFC1918 addresses - and there is no sponsor with a budget
for a renumbering project to avoid overlapping addresses.

>
> Other comments are inserted below:
>
> Linda
>
>> -----Original Message-----
>> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
>> Patrick Frejborg
>> Sent: Wednesday, January 04, 2012 1:52 AM
>> To: Thomas Narten
>> Cc: Ronald Bonica; Bhumip Khasnabish; dc@ietf.org; Christopher
>> LILJENSTOLPE; So, Ning
>> Subject: Re: [dc] Elevator Pitch (was: Scoping the Interim meeting)
>>
> [Snip]
>> I have two proposals - having them solved would be helpful in the
>> design and implementing phase of data center networks.
>>
>> 1. PVLAN
>> There are use cases where PVLAN is needed, e.g. implementing a
>> separate Ethernet network for backup traffic. In this use case the
>> host will have two NICs - one for "production" traffic (clients
>> reaching the server) and the backup NIC.
>
> [Linda] Is the "backup traffic" for the purpose of carrying traffic
> when the production NIC fails, or for the purpose of communicating with
> a back-end server/storage?
>
>> Usually you assign the 0/0
>> route for the "production" NIC, the outcome is that the backup NIC
>> should have a very large subnet assigned so that the backup node can
>> be located in the very same subnet -
>
> [Linda] Is the "backup node" providing services to a very large number
> of servers?

[Patrick] Yes, the servers are multi-tenant and are located in
different security zones

>
>> and static routing is avoided at
>> the server (if you apply static routing at server there are two teams
>> taking care of the routing domain - moves, adds and changes becomes an
>> operational nightmare). Because the server resides in different
>> security zones it is a bad idea to have the backup NICs in the same L2
>> domain - it can be solved by implementing PVLAN and the backup NIC can
>> only reach the node which is providing the backup service.
>
> [Linda] Do you mean that the "backup NIC" can only reach the "backup
> (or backend) service"?

[Patrick] Yes, that is the purpose. Then there might be other purposes
as well, e.g. management of the hypervisors, monitoring tools such as
HP OpenView, IBM Tivoli, BMC Patrol, heartbeat traffic for application
clusters, etc., but these are usually assigned their own VLANs and
have no need for PVLAN.

>
>> The problem with PVLAN is interoperability between switches and switch
>> vendors, from RFC5517 section 3:
>>
>>    When a VLAN spans multiple switches, there is no standard mechanism
>>    to propagate port-level isolation information to other switches and,
>>    consequently, the isolation behavior fails in other switches
>
>
>
>>
>> 2. L2 hardening
>> I'm not aware of a BCP or informational RFC on how to mitigate L2
>> attacks, such as VLAN hopping, ARP spoofing, MAC spoofing etc I always
>> get into discussions with security officers how much can we trust the
>> virtualization of network devices. These issues can be mitigated on
>> some switches but with increased use of hypervisors these issues
>> should be taken into account at the virtual switches as well. Maybe it
>> would be helpful with an RFC listing all the L2 attacks and how they
>> can be mitigated so that the vendors pay better attention to these
>> threats.
>
> [Linda] That is a very good point. One of the ARMD objectives is to
> deliver an informational RFC on ARP/ND related security
> recommendations. I know many companies have implemented those features.
> But it is difficult to motivate people to invest time in writing up
> those practices.
>

[Patrick] Have a look at the virtual switches; most of them have a
very poor set of security features. It seems that they aren't aware of
the security issues with L2 attacks.
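As one concrete illustration of the L2-hardening point above: the usual ARP-spoofing mitigation (e.g. Dynamic ARP Inspection on physical switches) boils down to checking observed IP-to-MAC bindings against learned or configured state. A minimal, hypothetical sketch of that idea in Python - not any vendor's actual feature:

```python
class ArpWatcher:
    """Track IP-to-MAC bindings seen in ARP replies and flag conflicts.

    A binding change for a known IP is a possible spoofing attempt
    (or a legitimate re-assignment - a real tool would need an allow
    list and aging, which this sketch omits).
    """

    def __init__(self):
        self.bindings = {}  # ip -> first MAC learned for that ip

    def observe(self, ip: str, mac: str) -> bool:
        """Record an ARP reply; return True on a conflicting binding."""
        known = self.bindings.get(ip)
        if known is None:
            self.bindings[ip] = mac
            return False
        return known != mac

watcher = ArpWatcher()
assert watcher.observe("10.0.0.5", "aa:bb:cc:dd:ee:01") is False  # learned
assert watcher.observe("10.0.0.5", "aa:bb:cc:dd:ee:01") is False  # consistent
assert watcher.observe("10.0.0.5", "de:ad:be:ef:00:01") is True   # conflict
```

The same check is what a hardened virtual switch would apply to frames from its VM-facing ports, which is exactly where the thread notes current vSwitches fall short.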

From xuxiaohu@huawei.com  Thu Jan  5 02:01:02 2012
Return-Path: <xuxiaohu@huawei.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 1452C21F858B for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 02:01:02 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -1.877
X-Spam-Level: 
X-Spam-Status: No, score=-1.877 tagged_above=-999 required=5 tests=[AWL=-1.020, BAYES_00=-2.599, CN_BODY_35=0.339, J_CHICKENPOX_13=0.6, J_CHICKENPOX_43=0.6, MIME_BASE64_TEXT=1.753, MIME_CHARSET_FARAWAY=2.45, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id TP1VmrmPQ4U4 for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 02:01:01 -0800 (PST)
Received: from szxga04-in.huawei.com (szxga04-in.huawei.com [119.145.14.67]) by ietfa.amsl.com (Postfix) with ESMTP id 6F64521F8625 for <dc@ietf.org>; Thu,  5 Jan 2012 02:01:00 -0800 (PST)
Received: from huawei.com (szxga04-in [172.24.2.12]) by szxga04-in.huawei.com (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LXB00LYIL43BO@szxga04-in.huawei.com> for dc@ietf.org; Thu, 05 Jan 2012 18:00:03 +0800 (CST)
Received: from szxrg02-dlp.huawei.com ([172.24.2.119]) by szxga04-in.huawei.com (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LXB00934L436N@szxga04-in.huawei.com> for dc@ietf.org; Thu, 05 Jan 2012 18:00:03 +0800 (CST)
Received: from szxeml206-edg.china.huawei.com ([172.24.2.119]) by szxrg02-dlp.huawei.com (MOS 4.1.9-GA)	with ESMTP id AGD07484; Thu, 05 Jan 2012 17:59:53 +0800
Received: from SZXEML420-HUB.china.huawei.com (10.82.67.159) by szxeml206-edg.china.huawei.com (172.24.2.58) with Microsoft SMTP Server (TLS) id 14.1.323.3; Thu, 05 Jan 2012 17:59:52 +0800
Received: from SZXEML525-MBS.china.huawei.com ([169.254.8.55]) by szxeml420-hub.china.huawei.com ([10.82.67.159]) with mapi id 14.01.0323.003; Thu, 05 Jan 2012 17:59:47 +0800
Date: Thu, 05 Jan 2012 09:59:46 +0000
From: Xuxiaohu <xuxiaohu@huawei.com>
In-reply-to: <618BE8B40039924EB9AED233D4A09C5102B25ABD@XMB-BGL-416.cisco.com>
X-Originating-IP: [10.108.4.80]
To: "Ashish Dalela (adalela)" <adalela@cisco.com>, David Allan I <david.i.allan@ericsson.com>, Thomas Narten <narten@us.ibm.com>
Message-id: <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE7653B3@szxeml525-mbs.china.huawei.com>
MIME-version: 1.0
Content-type: text/plain; charset=gb2312
Content-language: zh-CN
Content-transfer-encoding: base64
Accept-Language: zh-CN, en-US
Thread-topic: [dc] draft-dalela-dc-requirements-00.txt
Thread-index: AQHMywrHhEgV5yAHFkOyfFdfFdu+BpX8COwAgADJnICAALrPAA==
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
X-CFilter-Loop: Reflected
References: <201201041759.q04Hx68f009409@cichlid.raleigh.ibm.com> <60C093A41B5E45409A19D42CF7786DFD5228FC5031@EUSAACMS0703.eamcs.ericsson.se> <618BE8B40039924EB9AED233D4A09C5102B25ABD@XMB-BGL-416.cisco.com>
Cc: "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] draft-dalela-dc-requirements-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 10:01:02 -0000

> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Ashish
> Dalela (adalela)
> Sent: 5 January 2012 14:58
> To: David Allan I; Thomas Narten
> Cc: dc@ietf.org
> Subject: Re: [dc] draft-dalela-dc-requirements-00.txt
> 
> David,
> 
> A total of 1 sentence was dedicated to the L2 problem - "L2 networks
> can't be made to scale because of high number of broadcasts." And Linda
> has commented on this that the scaling issue is also due to MAC
> summarization, modification to be done. There is absolutely no
> discussion about pre-VLAN days, and I would like to know where you see
> it. Broadcast is contained by VLAN, and we are talking of that.
> 
> L2VPN is another story - because there was VPLS, and then there are lots
> of new things. There are problems to be solved, and then problems to be
> solved given some solution. So, the problem boundary shifts from the
> time you take something as a "given". People generally take OSPF (or
> some L3 routing protocol) and VLAN as a given in the datacenter. Rest is
> not a "given". So, we have to start from what is given.

To help understand whether there is any specific IETF work that needs
to be done, it's worthwhile to evaluate whether the existing and proven
VPN technologies could be used in the data center to solve the problems
faced by "the given solutions" as you mentioned. For instance, could the
VPLS solution address the scaling issue associated with the VLAN+STP
solution, or could the host route based L3VPN solution solve the problem
associated with the host route based OSPF solution... if not, what's
missing?

Best regards,
Xiaohu

> I'm fine, if we want to change the given to something else. Let me know
> what we think is the given today.
> 
> Thanks, Ashish
> 
> -----Original Message-----
> From: David Allan I [mailto:david.i.allan@ericsson.com]
> Sent: Thursday, January 05, 2012 12:26 AM
> To: Thomas Narten; Ashish Dalela (adalela)
> Cc: dc@ietf.org
> Subject: RE: [dc] draft-dalela-dc-requirements-00.txt
> 
> If the goal is to describe the generalized characteristics of what is
> needed:
> An absolutely flat broadcast domain does not scale...duh!
> An absolutely flat L2 network does not scale...duh!
> Partitioning the network into a large number of virtual broadcast
> domains or L2VPNs/VLANs is what works for many adopters as it supports
> PMO. This is what numerous existing standardized and proprietary
> solutions offer with various shades of grey attribute wise (e.g.
> scaling, ordering guarantees, properties when failures occur, broadcast
> containment etc.). The one observation is that a 24 bit VLAN tag seems
> to be the current gold standard, both with the IEEE and with proprietary
> or proposed approaches.
> 
> It would be doing the group a disservice if the issues with Ethernet
> were not presented based on a view stuck in perhaps the 2004-2005
> timeframe, or perhaps even before the standardization of the original
> 12 bit VLAN tag, let alone what has come since.
> 
> ;-)
> Dave
> 
> [Snip]

From adalela@cisco.com  Thu Jan  5 03:18:55 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 2C87221F8661 for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 03:18:55 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -0.406
X-Spam-Level: 
X-Spam-Status: No, score=-0.406 tagged_above=-999 required=5 tests=[AWL=-1.796, BAYES_00=-2.599, CN_BODY_35=0.339, J_CHICKENPOX_13=0.6, J_CHICKENPOX_43=0.6, MIME_CHARSET_FARAWAY=2.45]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id TCnnSxTyfdWl for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 03:18:53 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id 357BB21F864B for <dc@ietf.org>; Thu,  5 Jan 2012 03:18:51 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=13483; q=dns/txt; s=iport; t=1325762332; x=1326971932; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=8rOdluwSnfHWtvF1iL7c4Y62KA2tpi+8PdXTXrR/9Dc=; b=Tt3XXJTHeTFcs2n+MjLrslgwg35UkyEu7DOY3phEK1zXDb8S7dcRt4pl IgouX2DpTUcB7OMfGNuyqbCVldTtnOi5GNmkRHyl46G8xl582qmsYXLRs P7u5SbAvdqwzSvQ1FOJvYopceAPb4gFtQAgnb5q0KUSHxXj9/tk4wqeWP 0=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AqEEAJeFBU9Io8UY/2dsb2JhbABDhQ+ocoFyAQEBAwEBAQEPAR01CQIJBQcEAgEGAhEBAwEBBQYGFwEEAgEmHwMGCAIEAQoICBMHh1gIlycBjFcIkSIEgSuJTDdjBIg3iHaWFg
X-IronPort-AV: E=Sophos;i="4.71,461,1320624000";  d="scan'208";a="2841489"
Received: from vla196-nat.cisco.com (HELO bgl-core-3.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 05 Jan 2012 11:18:50 +0000
Received: from xbh-bgl-411.cisco.com (xbh-bgl-411.cisco.com [72.163.129.201]) by bgl-core-3.cisco.com (8.14.3/8.14.3) with ESMTP id q05BIoFA016808; Thu, 5 Jan 2012 11:18:50 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-411.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Thu, 5 Jan 2012 16:48:50 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="gb2312"
Content-Transfer-Encoding: quoted-printable
Date: Thu, 5 Jan 2012 16:48:46 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B25BD4@XMB-BGL-416.cisco.com>
In-Reply-To: <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE7653B3@szxeml525-mbs.china.huawei.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] draft-dalela-dc-requirements-00.txt
Thread-Index: AQHMywrHhEgV5yAHFkOyfFdfFdu+BpX8COwAgADJnICAALrPAIAAD4ew
References: <201201041759.q04Hx68f009409@cichlid.raleigh.ibm.com><60C093A41B5E45409A19D42CF7786DFD5228FC5031@EUSAACMS0703.eamcs.ericsson.se><618BE8B40039924EB9AED233D4A09C5102B25ABD@XMB-BGL-416.cisco.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE7653B3@szxeml525-mbs.china.huawei.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Xuxiaohu" <xuxiaohu@huawei.com>, "David Allan I" <david.i.allan@ericsson.com>, "Thomas Narten" <narten@us.ibm.com>
X-OriginalArrivalTime: 05 Jan 2012 11:18:50.0189 (UTC) FILETIME=[CCB81FD0:01CCCB9B]
Cc: dc@ietf.org
Subject: Re: [dc] draft-dalela-dc-requirements-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 11:18:55 -0000

Hi Xuxiaohu,

I think there is some confusion about the purpose of these drafts.

dc-requirements - defines the problems that need to be solved for
datacenter users. It doesn't yet imply anything to be done by the DC IETF
WG, as these problems *may* already have been solved. If that were the
case, we wouldn't have the draft and the discussion. So, on to the next one.

dc-approaches - analyzes current approaches to solving dc-requirements,
and identifies their problems (these are second-order problems that
arise when you try to solve the original problem). That takes us to the
problem statement for the DC WG, because current solutions are not
optimal.

So - you have two tiers of problems:

- First, the ones that we originally need to solve - the GOALS.
- Second, the issues that you need to fix in trying to solve the
original problem - the ISSUES.

Can we just describe ISSUES, and not GOALS? Well, it's important to keep
the GOALS in mind when trying to address ISSUES, because we might address
one of the ISSUES in a way that misses the overall GOALS. Discussions on
this alias have been about both GOALS and ISSUES.

Hope this is helpful. I look forward to your comments. I believe both
GOALS and ISSUES can be clarified further.

Thanks, Ashish


-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Xuxiaohu
Sent: Thursday, January 05, 2012 3:30 PM
To: Ashish Dalela (adalela); David Allan I; Thomas Narten
Cc: dc@ietf.org
Subject: Re: [dc] draft-dalela-dc-requirements-00.txt


> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Ashish
> Dalela (adalela)
> Sent: 5 January 2012 14:58
> To: David Allan I; Thomas Narten
> Cc: dc@ietf.org
> Subject: Re: [dc] draft-dalela-dc-requirements-00.txt
> 
> David,
> 
> A total of 1 sentence was dedicated to the L2 problem - "L2 networks
> can't be made to scale because of a high number of broadcasts." And Linda
> has commented on this that the scaling issue is also due to MAC
> summarization; modifications need to be done. There is absolutely no
> discussion about pre-VLAN days, and I would like to know where you see
> it. Broadcast is contained by VLAN, and we are talking of that.
> 
> L2VPN is another story - because there was VPLS, and then there are lots
> of new things. There are problems to be solved, and then problems to be
> solved given some solution. So, the problem boundary shifts from the
> time you take something as a "given". People generally take OSPF (or
> some L3 routing protocol) and VLAN as a given in the datacenter. The rest
> is not a "given". So, we have to start from what is given.

To help understand whether there is any specific IETF work that needs
to be done, it's worthwhile to evaluate whether the existing and proven
VPN technologies could be used in the data center to solve the problems
faced by "the given solutions", as you mentioned. For instance, could the
VPLS solution address the scaling issue associated with the VLAN+STP
solution, or could the host-route-based L3VPN solution solve the
problem associated with the host-route-based OSPF solution... If not,
what's missing?

Best regards,
Xiaohu

> I'm fine if we want to change the given to something else. Let me know
> what we think is the given today.
> 
> Thanks, Ashish
> 
> -----Original Message-----
> From: David Allan I [mailto:david.i.allan@ericsson.com]
> Sent: Thursday, January 05, 2012 12:26 AM
> To: Thomas Narten; Ashish Dalela (adalela)
> Cc: dc@ietf.org
> Subject: RE: [dc] draft-dalela-dc-requirements-00.txt
> 
> 
> If the goal is to describe the generalized characteristics of what is
> needed:
> An absolutely flat broadcast domain does not scale...duh!
> An absolutely flat L2 network does not scale...duh!
> Partitioning the network into a large number of virtual broadcast
> domains or L2VPNs/VLANs is what works for many adopters, as it supports
> PMO. This is what numerous existing standardized and proprietary
> solutions offer, with various shades of grey attribute-wise (e.g.
> scaling, ordering guarantees, properties when failures occur, broadcast
> containment, etc.). The one observation is that a 24-bit VLAN tag seems
> to be the current gold standard, both with the IEEE and with proprietary
> or proposed approaches.
> 
> It would be doing the group a disservice if the issues with Ethernet were
> not presented based on a view stuck in perhaps the 2004-2005 timeframe,
> or perhaps even before the standardization of the original 12-bit VLAN
> tag, let alone what has come since.
> 
> ;-)
> Dave
> 
> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> Thomas Narten
> Sent: Wednesday, January 04, 2012 9:59 AM
> To: Ashish Dalela (adalela)
> Cc: dc@ietf.org
> Subject: [dc] draft-dalela-dc-requirements-00.txt
> 
> Hi Ashish.
> 
> I had a look at this document, as it is focused on requirements. Thanks
> for doing this.
> 
> One starting comment, as the document says:
> 
> >    Scalability hasn't generally been a standards consideration and the
> >    problems of scaling are left to implementation. But, in the case of
> >    cloud datacenters, scaling is the basic requirement, and all problems
> >    of cloud datacenters arise due to scaling. The solution development
> >    can't therefore ignore the scaling and optimality problem.
> 
> I disagree with the above. Scalability has always been one (of many)
> factors that go into the development of a standard. Let's just take it as
> a given that any solution has to scale adequately for the environment in
> which it is to be deployed. Saying more than that (in general terms) is
> probably not a useful discussion. To talk about scalability, one has to
> talk about a specific technology and where it is or will be deployed.
> 
> Looking at Section 5, where the main requirements are listed:
> 
> >    5.1. The Basic Forwarding Problem
> >
> >    Traditionally, datacenter networks have used L2 or L3 technologies.
> >    The need to massively scale virtualized hosts breaks both these
> >    approaches. L2 networks can't be made to scale because of the high
> >    number of broadcasts. L3 networks can't support host mobility, since
> >    routing uses subnets and an IP address cannot be moved out of its
> >    subnet. Moving an IP address in a natively L3 network requires
> >    installing host routes at one or more points in the path, and that
> >    is an approach that can't be scaled.
> 
> I suspect there is general agreement that the above is a general
> "problem". Having one big flat L2 domain in a data center is great for VM
> migration and placement of services "any place, anytime", but can raise
> scaling and other concerns. Pushing L3 all the way out to the edges
> (e.g., ToR or Hypervisor) makes it hard to place (or move) services/VMs
> arbitrarily.
> 
> The above is one of the motivations behind the NVO3 work.
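Thomas's question about why host routes "can't be scaled" in 5.1 can be made concrete with a back-of-the-envelope sketch. All numbers and the `routing_entries` helper below are hypothetical illustrations, not figures from the draft:

```python
# Sketch (hypothetical numbers): with subnet aggregation a core router
# holds one prefix per rack, but every VM that migrates out of its home
# subnet needs its own host route, so the table grows with mobility.

def routing_entries(racks, vms_per_rack, mobile_fraction):
    """Entries a core router holds: one aggregate per rack, plus one
    host route for every VM that has moved out of its home subnet."""
    aggregates = racks
    host_routes = int(racks * vms_per_rack * mobile_fraction)
    return aggregates + host_routes

# No mobility: pure subnet aggregation.
static = routing_entries(racks=500, vms_per_rack=2000, mobile_fraction=0.0)

# 10% of one million VMs have migrated: host routes dominate the table.
mobile = routing_entries(racks=500, vms_per_rack=2000, mobile_fraction=0.1)

print(static)  # 500 aggregate routes
print(mobile)  # 100500 entries -- roughly a 200x blow-up from 10% mobility
```

The point is that the table cost scales with the number of moved VMs rather than with the topology size, which is why aggregation breaks down.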
> 
> > 5.2. The Datacenter Inter-Connectivity Problem
> >
> >    There are limits to how much a datacenter can be scaled. Workloads
> >    need to be placed closer to the clients to reduce latency and
> >    bandwidth. Hence, datacenters need to be split across geographical
> >    locations and connected over the Internet. Some of these datacenters
> >    may be owned by different administrators, as in the case of private
> >    and public cloud interconnectivity. Workloads can move between these
> >    datacenters, similar to how they move within the datacenter.
> 
> In this section, my takeaway is that there will be multiple,
> geographically separated data centers, and that they will need to be
> connected together. I suspect everyone agrees with that.
> 
> But I don't see how this implies there is any specific IETF work that
> needs doing. We already have geographically separated data centers, and
> there are, e.g., plenty of VPN technologies available for connecting
> them together.
> 
> What specifically is missing that prevents the above from being done
> today? What is it that you think needs doing that can't be done with
> existing standards?
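For concreteness, the interconnect Thomas points to can be as simple as an IP-in-IP tunnel between datacenter gateways (the subject of the earlier "IP over IP" thread on this list). A minimal sketch of RFC 2003 encapsulation follows; the function name and addresses are illustrative, not taken from any draft under discussion:

```python
import struct

def ipip_encapsulate(inner_packet: bytes, outer_src: str, outer_dst: str) -> bytes:
    """Wrap an inner IPv4 packet in an outer IPv4 header carrying
    protocol number 4 (IP-in-IP, RFC 2003)."""
    total_len = 20 + len(inner_packet)
    src = bytes(int(o) for o in outer_src.split("."))
    dst = bytes(int(o) for o in outer_dst.split("."))
    # version/IHL=0x45, TOS, total length, ID, flags/frag offset,
    # TTL=64, protocol=4, checksum placeholder, source, destination
    hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, total_len, 0, 0, 64, 4, 0, src, dst)
    # RFC 1071 one's-complement checksum over the outer header
    csum = sum(struct.unpack("!10H", hdr))
    csum = (csum & 0xFFFF) + (csum >> 16)
    csum = (csum & 0xFFFF) + (csum >> 16)
    csum = ~csum & 0xFFFF
    return hdr[:10] + struct.pack("!H", csum) + hdr[12:] + inner_packet

# Tunnel a (dummy) 20-byte inner packet between two gateway addresses.
pkt = ipip_encapsulate(b"\x45" + b"\x00" * 19, "192.0.2.1", "198.51.100.1")
```

Whether this plain form, GRE, or an L2/L3 VPN is appropriate is exactly the gap question being asked here.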
> 
> > 5.3. The Multi-Tenancy Problem
> >
> >    Datacenters thus far have been wholly used by a single tenant. To
> >    separate departments within a tenant, VLANs have been used. This
> >    seemed sufficient for the number of segments an enterprise would
> >    need. But, this approach can't be extended to cloud datacenters.
> 
> I suspect you'll get a lot of agreement on this. And one of the key aims
> of NVO3 is to address this.
> 
> Is the existing NVO3 approach not adequate for the above? If so, why not?
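The VLAN limitation behind 5.3 is, at bottom, an ID-space limit. The arithmetic is quick to check (IEEE 802.1Q reserves VIDs 0 and 4095; the 24-bit figure matches the "gold standard" Dave mentions above):

```python
# The 12-bit 802.1Q VLAN ID yields 4096 values, two of which (0 and
# 4095) are reserved -- far too few when a cloud needs one or more
# segments per tenant. A 24-bit virtual-network identifier, as used by
# several proposed encapsulations, removes that ceiling.

VLAN_ID_BITS = 12
VNI_BITS = 24

usable_vlans = 2**VLAN_ID_BITS - 2   # VIDs 0 and 4095 are reserved
vni_space = 2**VNI_BITS

print(usable_vlans)  # 4094 segments -- exhausted by a few thousand tenants
print(vni_space)     # 16777216 segments
```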
> 
> > 5.4. The Technology-Topology Separation Problem
> >
> >    While large datacenters are becoming common, medium and small
> >    datacenters will continue to exist. These may include a branch office
> >    connected to a central office, or a small enterprise datacenter that
> >    is connected to a huge public cloud. To move workloads across these
> >    networks, the technologies used in the datacenter must be agnostic of
> >    the topology employed in the various-sized datacenters.
> 
> >    A small datacenter may use a mesh topology. A medium datacenter may
> >    use a three-tier topology. And a large datacenter may use a two-tier
> >    multi-path architecture. It has to be recognized that all these
> >    datacenters of various sizes need to interoperate. In particular, it
> >    should be possible to use a common technology to connect large and
> >    small datacenters, two large datacenters, or two small datacenters.
> 
> Isn't this already possible, and indeed, happening today?
> 
> What IETF work is needed? What standards gap needs filling?
> 
> >    5.5. The Network Convergence Problem
> >
> >    Cloud datacenters will be characterized by elasticity. That means
> >    that virtual resources are constantly created and destroyed. Typical
> >    hardware and software reliabilities of today mean that failures at
> >    scale will be fairly common, and automated recovery mechanisms will
> >    need to be put in place. When combined with workload mobility for the
> >    sake of resource optimization and improved utilization, the churn in
> >    the network forwarding tables can be very significant.
> 
> What work does the above imply that the IETF needs to do?
> 
> >    Mobility also affects virtualized network devices, such as virtual
> >    switches, firewalls, load-balancers, etc. For instance, when a server
> >    fails and all the VMs are relocated, the associated virtual switch
> >    and firewall must also be relocated. This means that any assumption
> >    in mobility that the network is a static firmament on which hosts
> >    are dynamically attached becomes false. We have to assume that the
> >    network is as dynamic as the hosts themselves.
> 
> This here is interesting. The implication is that when moving a VM,
> either
> 
> a) a FW or LB (or both) may also have to be moved, or
> 
> b) some sort of path enforcement is needed that ensures traffic from the
> (now moved) VM continues to go through the same LB or FW as before.
> 
> Do I understand that correctly? And if so, what is the IETF work that
> needs to be done to make all this happen?
> 
> >  5.6. The East-West Traffic Problem
> 
> Is this section saying anything more than that there is a need for
> multipathing for East-West traffic?
> 
> > 5.7. The Network SLA Problem
> >
> >    Multi-tenant networks need to protect all tenants from overusing
> >    network resources. For example, a high traffic load from one tenant
> >    should not starve another tenant of bandwidth. Note that in a multi-
> >    tenant environment, no tenant has full control or visibility of what
> >    other tenants are doing, or of how problems can be fixed. Real-time
> >    debugging of such problems is very hard for a provider.
> 
> ...
> 
> >    Second, mechanisms to measure and guarantee network SLAs will have to
> >    employ active flow management to guarantee bandwidth to all tenants
> >    and keep the network provisioned only to the level required. Flow
> >    management can be integrated as part of existing forwarding
> >    techniques or may need new techniques. Network SLAs can play an
> >    important role in determining if sufficient bandwidth is available
> >    before a VM is moved to a new location.
> 
> Can this not be done today? What specific IETF work would be needed to
> support the enforcement of SLAs?
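One existing answer to "can this not be done today?" is per-tenant policing, e.g. a token bucket that caps each tenant's rate so one tenant's burst cannot starve another. A minimal sketch; the class name and the rates are illustrative, not a mechanism from the draft:

```python
# Per-tenant token-bucket policer: tokens refill at the tenant's
# contracted rate; a packet is admitted only if enough tokens remain.

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0   # refill rate in bytes per second
        self.capacity = burst_bytes  # maximum burst size in bytes
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, packet_bytes: int, now: float) -> bool:
        """Admit the packet if the tenant has enough tokens at time `now`."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False

# Tenant capped at 8 kbit/s (1000 bytes/s) with a 1500-byte burst.
bucket = TokenBucket(rate_bps=8000, burst_bytes=1500)
print(bucket.allow(1500, now=0.0))  # True: within the burst allowance
print(bucket.allow(1500, now=0.5))  # False: only 500 bytes refilled
print(bucket.allow(1500, now=1.5))  # True: bucket refilled to capacity
```

What this does not give is admission control across the fabric (e.g. checking bandwidth before a VM move), which may be where the standards gap lies.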
> 
> Thomas
> 
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc
_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From lars@netapp.com  Thu Jan  5 06:34:29 2012
Return-Path: <lars@netapp.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 5F1D021F85DA for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 06:34:29 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -10.566
X-Spam-Level: 
X-Spam-Status: No, score=-10.566 tagged_above=-999 required=5 tests=[AWL=0.033, BAYES_00=-2.599, RCVD_IN_DNSWL_HI=-8]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id QiIIvYx-D7EE for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 06:34:28 -0800 (PST)
Received: from mx2.netapp.com (mx2.netapp.com [216.240.18.37]) by ietfa.amsl.com (Postfix) with ESMTP id AD6A721F85BD for <dc@ietf.org>; Thu,  5 Jan 2012 06:34:28 -0800 (PST)
X-IronPort-AV: E=Sophos;i="4.71,461,1320652800";  d="p7s'?scan'208";a="613660990"
Received: from smtp1.corp.netapp.com ([10.57.156.124]) by mx2-out.netapp.com with ESMTP; 05 Jan 2012 06:34:07 -0800
Received: from svlrsexc1-prd.hq.netapp.com (svlrsexc1-prd.hq.netapp.com [10.57.115.30]) by smtp1.corp.netapp.com (8.13.1/8.13.1/NTAP-1.6) with ESMTP id q05EY6KL012684; Thu, 5 Jan 2012 06:34:07 -0800 (PST)
Received: from SACMVEXC4-PRD.hq.netapp.com ([10.99.115.25]) by svlrsexc1-prd.hq.netapp.com with Microsoft SMTPSVC(6.0.3790.4675);  Thu, 5 Jan 2012 06:33:55 -0800
Received: from VMWEXCEHT05-PRD.hq.netapp.com ([10.106.77.35]) by SACMVEXC4-PRD.hq.netapp.com with Microsoft SMTPSVC(6.0.3790.3959);  Thu, 5 Jan 2012 06:33:55 -0800
Received: from SACEXCMBX01-PRD.hq.netapp.com ([169.254.2.211]) by vmwexceht05-prd.hq.netapp.com ([10.106.77.35]) with mapi id 14.01.0355.002; Thu, 5 Jan 2012 06:33:54 -0800
From: "Eggert, Lars" <lars@netapp.com>
To: Ronald Bonica <rbonica@juniper.net>
Thread-Topic: [dc] interim dates decided?
Thread-Index: AQHMxHKmKTq3rKgeQE2pr9b6CAG3SZXyrjIAgADxFwCACswGgA==
Date: Thu, 5 Jan 2012 14:33:54 +0000
Message-ID: <3F24AB27-AB9D-48BE-AD66-25133FD4D588@netapp.com>
References: <4EF98391.6010500@piuha.net> <DD803EEF-7ED8-455E-B8BD-0F05279814A1@kumari.net> <13205C286662DE4387D9AF3AC30EF456D74EB56784@EMBX01-WF.jnpr.net>
In-Reply-To: <13205C286662DE4387D9AF3AC30EF456D74EB56784@EMBX01-WF.jnpr.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.106.53.51]
Content-Type: multipart/signed; boundary="Apple-Mail=_AA7A5A82-7E95-4C00-B698-6A7A8BCBCEAA"; protocol="application/pkcs7-signature"; micalg=sha1
MIME-Version: 1.0
X-OriginalArrivalTime: 05 Jan 2012 14:33:55.0146 (UTC) FILETIME=[0D6AD2A0:01CCCBB7]
Cc: "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] interim dates decided?
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 14:34:29 -0000

--Apple-Mail=_AA7A5A82-7E95-4C00-B698-6A7A8BCBCEAA
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=us-ascii

Hi,

On Dec 29, 2011, at 18:41, Ronald Bonica wrote:
> The Doodle Poll indicates that February 22-23 is the most popular date,
> followed by February 20-21.
> 
> We are currently evaluating an offer to host the meeting in suburban
> Boston on February 21-22. If you indicated that you would be available
> on February 20-21 or February 22-23, but cannot be available on
> February 21-22, please send me unicast email.
> 
> Please *do not make your travel reservations yet*. We are still
> evaluating whether the proposed venue will work for us. If it doesn't
> work, we may meet somewhere else.

Is there a date and place now? It'd be nice to be able to book travel
sooner rather than later.

Thanks,
Lars

--Apple-Mail=_AA7A5A82-7E95-4C00-B698-6A7A8BCBCEAA
Content-Disposition: attachment; filename="smime.p7s"
Content-Type: application/pkcs7-signature; name="smime.p7s"
Content-Transfer-Encoding: base64

MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEHAQAAoIIMQDCCBUow
ggQyoAMCAQICEFcfSRTG0jNknqb9LV9GuFkwDQYJKoZIhvcNAQEFBQAwgd0xCzAJBgNVBAYTAlVT
MRcwFQYDVQQKEw5WZXJpU2lnbiwgSW5jLjEfMB0GA1UECxMWVmVyaVNpZ24gVHJ1c3QgTmV0d29y
azE7MDkGA1UECxMyVGVybXMgb2YgdXNlIGF0IGh0dHBzOi8vd3d3LnZlcmlzaWduLmNvbS9ycGEg
KGMpMDkxHjAcBgNVBAsTFVBlcnNvbmEgTm90IFZhbGlkYXRlZDE3MDUGA1UEAxMuVmVyaVNpZ24g
Q2xhc3MgMSBJbmRpdmlkdWFsIFN1YnNjcmliZXIgQ0EgLSBHMzAeFw0xMTEyMTAwMDAwMDBaFw0x
MjEyMDkyMzU5NTlaMIIBDTEXMBUGA1UEChMOVmVyaVNpZ24sIEluYy4xHzAdBgNVBAsTFlZlcmlT
aWduIFRydXN0IE5ldHdvcmsxRjBEBgNVBAsTPXd3dy52ZXJpc2lnbi5jb20vcmVwb3NpdG9yeS9S
UEEgSW5jb3JwLiBieSBSZWYuLExJQUIuTFREKGMpOTgxHjAcBgNVBAsTFVBlcnNvbmEgTm90IFZh
bGlkYXRlZDEzMDEGA1UECxMqRGlnaXRhbCBJRCBDbGFzcyAxIC0gTmV0c2NhcGUgRnVsbCBTZXJ2
aWNlMRQwEgYDVQQDFAtMYXJzIEVnZ2VydDEeMBwGCSqGSIb3DQEJARYPbGFyc0BuZXRhcHAuY29t
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAokrhJTcXt6J/VEpZOicLoguBlYTjXP9v
Ze4HuuhXnURUS8YouAfgaqA0zYbt5yd6fh4PBMdAaEWr5yJyHuFykXlrCumjUWSpLuqTS2A+pt4q
cZaAQk9iLDN/UVd3SpkUuvWbxXlqzG7/BSqa3VNObBzCmyh+V7aXxri+30CT//DSsNRC4VFy6sn6
dMgSaFenXLwe/FBwY0qTMfICT1PrrX6Sw1S8OfH9rykLlZXbmfkFExxQngp1DJH9xMHeODHGbCv/
ty5gdxMOrLe+vENxFEcy1YQWBZd1kNL4UObugF8A/jE/s+Oa3H1VFH8ghqZTdqGDysVxmtKHuNFx
6jIBSQIDAQABo4HSMIHPMAkGA1UdEwQCMAAwRAYDVR0gBD0wOzA5BgtghkgBhvhFAQcXATAqMCgG
CCsGAQUFBwIBFhxodHRwczovL3d3dy52ZXJpc2lnbi5jb20vcnBhMAsGA1UdDwQEAwIFoDAdBgNV
HSUEFjAUBggrBgEFBQcDBAYIKwYBBQUHAwIwUAYDVR0fBEkwRzBFoEOgQYY/aHR0cDovL2luZGMx
ZGlnaXRhbGlkLWczLWNybC52ZXJpc2lnbi5jb20vSW5kQzFEaWdpdGFsSUQtRzMuY3JsMA0GCSqG
SIb3DQEBBQUAA4IBAQBA7q6tR92qpd7xo7VBsrOfGCWzoxIVfTc7t0RhB/Oz/+c3lnhYnNScIuKN
JmyZvznmVxqB9BJ72+NkvmdB/hnILSBTRawL2tyLo9PkBtN0nRt4gS6wjpWnD8G83hlJLE7r25jk
7HkRev61dTIXsANFpJKF02C4XSoDfEzNV6MpuEvHvcgHCqMrlwWwfKc7+NoDnE8PBuRzwSXvlD5L
mswCY2iiOsd7ImNO4OzTCxETvKTDu92+FTIbRJJpYjVNv1UF7e3w9Kq65BkZJErUH19beUeQl0Wh
2BJQE6/15rQyCnP0iJ/Nmx2/kI6M0PWunEsI6FMs0MbosreaWGHlQmomMIIG7jCCBdagAwIBAgIQ
cRVmBUrkkSFN6bxE+azT3DANBgkqhkiG9w0BAQUFADCByjELMAkGA1UEBhMCVVMxFzAVBgNVBAoT
DlZlcmlTaWduLCBJbmMuMR8wHQYDVQQLExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTowOAYDVQQL
EzEoYykgMTk5OSBWZXJpU2lnbiwgSW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5MUUwQwYD
VQQDEzxWZXJpU2lnbiBDbGFzcyAxIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9y
aXR5IC0gRzMwHhcNMDkwNTAxMDAwMDAwWhcNMTkwNDMwMjM1OTU5WjCB3TELMAkGA1UEBhMCVVMx
FzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQLExZWZXJpU2lnbiBUcnVzdCBOZXR3b3Jr
MTswOQYDVQQLEzJUZXJtcyBvZiB1c2UgYXQgaHR0cHM6Ly93d3cudmVyaXNpZ24uY29tL3JwYSAo
YykwOTEeMBwGA1UECxMVUGVyc29uYSBOb3QgVmFsaWRhdGVkMTcwNQYDVQQDEy5WZXJpU2lnbiBD
bGFzcyAxIEluZGl2aWR1YWwgU3Vic2NyaWJlciBDQSAtIEczMIIBIjANBgkqhkiG9w0BAQEFAAOC
AQ8AMIIBCgKCAQEA7cRH3yooHXwGa7vXITLJbBOP6bGNQU4099oL42r6ZYggCxET6ZvgSU6Lb9UB
0F8NR5GKWkx0Pj/GkQm7TDSejW6hglFi92l2WJYHr54UGAdPWr2f0jGyVBlzRmoZQhHsEnMhjfXc
MM3l2VYKMcU2bSkUl70t2olHGYjYSwQ967Y8Zx50ABMN0Ibak2f4MwOuGjxraXj2wCyO4YM/d/mZ
//6fUlrCtIcK2GypR8FUKWVDPkrAlh/Brfd3r2yxBF6+wbaULZeQLSfSux7pg2qE9sSyriMGZSal
J1grByK0b6ZiSBp38tVQJ5op05b7KPW6JHZi44xZ6/tu1ULEvkHH9QIDAQABo4ICuTCCArUwNAYI
KwYBBQUHAQEEKDAmMCQGCCsGAQUFBzABhhhodHRwOi8vb2NzcC52ZXJpc2lnbi5jb20wEgYDVR0T
AQH/BAgwBgEB/wIBADBwBgNVHSAEaTBnMGUGC2CGSAGG+EUBBxcBMFYwKAYIKwYBBQUHAgEWHGh0
dHBzOi8vd3d3LnZlcmlzaWduLmNvbS9jcHMwKgYIKwYBBQUHAgIwHhocaHR0cHM6Ly93d3cudmVy
aXNpZ24uY29tL3JwYTA0BgNVHR8ELTArMCmgJ6AlhiNodHRwOi8vY3JsLnZlcmlzaWduLmNvbS9w
Y2ExLWczLmNybDAOBgNVHQ8BAf8EBAMCAQYwbgYIKwYBBQUHAQwEYjBgoV6gXDBaMFgwVhYJaW1h
Z2UvZ2lmMCEwHzAHBgUrDgMCGgQUS2u5KJYGDLvQUjibKaxLB4shBRgwJhYkaHR0cDovL2xvZ28u
dmVyaXNpZ24uY29tL3ZzbG9nbzEuZ2lmMC4GA1UdEQQnMCWkIzAhMR8wHQYDVQQDExZQcml2YXRl
TGFiZWw0LTIwNDgtMTE4MB0GA1UdDgQWBBR5R2EIQf04BKJL57XM9UP2SSsR+DCB8QYDVR0jBIHp
MIHmoYHQpIHNMIHKMQswCQYDVQQGEwJVUzEXMBUGA1UEChMOVmVyaVNpZ24sIEluYy4xHzAdBgNV
BAsTFlZlcmlTaWduIFRydXN0IE5ldHdvcmsxOjA4BgNVBAsTMShjKSAxOTk5IFZlcmlTaWduLCBJ
bmMuIC0gRm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxRTBDBgNVBAMTPFZlcmlTaWduIENsYXNzIDEg
UHVibGljIFByaW1hcnkgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkgLSBHM4IRAItbdVaEVIULAM+v
OEjOsaQwDQYJKoZIhvcNAQEFBQADggEBADlNz0GZgbWpBbVSOOk5hIls5DSoWufYbAlMJBq6WaSH
O3Mh8ZOBz79oY1pn/jWFK6HDXaNKwjoZ3TDWzE3v8dKBl8pUWkO/N4t6jhmND0OojPKvYLMVirOV
nDzgnrMnmKQ1chfl/Cpdh9OKDcLRRSr4wPSsKpM61a4ScAjr+zvid+zoK2Q1ds262uDRyxTWcVib
vtU+fbbZ6CTFJGZMXZEfdrMXPn8NxiGJL7M3uKH/XLJtSd5lUkL7DojS7Uodv0vj+Mxy+kgOZY5J
yNb4mZg7t5Q+MXEGh/psWVMu198r7V9jAKwV7QO4VRaMxmgD5yKocwuxvKDaUljdCg5/wYIxggSL
MIIEhwIBATCB8jCB3TELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYD
VQQLExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTswOQYDVQQLEzJUZXJtcyBvZiB1c2UgYXQgaHR0
cHM6Ly93d3cudmVyaXNpZ24uY29tL3JwYSAoYykwOTEeMBwGA1UECxMVUGVyc29uYSBOb3QgVmFs
aWRhdGVkMTcwNQYDVQQDEy5WZXJpU2lnbiBDbGFzcyAxIEluZGl2aWR1YWwgU3Vic2NyaWJlciBD
QSAtIEczAhBXH0kUxtIzZJ6m/S1fRrhZMAkGBSsOAwIaBQCgggJtMBgGCSqGSIb3DQEJAzELBgkq
hkiG9w0BBwEwHAYJKoZIhvcNAQkFMQ8XDTEyMDEwNTE0MzM1NFowIwYJKoZIhvcNAQkEMRYEFAZP
xab35zeAtR/A0oBUOjzulmzgMIIBAwYJKwYBBAGCNxAEMYH1MIHyMIHdMQswCQYDVQQGEwJVUzEX
MBUGA1UEChMOVmVyaVNpZ24sIEluYy4xHzAdBgNVBAsTFlZlcmlTaWduIFRydXN0IE5ldHdvcmsx
OzA5BgNVBAsTMlRlcm1zIG9mIHVzZSBhdCBodHRwczovL3d3dy52ZXJpc2lnbi5jb20vcnBhIChj
KTA5MR4wHAYDVQQLExVQZXJzb25hIE5vdCBWYWxpZGF0ZWQxNzA1BgNVBAMTLlZlcmlTaWduIENs
YXNzIDEgSW5kaXZpZHVhbCBTdWJzY3JpYmVyIENBIC0gRzMCEFcfSRTG0jNknqb9LV9GuFkwggEF
BgsqhkiG9w0BCRACCzGB9aCB8jCB3TELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJ
bmMuMR8wHQYDVQQLExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTswOQYDVQQLEzJUZXJtcyBvZiB1
c2UgYXQgaHR0cHM6Ly93d3cudmVyaXNpZ24uY29tL3JwYSAoYykwOTEeMBwGA1UECxMVUGVyc29u
YSBOb3QgVmFsaWRhdGVkMTcwNQYDVQQDEy5WZXJpU2lnbiBDbGFzcyAxIEluZGl2aWR1YWwgU3Vi
c2NyaWJlciBDQSAtIEczAhBXH0kUxtIzZJ6m/S1fRrhZMA0GCSqGSIb3DQEBAQUABIIBAA3WiBxK
f5qLQBJOScixfgkqMiEdUZXgjv6JQaVt5o2l1LGK1sWyatkf2XK4BQVsFoj/Kt8e+RSLsBz1Zm2S
+tsYnzzg0gcFitHjfOxouWMt2PobDuS++Ym2xQQR7pdzyc3ILcD9eVXB7h9ROrtC7lfD0vTQvBUi
n8+jIO4oHozsqYTkqqaz5tQJlvadw9bteRR+DvlbP61/k4Mte4Jra4w2/u6BopHcxLJ3OoUGf4U9
lJhzRrGqMC3vnSfcyVYLKxMANiZ2u5cbVsOehXORgvuxmzQrNUefIuVMJD1V6ttBHqBjgUP43Xi7
7LTI5IImP5WoHi1GOW++Zjcuqb1lRusAAAAAAAA=

--Apple-Mail=_AA7A5A82-7E95-4C00-B698-6A7A8BCBCEAA--

From lizhong.jin@zte.com.cn  Thu Jan  5 07:26:47 2012
Return-Path: <lizhong.jin@zte.com.cn>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 5515321F8759 for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 07:26:47 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -100.804
X-Spam-Level: 
X-Spam-Status: No, score=-100.804 tagged_above=-999 required=5 tests=[AWL=-1.034, BAYES_00=-2.599, HTML_MESSAGE=0.001, MIME_BASE64_TEXT=1.753, RCVD_DOUBLE_IP_LOOSE=0.76, SARE_MILLIONSOF=0.315, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 4VftbysKGfwE for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 07:26:46 -0800 (PST)
Received: from mx5.zte.com.cn (mx6.zte.com.cn [95.130.199.165]) by ietfa.amsl.com (Postfix) with ESMTP id 109D221F8760 for <dc@ietf.org>; Thu,  5 Jan 2012 07:26:44 -0800 (PST)
Received: from [10.30.17.100] by mx5.zte.com.cn with surfront esmtp id 56690122734555; Thu, 5 Jan 2012 23:06:04 +0800 (CST)
Received: from [10.30.3.21] by [192.168.168.16] with StormMail ESMTP id 95768.4095645156; Thu, 5 Jan 2012 23:26:18 +0800 (CST)
Received: from notes_smtp.zte.com.cn ([10.30.1.239]) by mse02.zte.com.cn with ESMTP id q05FQM6R021099; Thu, 5 Jan 2012 23:26:22 +0800 (GMT-8) (envelope-from lizhong.jin@zte.com.cn)
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B25961@XMB-BGL-416.cisco.com>
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
MIME-Version: 1.0
X-Mailer: Lotus Notes Release 6.5.4 March 27, 2005
Message-ID: <OF92C6E01A.E44BD605-ON4825797C.0053B413-4825797C.0054D077@zte.com.cn>
From: Lizhong Jin<lizhong.jin@zte.com.cn>
Date: Thu, 5 Jan 2012 23:26:06 +0800
X-MIMETrack: Serialize by Router on notes_smtp/zte_ltd(Release 8.5.1FP4|July 25, 2010) at 2012-01-05 23:26:26, Serialize complete at 2012-01-05 23:26:26
Content-Type: multipart/alternative; boundary="=_alternative 0054D0764825797C_="
X-MAIL: mse02.zte.com.cn q05FQM6R021099
Cc: yakov@juniper.net, robert@raszuk.net, dc@ietf.org, aldrin.isaac@gmail.com
Subject: Re: [dc] new drafts
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 15:26:47 -0000

This is a multipart message in MIME format.
--=_alternative 0054D0764825797C_=
Content-Type: text/plain; charset="GB2312"
Content-Transfer-Encoding: base64

SGkgQXNoaXNoLA0KVGhhbmsgeW91IGZvciB0aGUgYW5hbHlzaXMuIFRoZSBudW1iZXIgb2YgaG9z
dCByb3V0ZXMgaW4geW91ciBleGFtcGxlIA0KcmVhY2hlcyAyTSwgYmVjYXN1ZSB5b3UgYXNzdW1l
IGVhY2ggVk0gd2l0aCB0d28gVklGLiBJZiBlYWNoIHJvdXRlIHBlcmZvcm0gDQo0IHBhdGggZm9y
IEVDTVAsIHRoZSB0b3RhbCByb3V0ZSB3aWxsIHJlYWNoIDhNIGFzIHlvdSBkZXNjcmliZWQuIFRo
aXMgaXMgDQp0aGUgd29yc3QgY2FzZSwgYW5kIGF0IGxlYXN0LCB0aGUgc2NhbGFiaWxpdHkgb24g
YWdncmVnYXRpb24gcm91dGVyIHdvdWxkIA0KYmUgYmV0dGVyIHRoYW4gb24gYWNjZXNzLiBBbmQg
SSBhZ3JlZSwgdGhlIGNvbm5lY3Rpb24gYmV0d2VlbiBhY2Nlc3MgYW5kIA0KYWdncmVnYXRpb24g
c2hvdWxkIGJlIGNvb3JkaW5hdGVkIHdoZW4gVk0gbW92aW5nLCB3aGljaCBpcyBub3QgZWFzeSB0
byANCnNvbHZlLg0KDQpSZWdhcmRzDQpMaXpob25nDQoNCiANCg0KIkFzaGlzaCBEYWxlbGEgKGFk
YWxlbGEpIiA8YWRhbGVsYUBjaXNjby5jb20+IHdyb3RlIDIwMTIvMDEvMDUgMDA6MTY6MzQ6DQoN
Cj4gSGkgTGl6aG9uZywNCj4gDQo+IFdoZW4geW91IGRvIHRoaW5ncyBhdCB0aGUgYWdncmVnYXRp
b24geW91IG5lZWQgdHdvIHNldHMgb2YgZW5jYXBzIKhDIA0KPiBBZ2ctdG8tQWdnIGFuZCBBZ2ct
dG8tQWNjZXNzLiANCj4gDQo+IElmIHRoZXJlIGFyZSBOIGhvc3RzIHVuZGVyIGFuIGFjY2Vzcywg
YW5kIE0gYWNjZXNzZXMgdW5kZXIgYW4gQWdnLCANCj4gYW5kIFAgQWdncywgdGhlbiB5b3UgbmVl
ZCCoQyANCj4gDQo+IE4gKiBNIGZvciBhY2Nlc3MgdG8gYWdnDQo+IChQLTEpICogTiAqIE0gZm9y
IGFnZyB0byBhZ2cNCj4gVG90YWwgPSBOICogTSArIChQLTEpICogTiAqIE0gPSBOICogTSAqIFAN
Cj4gDQo+IFRoaXMgaXMgdGhlIHdvcnN0IGNhc2UuDQo+IA0KPiBGb3IgYSBiZXR0ZXIgY2FzZSwg
YXNzdW1lIGVhY2ggVk0gdGFsa3MgdG8gMjUgVk0gb3V0c2lkZSBpdHMgQWdnLg0KPiBUaGUgdG90
YWwgPSBOICogTSAqIDI1DQo+IA0KPiBMZXShr3MgcGx1ZyBzb21lIG51bWJlcnMgaW50byB0aGlz
Lg0KPiANCj4gRm9yIGEgNDggcG9ydCBhY2Nlc3MsIDUwIFZNIHBlciBwb3J0LCBlYWNoIHdpdGgg
MiBWSUYsIE4gPSA0OCAqIDUwICogMiA9IA0KNDgwMA0KPiANCj4gRm9yIGFuIGFnZyB3aXRoIDEw
MCBwb3J0cyAoNTAgZG93biBhbmQgNTAgdXApLCBNID0gNTANCj4gDQo+IEEgbWlsbGlvbiBWTSBy
ZXF1aXJlIDEsMDAwLDAwMCAvICg0OCAqIDUwKSA9IDQxNSBhY2Nlc3Nlcy4NCj4gDQo+IEVhY2gg
YWNjZXNzIGNvbm5lY3RzIHRvIDQgQWdncywgc28geW91IG5lZWQgNDE1ICogNCAvIDUwID0gMzMg
QWdncyA9IFAuDQo+IA0KPiBXb3JzdCBjYXNlIHRvdGFsIGVudHJpZXMgbmVlZGVkIGF0IGVhY2gg
QWdnID0gNDgwMCAqIDUwICogMzMgPSA3LDkyMCwwMDANCj4gQmV0dGVyIGNhc2UgdG90YWwgZW50
cmllcyBuZWVkZWQgYXQgZWFjaCBBZ2cgPSA0ODAwICogNTAgKiAyNSA9IA0KNiwwMDAsMDAwDQo+
IA0KPiBZb3UgbWF5IGJlIHdvbmRlcmluZyB3aHkgZm9yIDFNIFZNLCB3ZSBuZWVkIDYtOE0gaG9z
dCByb3V0ZXM/IEl0oa9zIA0KPiBiZWNhdXNlIG9mIG11bHRpLXBhdGhpbmcuIEEgZGVzdGluYXRp
b24gY2FuIGJlIHJlYWNoZWQgdGhyb3VnaCBtYW55IA0KPiBwYXRocywgc28geW91IHB1dCByb3V0
ZXMgZm9yIGFsbCBwYXRocy4NCj4gDQo+IFdoZW4geW91IGhhdmUgNi04TSByb3V0ZXMsIHlvdSBj
YW4gaW1hZ2luZSB0aGUgY29ycmVzcG9uZGluZyBjb250cm9sDQo+IHBsYW5lIGxvYWQuDQo+IA0K
PiBUaGUgYWRkaXRpb25hbCBjb21wbGV4aXR5IGlzIHRoYXQgdG8gc2VuZCBwYWNrZXQgZnJvbSBB
IHRvIEIsIHlvdSANCj4gbmVlZCAzIGVuY2FwcyCoQyBhY2Nlc3MtdG8tYWdnLCBhZ2ctdG8tYWdn
IGFuZCBhZ2ctdG8tYWNjZXNzLiBXaGVuIGEgDQo+IFZNIG1vdmVzLCB5b3UgaGF2ZSB0byBjb29y
ZGluYXRlIGFsbCB0aGVzZSBlbnRyaWVzLiBUaGF0oa9zIGFub3RoZXIgDQo+IG5vdCBzbyBlYXN5
IHByb2JsZW0uDQo+IA0KPiBUaGFua3MsIEFzaGlzaA0KPiANCj4gDQo+IEZyb206IExpemhvbmcg
SmluIFttYWlsdG86bGl6aG9uZy5qaW5AenRlLmNvbS5jbl0gDQo+IFNlbnQ6IFdlZG5lc2RheSwg
SmFudWFyeSAwNCwgMjAxMiA4OjI4IFBNDQo+IFRvOiBBc2hpc2ggRGFsZWxhIChhZGFsZWxhKQ0K
PiBDYzogcm9iZXJ0QHJhc3p1ay5uZXQ7IGRjQGlldGYub3JnOyB5YWtvdkBqdW5pcGVyLm5ldDsg
DQphbGRyaW4uaXNhYWNAZ21haWwuY29tDQo+IFN1YmplY3Q6IFJlOiBbZGNdIG5ldyBkcmFmdHMN
Cj4gDQo+IA0KPiBIaSBBc2hpc2gsIA0KPiBJZiB3ZSBpbXBsZW1lbnQgVlJGIG9uIGFjY2VzcyBz
d2l0Y2ggKG9yIFRvUiksIEkgYWdyZWUgdGhlcmUgd2lsbCBiZQ0KPiBzY2FsYWJpbGl0eSBwcm9i
bGVtLiBBbHNvIHRoZSBjb3N0IHdpbGwgYWxzbyBiZSBhbiBpc3N1ZSwgdGhlIGFjY2Vzcw0KPiBz
d2l0Y2ggd291bGQgYmUgbW9yZSBleHBlbnNpdmUgdGhhbiBiZWZvcmUuIEhvdyBhYm91dCBpbXBs
ZW1lbnQgVlJGIA0KPiBvbiBhZ2dyZWdhdGlvbiByb3V0ZXI/IElmIHRoZSBhZ2dyZWdhdGlvbiBy
b3V0ZXIgY291bGQgc29sdmUgDQo+IHNjYWxhYmlsaXR5IGFuZCBoaWdoIGF2YWlsYWJpbGl0eSBw
cm9ibGVtLCB0aGVuIHdlIHNob3VsZCBmb2N1cyBvbiANCj4gaG93IHRvIHNldHVwIGNvbm5lY3Rp
b24gYmV0d2VlbiBWTSBhbmQgYWdncmVnYXRpb24gcm91dGVyLiBIb3BlIHRvIA0KPiBzZWUgeW91
ciBjb21tZW50cy4gDQo+IA0KPiBUaGFua3MgDQo+IExpemhvbmcgDQo+IA0KPiANCj4gPiANCj4g
PiAtLS0tLUZyb20gIkFzaGlzaCBEYWxlbGEgKGFkYWxlbGEpIiA8YWRhbGVsYUBjaXNjby5jb20+
IA0KPiA+IFR1ZSwgMyBKYW4gMjAxMiAyMjowOToxNyArMDUzMCAtLS0tLSANCj4gPiANCj4gPiBS
ZWNlaXZlcjogDQo+ID4gDQo+ID4gPHJvYmVydEByYXN6dWsubmV0PiANCj4gPiANCj4gPiBjYzog
DQo+ID4gDQo+ID4gWWFrb3YgUmVraHRlciA8eWFrb3ZAanVuaXBlci5uZXQ+LCBkY0BpZXRmLm9y
ZywgQWxkcmluIElzYWFjIA0KPiA+IDxhbGRyaW4uaXNhYWNAZ21haWwuY29tPiANCj4gPiANCj4g
PiBTdWJqZWN0OiANCj4gPiANCj4gPiBSZTogW2RjXSBuZXcgZHJhZnRzIA0KPiA+IA0KPiA+IFJv
YmVydCwgDQo+ID4gDQo+ID4gSGVyZSBhcmUgc29tZSB0aGluZ3MgdG8gZXZhbHVhdGUgc2NhbGFi
aWxpdHkgYWdhaW5zdC4gDQo+ID4gDQo+ID4gQXNzdW1lIGEgc2ltcGxlIGNhc2UgdGhhdCB1bmRl
ciBhIHN3aXRjaCB0aGVyZSBhcmUgMjUwIFZNLCBzcGxpdCANCmFtb25nc3QgDQo+ID4gMTAgY3Vz
dG9tZXJzLiBFYWNoIGN1c3RvbWVyIGhhcyBhIHVuaXF1ZSBWUkYuIE5vcm1hbGx5LCB3ZSB3b3Vs
ZCBoYXZlIA0KPiA+IGFkdmVydGl6ZWQgYSAvMjQgcm91dGUgZm9yIHRoYXQgc3dpdGNoLiBJbiB0
aGlzIGNhc2UgeW91ciByb3V0ZXMgdG8gYSANCj4gPiBzaW5nbGUgc3dpdGNoIGFyZSBzZWdtZW50
ZWQgYW5kIHRoZXJlIGFyZSAxMCBWUkZzLCBhbmQgeW91IHdpbGwgdmVyeSANCj4gPiBsaWtlbHkg
aGF2ZSAyNTAgcm91dGUgdGFibGUgZW50cmllcyB0b3RhbCBzZWdtZW50ZWQgYnkgVlJGLWlkcy4g
VGhhdCdzIA0KYSANCj4gPiByb3V0aW5nIHRhYmxlIGJsb2F0IGZyb20gMSBlbnRyeSB0byAyNTAg
ZW50cnkuIFRoaXMgaGFwcGVucyANCmV2ZXJ5d2hlcmUuIA0KPiA+IEkgaGF2ZSBhc3N1bWVkIGEg
cHVibGljIElQIGFkZHJlc3NpbmcsIGJ1dCB0aGUgc2FtZSB0aGluZyB3aWxsIGhhcHBlbiANCj4g
PiBmb3IgdGhlIHByaXZhdGUgYWRkcmVzc2luZyBhcyB3ZWxsLiANCj4gPiANCj4gPiBUaGVuLCB0
eXBpY2FsbHkgdGhlIG51bWJlciBvZiBWUkZzIHlvdSBjYW4gc3VwcG9ydCBvbiBhIHJvdXRlciBp
cyANCmFib3V0IA0KPiA+IDRLLiBUaGVzZSAjIG9mIFZSRnMgaGF2ZSB0byBiZSBzdXBwb3J0ZWQg
YXQgdGhlIGFjY2Vzcywgc28geW91IGhhdmUgdG8gDQoNCj4gPiBhc3N1bWUgdGhpcyBpcyB0aGUg
bGltaXQgZnJvbSB0aGUgYWNjZXNzIHZpZXdwb2ludC4gNEsgaXMgbm90aGluZyAtIHdlIA0KDQo+
ID4gaGF2ZSA0SyBWTEFOcyB0b2RheSB0byBzZWdtZW50IGFuZCB0aGF0J3Mgbm90aGluZy4gRXZl
cnkgc2VnbWVudGF0aW9uIA0KPiA+IHRlY2huaXF1ZSBiZWluZyB0YWxrZWQgYWJvdXQgc3BlYWtz
IG9mIGEgbWlsbGlvbiBwbHVzIHNlZ21lbnRzLiBUYWtlIA0KPiA+IHRoYXQgdG8gVlJGcywgeW91
IG5lZWQgYSBtaWxsaW9uIFZSRnMgb24gdGhlIGNvbnRyb2wgcGxhbmUgYXQgdGhlIA0KYWNjZXNz
IA0KPiA+IHN3aXRjaC4gQW5vdGhlciBwcm9ibGVtIHdpdGggYSBWUkYgaXMgdGhhdCBpdCB3aWxs
IGdldCBhbmQgc3RvcmUgYSANCnJvdXRlIA0KPiA+IGZvciBhIGhvc3QsIGV2ZW4gd2hlbiB0aGVy
ZSBpcyBubyBob3N0IHRhbGtpbmcgdG8gaXQuIFdpdGggZHluYW1pYyANCj4gPiBsZWFybmluZyBv
ciBsZWFybmluZyBiYXNlZCBvbiBwYWNrZXQgYXJyaXZhbCB5b3UgYXZvaWQgdGhlc2UgaG9zdCAN
CnJvdXRlcyANCj4gPiBhbmQgbGltaXQgdGhlbSB0byBhY3RpdmUgY29udmVyc2F0aW9ucyBvbmx5
LiBUaGF0J3MgYSBodWdlIHNhdmluZyANCj4gPiBiZWNhdXNlIG5vdCBldmVyeSBob3N0IHRhbGtz
IHRvIGV2ZXJ5IGhvc3QuIA0KPiA+IA0KPiA+IFRoZW4sIGF0IG1hc3NpdmUgc2NhbGUsIHRoZSBm
YWlsdXJlIHJhdGVzIGFyZSBhbHNvIG1hc3NpdmUuIEF0IDUgbmluZXMgDQoNCj4gPiByZWxpYWJp
bGl0eSwgYSBoYXJkd2FyZSBlbnRpdHkgb3V0IG9mIDEwMCwwMDAgd2lsbCBmYWlsIGV2ZXJ5IDUu
MjUgDQo+ID4gbWludXRlcy4gQWNjZXNzIHN3aXRjaGVzIGRvbid0IGhhdmUgaGlnaCBhdmFpbGFi
aWxpdHkuIFNvZnR3YXJlIGZhaWxzIA0KPiA+IGV2ZW4gZmFzdGVyIC0gT1MgaXMgZ2VuZXJhbGx5
IDQgOSdzLCB3aGljaCBtZWFucyBvbmUgb3V0IG9mIDEwLDAwMCANCmZhaWxzIA0KPiA+IGV2ZXJ5
IDUuMjUgbWludXRlcy4gQXQgbWlsbGlvbnMgb2YgaW5zdGFuY2VzIG9mIHN1Y2ggZW50aXRpZXMs
IHRoZXJlIA0KYXJlIA0KPiA+IHJhcGlkIGZhaWx1cmVzIGhhcHBlbmluZy4gWW91IGhhdmUgdG8g
b25seSBsb29rIGF0IG1hc3NpdmUgZGF0YWNlbnRlcnMgDQoNCj4gPiB0b2RheSBydW4gYnkgV2Vi
IDIuMCBjb21wYW5pZXMsIGFuZCB0aGV5IGFsbCBlY2hvIHRoaXMgdmlldy4gVGhleSANCj4gPiBi
YXNpY2FsbHkgZm9ybSBjbHVzdGVycyBvZiB0aGUgc2FtZSBhcHBsaWNhdGlvbi4gU29mdHdhcmUg
bW92ZXMgdGhlIA0KPiA+IHdvcmtsb2FkIGZyb20gb25lIGNsdXN0ZXIgdG8gYW5vdGhlci4gVGhl
IHdob2xlIGNsdXN0ZXIgY2FuIGZhaWwgb3Zlci4gDQoNCj4gPiBUaGF0J3Mgbm90IHdoYXQgeW91
IGRvIGluIGEgY29uc3VtZXIgY2xvdWQsIHdoZXJlIHlvdSBoYXZlIHRvIHJlY292ZXIuIA0KDQo+
ID4gQXQgbWFzc2l2ZSBmYWlsdXJlIHJhdGVzLCBhbmQgcmFwaWQgcmVjb3ZlcnkgcmF0ZXMsIHlv
dSBhcmUgbW92aW5nIA0KPiA+IHRoaW5ncyBhcm91bmQgYW5kIGluamVjdGluZyBob3N0IHJvdXRl
cyBmb3IgcmVhY2hhYmlsaXR5LiBJdCdzIGEgDQo+ID4gY29udmVyZ2VuY2UgcHJvYmxlbSwgZXNw
ZWNpYWxseSB3aXRoIGxpbmstc3RhdGUgYWxnb3JpdGhtcy4gDQo+ID4gDQo+ID4gSWYgdGhlIFZN
IGNhbiBiZSBtb3ZlZCwgdGhlbiBhbGwgeW91IG5lZWQgdG8gZG8gaXMgaW5zdGFsbCBhIHRlbXBv
cmFyeSANCg0KPiA+IHJlZGlyZWN0IG9mIHBhY2tldHMgdG8gdGhlIG5ldyBsb2NhdGlvbi4gRWFj
aCBob3N0IHdpbGwgcmVmcmVzaCB0aGUgDQpNQUMgDQo+ID4gYWZ0ZXIgMTUtMzAgc2Vjb25kcy4g
SWYgdGhlIHBhY2tldHMgYXJlIHJlZGlyZWN0ZWQgZnJvbSBvbGQgdG8gbmV3IA0KPiA+IGxvY2F0
aW9uIGZvciB0aGVzZSAzMCBzZWNvbmRzLCB0aGUgcmVkaXJlY3QgY2FuIGJlIGFnZWQgYXV0b21h
dGljYWxseS4gDQoNCj4gPiBUaGlzIGhhcHBlbnMgYWxsIHRoZSB0aW1lIGluIG1vYmlsZSBuZXR3
b3JrcyBpbiB3aGF0IGlzIGNhbGxlZCBhICJmYXN0IA0KDQo+ID4gaGFuZG9mZiIgd2hlcmUgeW91
IHJlZGlyZWN0IHRoZSBwYWNrZXRzIHVudGlsIGhhbmRvZmYgaXMgY29tcGxldGVkLiANCj4gPiAN
Cj4gPiBUaGFua3MsIEFzaGlzaCANCj4gPiANCj4gPiANCg0KDQotLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQ0KWlRFIEluZm9ybWF0aW9uIFNl
Y3VyaXR5IE5vdGljZTogVGhlIGluZm9ybWF0aW9uIGNvbnRhaW5lZCBpbiB0aGlzIG1haWwgaXMg
c29sZWx5IHByb3BlcnR5IG9mIHRoZSBzZW5kZXIncyBvcmdhbml6YXRpb24uIFRoaXMgbWFpbCBj
b21tdW5pY2F0aW9uIGlzIGNvbmZpZGVudGlhbC4gUmVjaXBpZW50cyBuYW1lZCBhYm92ZSBhcmUg
b2JsaWdhdGVkIHRvIG1haW50YWluIHNlY3JlY3kgYW5kIGFyZSBub3QgcGVybWl0dGVkIHRvIGRp
c2Nsb3NlIHRoZSBjb250ZW50cyBvZiB0aGlzIGNvbW11bmljYXRpb24gdG8gb3RoZXJzLg0KVGhp
cyBlbWFpbCBhbmQgYW55IGZpbGVzIHRyYW5zbWl0dGVkIHdpdGggaXQgYXJlIGNvbmZpZGVudGlh
bCBhbmQgaW50ZW5kZWQgc29sZWx5IGZvciB0aGUgdXNlIG9mIHRoZSBpbmRpdmlkdWFsIG9yIGVu
dGl0eSB0byB3aG9tIHRoZXkgYXJlIGFkZHJlc3NlZC4gSWYgeW91IGhhdmUgcmVjZWl2ZWQgdGhp
cyBlbWFpbCBpbiBlcnJvciBwbGVhc2Ugbm90aWZ5IHRoZSBvcmlnaW5hdG9yIG9mIHRoZSBtZXNz
YWdlLiBBbnkgdmlld3MgZXhwcmVzc2VkIGluIHRoaXMgbWVzc2FnZSBhcmUgdGhvc2Ugb2YgdGhl
IGluZGl2aWR1YWwgc2VuZGVyLg0KVGhpcyBtZXNzYWdlIGhhcyBiZWVuIHNjYW5uZWQgZm9yIHZp
cnVzZXMgYW5kIFNwYW0gYnkgWlRFIEFudGktU3BhbSBzeXN0ZW0uDQo=
--=_alternative 0054D0764825797C_=
Content-Type: text/html; charset="GB2312"
Content-Transfer-Encoding: base64

DQo8YnI+PGZvbnQgc2l6ZT0yIGZhY2U9InNhbnMtc2VyaWYiPkhpIEFzaGlzaCw8L2ZvbnQ+DQo8
YnI+PGZvbnQgc2l6ZT0yIGZhY2U9InNhbnMtc2VyaWYiPlRoYW5rIHlvdSBmb3IgdGhlIGFuYWx5
c2lzLiBUaGUgbnVtYmVyDQpvZiBob3N0IHJvdXRlcyBpbiB5b3VyIGV4YW1wbGUgcmVhY2hlcyAy
TSwgYmVjYXN1ZSB5b3UgYXNzdW1lIGVhY2ggVk0gd2l0aA0KdHdvIFZJRi4gSWYgZWFjaCByb3V0
ZSBwZXJmb3JtIDQgcGF0aCBmb3IgRUNNUCwgdGhlIHRvdGFsIHJvdXRlIHdpbGwgcmVhY2gNCjhN
IGFzIHlvdSBkZXNjcmliZWQuIFRoaXMgaXMgdGhlIHdvcnN0IGNhc2UsIGFuZCBhdCBsZWFzdCwg
dGhlIHNjYWxhYmlsaXR5DQpvbiBhZ2dyZWdhdGlvbiByb3V0ZXIgd291bGQgYmUgYmV0dGVyIHRo
YW4gb24gYWNjZXNzLiBBbmQgSSBhZ3JlZSwgdGhlDQpjb25uZWN0aW9uIGJldHdlZW4gYWNjZXNz
IGFuZCBhZ2dyZWdhdGlvbiBzaG91bGQgYmUgY29vcmRpbmF0ZWQgd2hlbiBWTQ0KbW92aW5nLCB3
aGljaCBpcyBub3QgZWFzeSB0byBzb2x2ZS48L2ZvbnQ+DQo8YnI+DQo8YnI+PGZvbnQgc2l6ZT0y
IGZhY2U9InNhbnMtc2VyaWYiPlJlZ2FyZHM8L2ZvbnQ+DQo8YnI+PGZvbnQgc2l6ZT0yIGZhY2U9
InNhbnMtc2VyaWYiPkxpemhvbmc8YnI+DQo8L2ZvbnQ+DQo8YnI+PGZvbnQgc2l6ZT0xIGZhY2U9
InNhbnMtc2VyaWYiPiZuYnNwOzwvZm9udD4NCjxicj4NCjxicj48Zm9udCBzaXplPTIgZmFjZT0i
c2Fucy1zZXJpZiI+JnF1b3Q7QXNoaXNoIERhbGVsYSAoYWRhbGVsYSkmcXVvdDsNCiZsdDthZGFs
ZWxhQGNpc2NvLmNvbSZndDsgd3JvdGUgMjAxMi8wMS8wNSAwMDoxNjozNDo8YnI+DQo8YnI+DQom
Z3Q7IEhpIExpemhvbmcsPC9mb250Pg0KPGJyPjxmb250IHNpemU9MiBmYWNlPSJzYW5zLXNlcmlm
Ij4mZ3Q7ICZuYnNwOzwvZm9udD4NCjxicj48Zm9udCBzaXplPTIgZmFjZT0ic2Fucy1zZXJpZiI+
Jmd0OyBXaGVuIHlvdSBkbyB0aGluZ3MgYXQgdGhlIGFnZ3JlZ2F0aW9uDQp5b3UgbmVlZCB0d28g
c2V0cyBvZiBlbmNhcHMgqEMgPGJyPg0KJmd0OyBBZ2ctdG8tQWdnIGFuZCBBZ2ctdG8tQWNjZXNz
LiA8L2ZvbnQ+DQo8YnI+PGZvbnQgc2l6ZT0yIGZhY2U9InNhbnMtc2VyaWYiPiZndDsgJm5ic3A7
PC9mb250Pg0KPGJyPjxmb250IHNpemU9MiBmYWNlPSJzYW5zLXNlcmlmIj4mZ3Q7IElmIHRoZXJl
IGFyZSBOIGhvc3RzIHVuZGVyIGFuIGFjY2VzcywNCmFuZCBNIGFjY2Vzc2VzIHVuZGVyIGFuIEFn
ZywgPGJyPg0KJmd0OyBhbmQgUCBBZ2dzLCB0aGVuIHlvdSBuZWVkIKhDIDwvZm9udD4NCjxicj48
Zm9udCBzaXplPTIgZmFjZT0ic2Fucy1zZXJpZiI+Jmd0OyAmbmJzcDs8L2ZvbnQ+DQo8YnI+PGZv
bnQgc2l6ZT0yIGZhY2U9InNhbnMtc2VyaWYiPiZndDsgTiAqIE0gZm9yIGFjY2VzcyB0byBhZ2c8
L2ZvbnQ+DQo8YnI+PGZvbnQgc2l6ZT0yIGZhY2U9InNhbnMtc2VyaWYiPiZndDsgKFAtMSkgKiBO
ICogTSBmb3IgYWdnIHRvIGFnZzwvZm9udD4NCjxicj48Zm9udCBzaXplPTIgZmFjZT0ic2Fucy1z
ZXJpZiI+Jmd0OyBUb3RhbCA9IE4gKiBNICsgKFAtMSkgKiBOICogTSA9DQpOICogTSAqIFA8L2Zv
bnQ+DQo8YnI+PGZvbnQgc2l6ZT0yIGZhY2U9InNhbnMtc2VyaWYiPiZndDsgJm5ic3A7PC9mb250
Pg0KPGJyPjxmb250IHNpemU9MiBmYWNlPSJzYW5zLXNlcmlmIj4mZ3Q7IFRoaXMgaXMgdGhlIHdv
cnN0IGNhc2UuPC9mb250Pg0KPGJyPjxmb250IHNpemU9MiBmYWNlPSJzYW5zLXNlcmlmIj4mZ3Q7
ICZuYnNwOzwvZm9udD4NCjxicj48Zm9udCBzaXplPTIgZmFjZT0ic2Fucy1zZXJpZiI+Jmd0OyBG
b3IgYSBiZXR0ZXIgY2FzZSwgYXNzdW1lIGVhY2gNClZNIHRhbGtzIHRvIDI1IFZNIG91dHNpZGUg
aXRzIEFnZy48L2ZvbnQ+DQo8YnI+PGZvbnQgc2l6ZT0yIGZhY2U9InNhbnMtc2VyaWYiPiZndDsg
VGhlIHRvdGFsID0gTiAqIE0gKiAyNTwvZm9udD4NCjxicj48Zm9udCBzaXplPTIgZmFjZT0ic2Fu
cy1zZXJpZiI+Jmd0OyAmbmJzcDs8L2ZvbnQ+DQo8YnI+PGZvbnQgc2l6ZT0yIGZhY2U9InNhbnMt
c2VyaWYiPiZndDsgTGV0oa9zIHBsdWcgc29tZSBudW1iZXJzIGludG8NCnRoaXMuPC9mb250Pg0K
PGJyPjxmb250IHNpemU9MiBmYWNlPSJzYW5zLXNlcmlmIj4mZ3Q7ICZuYnNwOzwvZm9udD4NCjxi
cj48Zm9udCBzaXplPTIgZmFjZT0ic2Fucy1zZXJpZiI+Jmd0OyBGb3IgYSA0OCBwb3J0IGFjY2Vz
cywgNTAgVk0gcGVyDQpwb3J0LCBlYWNoIHdpdGggMiBWSUYsIE4gPSA0OCAqIDUwICogMiA9IDQ4
MDA8L2ZvbnQ+DQo8YnI+PGZvbnQgc2l6ZT0yIGZhY2U9InNhbnMtc2VyaWYiPiZndDsgJm5ic3A7
PC9mb250Pg0KPGJyPjxmb250IHNpemU9MiBmYWNlPSJzYW5zLXNlcmlmIj4mZ3Q7IEZvciBhbiBh
Z2cgd2l0aCAxMDAgcG9ydHMgKDUwIGRvd24NCmFuZCA1MCB1cCksIE0gPSA1MDwvZm9udD4NCjxi
cj48Zm9udCBzaXplPTIgZmFjZT0ic2Fucy1zZXJpZiI+Jmd0OyAmbmJzcDs8L2ZvbnQ+DQo8YnI+
PGZvbnQgc2l6ZT0yIGZhY2U9InNhbnMtc2VyaWYiPiZndDsgQSBtaWxsaW9uIFZNIHJlcXVpcmUg
MSwwMDAsMDAwDQovICg0OCAqIDUwKSA9IDQxNSBhY2Nlc3Nlcy48L2ZvbnQ+DQo8YnI+PGZvbnQg
c2l6ZT0yIGZhY2U9InNhbnMtc2VyaWYiPiZndDsgJm5ic3A7PC9mb250Pg0KPGJyPjxmb250IHNp
emU9MiBmYWNlPSJzYW5zLXNlcmlmIj4mZ3Q7IEVhY2ggYWNjZXNzIGNvbm5lY3RzIHRvIDQgQWdn
cywNCnNvIHlvdSBuZWVkIDQxNSAqIDQgLyA1MCA9IDMzIEFnZ3MgPSBQLjwvZm9udD4NCjxicj48
Zm9udCBzaXplPTIgZmFjZT0ic2Fucy1zZXJpZiI+Jmd0OyAmbmJzcDs8L2ZvbnQ+DQo8YnI+PGZv
bnQgc2l6ZT0yIGZhY2U9InNhbnMtc2VyaWYiPiZndDsgV29yc3QgY2FzZSB0b3RhbCBlbnRyaWVz
IG5lZWRlZA0KYXQgZWFjaCBBZ2cgPSA0ODAwICogNTAgKiAzMyA9IDcsOTIwLDAwMDwvZm9udD4N
Cjxicj48Zm9udCBzaXplPTIgZmFjZT0ic2Fucy1zZXJpZiI+Jmd0OyBCZXR0ZXIgY2FzZSB0b3Rh
bCBlbnRyaWVzIG5lZWRlZA0KYXQgZWFjaCBBZ2cgPSA0ODAwICogNTAgKiAyNSA9IDYsMDAwLDAw
MDwvZm9udD4NCjxicj48Zm9udCBzaXplPTIgZmFjZT0ic2Fucy1zZXJpZiI+Jmd0OyAmbmJzcDs8
L2ZvbnQ+DQo8YnI+PGZvbnQgc2l6ZT0yIGZhY2U9InNhbnMtc2VyaWYiPiZndDsgWW91IG1heSBi
ZSB3b25kZXJpbmcgd2h5IGZvciAxTQ0KVk0sIHdlIG5lZWQgNi04TSBob3N0IHJvdXRlcz8gSXSh
r3MgPGJyPg0KJmd0OyBiZWNhdXNlIG9mIG11bHRpLXBhdGhpbmcuIEEgZGVzdGluYXRpb24gY2Fu
IGJlIHJlYWNoZWQgdGhyb3VnaCBtYW55DQo8YnI+DQomZ3Q7IHBhdGhzLCBzbyB5b3UgcHV0IHJv
dXRlcyBmb3IgYWxsIHBhdGhzLjwvZm9udD4NCjxicj48Zm9udCBzaXplPTIgZmFjZT0ic2Fucy1z
ZXJpZiI+Jmd0OyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7DQombmJzcDsgJm5i
c3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJz
cDsgJm5ic3A7DQombmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJz
cDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7DQo8L2ZvbnQ+DQo8YnI+PGZvbnQgc2l6ZT0y
IGZhY2U9InNhbnMtc2VyaWYiPiZndDsgV2hlbiB5b3UgaGF2ZSA2LThNIHJvdXRlcywgeW91DQpj
YW4gaW1hZ2luZSB0aGUgY29ycmVzcG9uZGluZyBjb250cm9sPGJyPg0KJmd0OyBwbGFuZSBsb2Fk
LjwvZm9udD4NCjxicj48Zm9udCBzaXplPTIgZmFjZT0ic2Fucy1zZXJpZiI+Jmd0OyAmbmJzcDs8
L2ZvbnQ+DQo8YnI+PGZvbnQgc2l6ZT0yIGZhY2U9InNhbnMtc2VyaWYiPiZndDsgVGhlIGFkZGl0
aW9uYWwgY29tcGxleGl0eSBpcyB0aGF0DQp0byBzZW5kIHBhY2tldCBmcm9tIEEgdG8gQiwgeW91
IDxicj4NCiZndDsgbmVlZCAzIGVuY2FwcyCoQyBhY2Nlc3MtdG8tYWdnLCBhZ2ctdG8tYWdnIGFu
ZCBhZ2ctdG8tYWNjZXNzLiBXaGVuDQphIDxicj4NCiZndDsgVk0gbW92ZXMsIHlvdSBoYXZlIHRv
IGNvb3JkaW5hdGUgYWxsIHRoZXNlIGVudHJpZXMuIFRoYXShr3MgYW5vdGhlcg0KPGJyPg0KJmd0
OyBub3Qgc28gZWFzeSBwcm9ibGVtLjwvZm9udD4NCjxicj48Zm9udCBzaXplPTIgZmFjZT0ic2Fu
cy1zZXJpZiI+Jmd0OyAmbmJzcDs8L2ZvbnQ+DQo8YnI+PGZvbnQgc2l6ZT0yIGZhY2U9InNhbnMt
c2VyaWYiPiZndDsgVGhhbmtzLCBBc2hpc2g8L2ZvbnQ+DQo8YnI+PGZvbnQgc2l6ZT0yIGZhY2U9
InNhbnMtc2VyaWYiPiZndDsgJm5ic3A7PC9mb250Pg0KPGJyPjxmb250IHNpemU9MiBmYWNlPSJz
YW5zLXNlcmlmIj4mZ3Q7ICZuYnNwOzwvZm9udD4NCjxicj48Zm9udCBzaXplPTIgZmFjZT0ic2Fu
cy1zZXJpZiI+Jmd0OyBGcm9tOiBMaXpob25nIEppbiBbbWFpbHRvOmxpemhvbmcuamluQHp0ZS5j
b20uY25dDQo8YnI+DQomZ3Q7IFNlbnQ6IFdlZG5lc2RheSwgSmFudWFyeSAwNCwgMjAxMiA4OjI4
IFBNPGJyPg0KJmd0OyBUbzogQXNoaXNoIERhbGVsYSAoYWRhbGVsYSk8YnI+DQomZ3Q7IENjOiBy
b2JlcnRAcmFzenVrLm5ldDsgZGNAaWV0Zi5vcmc7IHlha292QGp1bmlwZXIubmV0OyBhbGRyaW4u
aXNhYWNAZ21haWwuY29tPGJyPg0KJmd0OyBTdWJqZWN0OiBSZTogW2RjXSBuZXcgZHJhZnRzPC9m
b250Pg0KPGJyPjxmb250IHNpemU9MiBmYWNlPSJzYW5zLXNlcmlmIj4mZ3Q7ICZuYnNwOzwvZm9u
dD4NCjxicj48Zm9udCBzaXplPTIgZmFjZT0ic2Fucy1zZXJpZiI+Jmd0OyA8YnI+DQomZ3Q7IEhp
IEFzaGlzaCwgPGJyPg0KJmd0OyBJZiB3ZSBpbXBsZW1lbnQgVlJGIG9uIGFjY2VzcyBzd2l0Y2gg
KG9yIFRvUiksIEkgYWdyZWUgdGhlcmUgd2lsbA0KYmU8YnI+DQomZ3Q7IHNjYWxhYmlsaXR5IHBy
b2JsZW0uIEFsc28gdGhlIGNvc3Qgd2lsbCBhbHNvIGJlIGFuIGlzc3VlLCB0aGUgYWNjZXNzPGJy
Pg0KJmd0OyBzd2l0Y2ggd291bGQgYmUgbW9yZSBleHBlbnNpdmUgdGhhbiBiZWZvcmUuIEhvdyBh
Ym91dCBpbXBsZW1lbnQgVlJGDQo8YnI+DQomZ3Q7IG9uIGFnZ3JlZ2F0aW9uIHJvdXRlcj8gSWYg
dGhlIGFnZ3JlZ2F0aW9uIHJvdXRlciBjb3VsZCBzb2x2ZSA8YnI+DQomZ3Q7IHNjYWxhYmlsaXR5
IGFuZCBoaWdoIGF2YWlsYWJpbGl0eSBwcm9ibGVtLCB0aGVuIHdlIHNob3VsZCBmb2N1cyBvbg0K
PGJyPg0KJmd0OyBob3cgdG8gc2V0dXAgY29ubmVjdGlvbiBiZXR3ZWVuIFZNIGFuZCBhZ2dyZWdh
dGlvbiByb3V0ZXIuIEhvcGUgdG8NCjxicj4NCiZndDsgc2VlIHlvdXIgY29tbWVudHMuIDxicj4N
CiZndDsgPGJyPg0KJmd0OyBUaGFua3MgPGJyPg0KJmd0OyBMaXpob25nIDxicj4NCiZndDsgPGJy
Pg0KJmd0OyA8YnI+DQomZ3Q7ICZndDsgPGJyPg0KJmd0OyAmZ3Q7IC0tLS0tRnJvbSAmcXVvdDtB
c2hpc2ggRGFsZWxhIChhZGFsZWxhKSZxdW90OyAmbHQ7YWRhbGVsYUBjaXNjby5jb20mZ3Q7DQo8
YnI+DQomZ3Q7ICZndDsgVHVlLCAzIEphbiAyMDEyIDIyOjA5OjE3ICswNTMwIC0tLS0tIDxicj4N
CiZndDsgJmd0OyA8YnI+DQomZ3Q7ICZndDsgUmVjZWl2ZXI6IDxicj4NCiZndDsgJmd0OyA8YnI+
DQomZ3Q7ICZndDsgJmx0O3JvYmVydEByYXN6dWsubmV0Jmd0OyA8YnI+DQomZ3Q7ICZndDsgPGJy
Pg0KJmd0OyAmZ3Q7IGNjOiA8YnI+DQomZ3Q7ICZndDsgPGJyPg0KJmd0OyAmZ3Q7IFlha292IFJl
a2h0ZXIgJmx0O3lha292QGp1bmlwZXIubmV0Jmd0OywgZGNAaWV0Zi5vcmcsIEFsZHJpbg0KSXNh
YWMgPGJyPg0KJmd0OyAmZ3Q7ICZsdDthbGRyaW4uaXNhYWNAZ21haWwuY29tJmd0OyA8YnI+DQom
Z3Q7ICZndDsgPGJyPg0KJmd0OyAmZ3Q7IFN1YmplY3Q6IDxicj4NCiZndDsgJmd0OyA8YnI+DQom
Z3Q7ICZndDsgUmU6IFtkY10gbmV3IGRyYWZ0cyA8YnI+DQomZ3Q7ICZndDsgPGJyPg0KJmd0OyAm
Z3Q7IFJvYmVydCwgPGJyPg0KJmd0OyAmZ3Q7IDxicj4NCiZndDsgJmd0OyBIZXJlIGFyZSBzb21l
IHRoaW5ncyB0byBldmFsdWF0ZSBzY2FsYWJpbGl0eSBhZ2FpbnN0LiA8YnI+DQomZ3Q7ICZndDsg
PGJyPg0KJmd0OyAmZ3Q7IEFzc3VtZSBhIHNpbXBsZSBjYXNlIHRoYXQgdW5kZXIgYSBzd2l0Y2gg
dGhlcmUgYXJlIDI1MCBWTSwgc3BsaXQNCmFtb25nc3QgPGJyPg0KJmd0OyAmZ3Q7IDEwIGN1c3Rv
bWVycy4gRWFjaCBjdXN0b21lciBoYXMgYSB1bmlxdWUgVlJGLiBOb3JtYWxseSwgd2Ugd291bGQN
CmhhdmUgPGJyPg0KJmd0OyAmZ3Q7IGFkdmVydGl6ZWQgYSAvMjQgcm91dGUgZm9yIHRoYXQgc3dp
dGNoLiBJbiB0aGlzIGNhc2UgeW91ciByb3V0ZXMNCnRvIGEgPGJyPg0KJmd0OyAmZ3Q7IHNpbmds
ZSBzd2l0Y2ggYXJlIHNlZ21lbnRlZCBhbmQgdGhlcmUgYXJlIDEwIFZSRnMsIGFuZCB5b3Ugd2ls
bA0KdmVyeSA8YnI+DQomZ3Q7ICZndDsgbGlrZWx5IGhhdmUgMjUwIHJvdXRlIHRhYmxlIGVudHJp
ZXMgdG90YWwgc2VnbWVudGVkIGJ5IFZSRi1pZHMuDQpUaGF0J3MgYSA8YnI+DQomZ3Q7ICZndDsg
cm91dGluZyB0YWJsZSBibG9hdCBmcm9tIDEgZW50cnkgdG8gMjUwIGVudHJ5LiBUaGlzIGhhcHBl
bnMgZXZlcnl3aGVyZS4NCjxicj4NCiZndDsgJmd0OyBJIGhhdmUgYXNzdW1lZCBhIHB1YmxpYyBJ
UCBhZGRyZXNzaW5nLCBidXQgdGhlIHNhbWUgdGhpbmcgd2lsbA0KaGFwcGVuIDxicj4NCiZndDsg
Jmd0OyBmb3IgdGhlIHByaXZhdGUgYWRkcmVzc2luZyBhcyB3ZWxsLiA8YnI+DQomZ3Q7ICZndDsg
PGJyPg0KJmd0OyAmZ3Q7IFRoZW4sIHR5cGljYWxseSB0aGUgbnVtYmVyIG9mIFZSRnMgeW91IGNh
biBzdXBwb3J0IG9uIGEgcm91dGVyDQppcyBhYm91dCA8YnI+DQomZ3Q7ICZndDsgNEsuIFRoZXNl
ICMgb2YgVlJGcyBoYXZlIHRvIGJlIHN1cHBvcnRlZCBhdCB0aGUgYWNjZXNzLCBzbyB5b3UNCmhh
dmUgdG8gPGJyPg0KJmd0OyAmZ3Q7IGFzc3VtZSB0aGlzIGlzIHRoZSBsaW1pdCBmcm9tIHRoZSBh
Y2Nlc3Mgdmlld3BvaW50LiA0SyBpcyBub3RoaW5nDQotIHdlIDxicj4NCiZndDsgJmd0OyBoYXZl
IDRLIFZMQU5zIHRvZGF5IHRvIHNlZ21lbnQgYW5kIHRoYXQncyBub3RoaW5nLiBFdmVyeSBzZWdt
ZW50YXRpb24NCjxicj4NCiZndDsgJmd0OyB0ZWNobmlxdWUgYmVpbmcgdGFsa2VkIGFib3V0IHNw
ZWFrcyBvZiBhIG1pbGxpb24gcGx1cyBzZWdtZW50cy4NClRha2UgPGJyPg0KJmd0OyAmZ3Q7IHRo
YXQgdG8gVlJGcywgeW91IG5lZWQgYSBtaWxsaW9uIFZSRnMgb24gdGhlIGNvbnRyb2wgcGxhbmUg
YXQNCnRoZSBhY2Nlc3MgPGJyPg0KJmd0OyAmZ3Q7IHN3aXRjaC4gQW5vdGhlciBwcm9ibGVtIHdp
dGggYSBWUkYgaXMgdGhhdCBpdCB3aWxsIGdldCBhbmQgc3RvcmUNCmEgcm91dGUgPGJyPg0KJmd0
OyAmZ3Q7IGZvciBhIGhvc3QsIGV2ZW4gd2hlbiB0aGVyZSBpcyBubyBob3N0IHRhbGtpbmcgdG8g
aXQuIFdpdGggZHluYW1pYw0KPGJyPg0KJmd0OyAmZ3Q7IGxlYXJuaW5nIG9yIGxlYXJuaW5nIGJh
c2VkIG9uIHBhY2tldCBhcnJpdmFsIHlvdSBhdm9pZCB0aGVzZQ0KaG9zdCByb3V0ZXMgPGJyPg0K
Jmd0OyAmZ3Q7IGFuZCBsaW1pdCB0aGVtIHRvIGFjdGl2ZSBjb252ZXJzYXRpb25zIG9ubHkuIFRo
YXQncyBhIGh1Z2Ugc2F2aW5nDQo8YnI+DQomZ3Q7ICZndDsgYmVjYXVzZSBub3QgZXZlcnkgaG9z
dCB0YWxrcyB0byBldmVyeSBob3N0LiA8YnI+DQomZ3Q7ICZndDsgPGJyPg0KJmd0OyAmZ3Q7IFRo
ZW4sIGF0IG1hc3NpdmUgc2NhbGUsIHRoZSBmYWlsdXJlIHJhdGVzIGFyZSBhbHNvIG1hc3NpdmUu
IEF0DQo1IG5pbmVzIDxicj4NCiZndDsgJmd0OyByZWxpYWJpbGl0eSwgYSBoYXJkd2FyZSBlbnRp
dHkgb3V0IG9mIDEwMCwwMDAgd2lsbCBmYWlsIGV2ZXJ5DQo1LjI1IDxicj4NCiZndDsgJmd0OyBt
aW51dGVzLiBBY2Nlc3Mgc3dpdGNoZXMgZG9uJ3QgaGF2ZSBoaWdoIGF2YWlsYWJpbGl0eS4gU29m
dHdhcmUNCmZhaWxzIDxicj4NCiZndDsgJmd0OyBldmVuIGZhc3RlciAtIE9TIGlzIGdlbmVyYWxs
eSA0IDkncywgd2hpY2ggbWVhbnMgb25lIG91dCBvZiAxMCwwMDANCmZhaWxzIDxicj4NCiZndDsg
Jmd0OyBldmVyeSA1LjI1IG1pbnV0ZXMuIEF0IG1pbGxpb25zIG9mIGluc3RhbmNlcyBvZiBzdWNo
IGVudGl0aWVzLA0KdGhlcmUgYXJlIDxicj4NCiZndDsgJmd0OyByYXBpZCBmYWlsdXJlcyBoYXBw
ZW5pbmcuIFlvdSBoYXZlIHRvIG9ubHkgbG9vayBhdCBtYXNzaXZlIGRhdGFjZW50ZXJzDQo8YnI+
DQomZ3Q7ICZndDsgdG9kYXkgcnVuIGJ5IFdlYiAyLjAgY29tcGFuaWVzLCBhbmQgdGhleSBhbGwg
ZWNobyB0aGlzIHZpZXcuDQpUaGV5IDxicj4NCiZndDsgJmd0OyBiYXNpY2FsbHkgZm9ybSBjbHVz
dGVycyBvZiB0aGUgc2FtZSBhcHBsaWNhdGlvbi4gU29mdHdhcmUgbW92ZXMNCnRoZSA8YnI+DQom
Z3Q7ICZndDsgd29ya2xvYWQgZnJvbSBvbmUgY2x1c3RlciB0byBhbm90aGVyLiBUaGUgd2hvbGUg
Y2x1c3RlciBjYW4gZmFpbA0Kb3Zlci4gPGJyPg0KJmd0OyAmZ3Q7IFRoYXQncyBub3Qgd2hhdCB5
b3UgZG8gaW4gYSBjb25zdW1lciBjbG91ZCwgd2hlcmUgeW91IGhhdmUgdG8NCnJlY292ZXIuIDxi
cj4NCiZndDsgJmd0OyBBdCBtYXNzaXZlIGZhaWx1cmUgcmF0ZXMsIGFuZCByYXBpZCByZWNvdmVy
eSByYXRlcywgeW91IGFyZSBtb3ZpbmcNCjxicj4NCiZndDsgJmd0OyB0aGluZ3MgYXJvdW5kIGFu
ZCBpbmplY3RpbmcgaG9zdCByb3V0ZXMgZm9yIHJlYWNoYWJpbGl0eS4gSXQncw0KYSA8YnI+DQom
Z3Q7ICZndDsgY29udmVyZ2VuY2UgcHJvYmxlbSwgZXNwZWNpYWxseSB3aXRoIGxpbmstc3RhdGUg
YWxnb3JpdGhtcy4gPGJyPg0KJmd0OyAmZ3Q7IDxicj4NCiZndDsgJmd0OyBJZiB0aGUgVk0gY2Fu
IGJlIG1vdmVkLCB0aGVuIGFsbCB5b3UgbmVlZCB0byBkbyBpcyBpbnN0YWxsIGENCnRlbXBvcmFy
eSA8YnI+DQomZ3Q7ICZndDsgcmVkaXJlY3Qgb2YgcGFja2V0cyB0byB0aGUgbmV3IGxvY2F0aW9u
LiBFYWNoIGhvc3Qgd2lsbCByZWZyZXNoDQp0aGUgTUFDIDxicj4NCiZndDsgJmd0OyBhZnRlciAx
NS0zMCBzZWNvbmRzLiBJZiB0aGUgcGFja2V0cyBhcmUgcmVkaXJlY3RlZCBmcm9tIG9sZCB0bw0K
bmV3IDxicj4NCiZndDsgJmd0OyBsb2NhdGlvbiBmb3IgdGhlc2UgMzAgc2Vjb25kcywgdGhlIHJl
ZGlyZWN0IGNhbiBiZSBhZ2VkIGF1dG9tYXRpY2FsbHkuDQo8YnI+DQomZ3Q7ICZndDsgVGhpcyBo
YXBwZW5zIGFsbCB0aGUgdGltZSBpbiBtb2JpbGUgbmV0d29ya3MgaW4gd2hhdCBpcyBjYWxsZWQN
CmEgJnF1b3Q7ZmFzdCA8YnI+DQomZ3Q7ICZndDsgaGFuZG9mZiZxdW90OyB3aGVyZSB5b3UgcmVk
aXJlY3QgdGhlIHBhY2tldHMgdW50aWwgaGFuZG9mZiBpcw0KY29tcGxldGVkLiA8YnI+DQomZ3Q7
ICZndDsgPGJyPg0KJmd0OyAmZ3Q7IFRoYW5rcywgQXNoaXNoIDxicj4NCiZndDsgJmd0OyA8YnI+
DQomZ3Q7ICZndDsgPC9mb250Pg0KPGJyPjxwcmU+DQotLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQ0KWlRFJm5ic3A7SW5mb3JtYXRpb24mbmJz
cDtTZWN1cml0eSZuYnNwO05vdGljZTombmJzcDtUaGUmbmJzcDtpbmZvcm1hdGlvbiZuYnNwO2Nv
bnRhaW5lZCZuYnNwO2luJm5ic3A7dGhpcyZuYnNwO21haWwmbmJzcDtpcyZuYnNwO3NvbGVseSZu
YnNwO3Byb3BlcnR5Jm5ic3A7b2YmbmJzcDt0aGUmbmJzcDtzZW5kZXIncyZuYnNwO29yZ2FuaXph
dGlvbi4mbmJzcDtUaGlzJm5ic3A7bWFpbCZuYnNwO2NvbW11bmljYXRpb24mbmJzcDtpcyZuYnNw
O2NvbmZpZGVudGlhbC4mbmJzcDtSZWNpcGllbnRzJm5ic3A7bmFtZWQmbmJzcDthYm92ZSZuYnNw
O2FyZSZuYnNwO29ibGlnYXRlZCZuYnNwO3RvJm5ic3A7bWFpbnRhaW4mbmJzcDtzZWNyZWN5Jm5i
c3A7YW5kJm5ic3A7YXJlJm5ic3A7bm90Jm5ic3A7cGVybWl0dGVkJm5ic3A7dG8mbmJzcDtkaXNj
bG9zZSZuYnNwO3RoZSZuYnNwO2NvbnRlbnRzJm5ic3A7b2YmbmJzcDt0aGlzJm5ic3A7Y29tbXVu
aWNhdGlvbiZuYnNwO3RvJm5ic3A7b3RoZXJzLg0KVGhpcyZuYnNwO2VtYWlsJm5ic3A7YW5kJm5i
c3A7YW55Jm5ic3A7ZmlsZXMmbmJzcDt0cmFuc21pdHRlZCZuYnNwO3dpdGgmbmJzcDtpdCZuYnNw
O2FyZSZuYnNwO2NvbmZpZGVudGlhbCZuYnNwO2FuZCZuYnNwO2ludGVuZGVkJm5ic3A7c29sZWx5
Jm5ic3A7Zm9yJm5ic3A7dGhlJm5ic3A7dXNlJm5ic3A7b2YmbmJzcDt0aGUmbmJzcDtpbmRpdmlk
dWFsJm5ic3A7b3ImbmJzcDtlbnRpdHkmbmJzcDt0byZuYnNwO3dob20mbmJzcDt0aGV5Jm5ic3A7
YXJlJm5ic3A7YWRkcmVzc2VkLiZuYnNwO0lmJm5ic3A7eW91Jm5ic3A7aGF2ZSZuYnNwO3JlY2Vp
dmVkJm5ic3A7dGhpcyZuYnNwO2VtYWlsJm5ic3A7aW4mbmJzcDtlcnJvciZuYnNwO3BsZWFzZSZu
YnNwO25vdGlmeSZuYnNwO3RoZSZuYnNwO29yaWdpbmF0b3ImbmJzcDtvZiZuYnNwO3RoZSZuYnNw
O21lc3NhZ2UuJm5ic3A7QW55Jm5ic3A7dmlld3MmbmJzcDtleHByZXNzZWQmbmJzcDtpbiZuYnNw
O3RoaXMmbmJzcDttZXNzYWdlJm5ic3A7YXJlJm5ic3A7dGhvc2UmbmJzcDtvZiZuYnNwO3RoZSZu
YnNwO2luZGl2aWR1YWwmbmJzcDtzZW5kZXIuDQpUaGlzJm5ic3A7bWVzc2FnZSZuYnNwO2hhcyZu
YnNwO2JlZW4mbmJzcDtzY2FubmVkJm5ic3A7Zm9yJm5ic3A7dmlydXNlcyZuYnNwO2FuZCZuYnNw
O1NwYW0mbmJzcDtieSZuYnNwO1pURSZuYnNwO0FudGktU3BhbSZuYnNwO3N5c3RlbS4NCjwvcHJl
Pg==
--=_alternative 0054D0764825797C_=--


From lizhong.jin@zte.com.cn  Thu Jan  5 07:35:18 2012
Return-Path: <lizhong.jin@zte.com.cn>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 5FF2821F8760 for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 07:35:18 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -101.68
X-Spam-Level: 
X-Spam-Status: No, score=-101.68 tagged_above=-999 required=5 tests=[AWL=0.157, BAYES_00=-2.599, HTML_MESSAGE=0.001, RCVD_DOUBLE_IP_LOOSE=0.76, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id zjE+3SaWAkEh for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 07:35:17 -0800 (PST)
Received: from mx5.zte.com.cn (mx5.zte.com.cn [63.217.80.70]) by ietfa.amsl.com (Postfix) with ESMTP id 5290921F875F for <dc@ietf.org>; Thu,  5 Jan 2012 07:35:17 -0800 (PST)
Received: from [10.30.17.99] by mx5.zte.com.cn with surfront esmtp id 53829122734555; Thu, 5 Jan 2012 23:32:44 +0800 (CST)
Received: from [10.30.3.21] by [192.168.168.15] with StormMail ESMTP id 4315.4095645156; Thu, 5 Jan 2012 23:35:13 +0800 (CST)
Received: from notes_smtp.zte.com.cn ([10.30.1.239]) by mse02.zte.com.cn with ESMTP id q05FZ4Lh025281; Thu, 5 Jan 2012 23:35:04 +0800 (GMT-8) (envelope-from lizhong.jin@zte.com.cn)
In-Reply-To: <4F046D7D.7040705@raszuk.net>
To: robert@raszuk.net
MIME-Version: 1.0
X-Mailer: Lotus Notes Release 6.5.4 March 27, 2005
Message-ID: <OF47BD3CE5.ED91E5B7-ON4825797C.0054E3D9-4825797C.00559D95@zte.com.cn>
From: Lizhong Jin<lizhong.jin@zte.com.cn>
Date: Thu, 5 Jan 2012 23:34:49 +0800
X-MIMETrack: Serialize by Router on notes_smtp/zte_ltd(Release 8.5.1FP4|July 25, 2010) at 2012-01-05 23:35:08, Serialize complete at 2012-01-05 23:35:08
Content-Type: multipart/alternative; boundary="=_alternative 00559D954825797C_="
X-MAIL: mse02.zte.com.cn q05FZ4Lh025281
Cc: yakov@juniper.net, dc@ietf.org, adalela@cisco.com, aldrin.isaac@gmail.com
Subject: Re: [dc] new drafts
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 15:35:18 -0000

This is a multipart message in MIME format.
--=_alternative 00559D954825797C_=
Content-Type: text/plain; charset="US-ASCII"

Hi Robert,
I think this is also an option. When separating the control plane and 
data plane, do you mean to use OpenFlow to implement the protocol between 
the control plane and the data plane? I have some concern about route 
convergence when implementing VRF on x86.

Regards
Lizhong
 

Robert Raszuk <robert@raszuk.net> wrote on 2012/01/04 23:17:17:

> Hi Lizhong,
> 
> How about neither ?
> 
> How about implementing VRF for control plane separation on the x86 
> controller out of data plane and simply instructing either host hosting 
> VMs or TOR or Access Switch to forward/encapsulate the packets correctly 
?
> 
> Cheers,
> R.
> 
> 
> > Hi Ashish,
>  >
> > If we implement VRF on access switch (or ToR), I agree there will be
> > scalability problem. Also the cost will also be an issue, the access
> > switch would be more expensive than before. How about implement VRF on
> > aggregation router? If the aggregation router could solve scalability
> > and high availability problem, then we should focus on how to setup
> > connection between VM and aggregation router. Hope to see your 
comments.
> >
> > Thanks
> > Lizhong
> 


--------------------------------------------------------
ZTE Information Security Notice: The information contained in this mail is solely property of the sender's organization. This mail communication is confidential. Recipients named above are obligated to maintain secrecy and are not permitted to disclose the contents of this communication to others.
This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the originator of the message. Any views expressed in this message are those of the individual sender.
This message has been scanned for viruses and Spam by ZTE Anti-Spam system.

--=_alternative 00559D954825797C_=
Content-Type: text/html; charset="US-ASCII"


<br><font size=2 face="sans-serif">Hi Robert,</font>
<br><font size=2 face="sans-serif">I think this is also an option. When
separating the control plane and data plane, do you mean to use OpenFlow to
implement the protocol between the control plane and the data plane? I have some
concern about route convergence when implementing VRF on x86.</font>
<br>
<br><font size=2 face="sans-serif">Regards</font>
<br><font size=2 face="sans-serif">Lizhong</font>
<br><font size=1 face="sans-serif">&nbsp;</font>
<br>
<br><font size=2 face="sans-serif">Robert Raszuk &lt;robert@raszuk.net&gt;
wrote on 2012/01/04 23:17:17:<br>
<br>
&gt; Hi Lizhong,<br>
&gt; <br>
&gt; How about neither ?<br>
&gt; <br>
&gt; How about implementing VRF for control plane separation on the x86
<br>
&gt; controller out of data plane and simply instructing either host hosting
<br>
&gt; VMs or TOR or Access Switch to forward/encapsulate the packets correctly
?<br>
&gt; <br>
&gt; Cheers,<br>
&gt; R.<br>
&gt; <br>
&gt; <br>
&gt; &gt; Hi Ashish,<br>
&gt; &nbsp;&gt;<br>
&gt; &gt; If we implement VRF on access switch (or ToR), I agree there
will be<br>
&gt; &gt; scalability problem. Also the cost will also be an issue, the
access<br>
&gt; &gt; switch would be more expensive than before. How about implement
VRF on<br>
&gt; &gt; aggregation router? If the aggregation router could solve scalability<br>
&gt; &gt; and high availability problem, then we should focus on how to
setup<br>
&gt; &gt; connection between VM and aggregation router. Hope to see your
comments.<br>
&gt; &gt;<br>
&gt; &gt; Thanks<br>
&gt; &gt; Lizhong<br>
&gt; <br>
</font>
<br><pre>
--------------------------------------------------------
ZTE&nbsp;Information&nbsp;Security&nbsp;Notice:&nbsp;The&nbsp;information&nbsp;contained&nbsp;in&nbsp;this&nbsp;mail&nbsp;is&nbsp;solely&nbsp;property&nbsp;of&nbsp;the&nbsp;sender's&nbsp;organization.&nbsp;This&nbsp;mail&nbsp;communication&nbsp;is&nbsp;confidential.&nbsp;Recipients&nbsp;named&nbsp;above&nbsp;are&nbsp;obligated&nbsp;to&nbsp;maintain&nbsp;secrecy&nbsp;and&nbsp;are&nbsp;not&nbsp;permitted&nbsp;to&nbsp;disclose&nbsp;the&nbsp;contents&nbsp;of&nbsp;this&nbsp;communication&nbsp;to&nbsp;others.
This&nbsp;email&nbsp;and&nbsp;any&nbsp;files&nbsp;transmitted&nbsp;with&nbsp;it&nbsp;are&nbsp;confidential&nbsp;and&nbsp;intended&nbsp;solely&nbsp;for&nbsp;the&nbsp;use&nbsp;of&nbsp;the&nbsp;individual&nbsp;or&nbsp;entity&nbsp;to&nbsp;whom&nbsp;they&nbsp;are&nbsp;addressed.&nbsp;If&nbsp;you&nbsp;have&nbsp;received&nbsp;this&nbsp;email&nbsp;in&nbsp;error&nbsp;please&nbsp;notify&nbsp;the&nbsp;originator&nbsp;of&nbsp;the&nbsp;message.&nbsp;Any&nbsp;views&nbsp;expressed&nbsp;in&nbsp;this&nbsp;message&nbsp;are&nbsp;those&nbsp;of&nbsp;the&nbsp;individual&nbsp;sender.
This&nbsp;message&nbsp;has&nbsp;been&nbsp;scanned&nbsp;for&nbsp;viruses&nbsp;and&nbsp;Spam&nbsp;by&nbsp;ZTE&nbsp;Anti-Spam&nbsp;system.
</pre>
--=_alternative 00559D954825797C_=--


From warren@kumari.net  Thu Jan  5 07:36:33 2012
Return-Path: <warren@kumari.net>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id C262A21F8760 for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 07:36:33 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.529
X-Spam-Level: 
X-Spam-Status: No, score=-106.529 tagged_above=-999 required=5 tests=[AWL=0.070, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id Ac13Z7BXoPc6 for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 07:36:32 -0800 (PST)
Received: from vimes.kumari.net (vimes.kumari.net [198.186.192.250]) by ietfa.amsl.com (Postfix) with ESMTP id ACB9621F875F for <dc@ietf.org>; Thu,  5 Jan 2012 07:36:32 -0800 (PST)
Received: from dhcp-172-19-119-228.cbf.corp.google.com (unknown [64.13.52.115]) by vimes.kumari.net (Postfix) with ESMTPSA id 80D5D1B403ED; Thu,  5 Jan 2012 10:36:31 -0500 (EST)
Mime-Version: 1.0 (Apple Message framework v1084)
Content-Type: text/plain; charset=us-ascii
From: Warren Kumari <warren@kumari.net>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B256D4@XMB-BGL-416.cisco.com>
Date: Thu, 5 Jan 2012 10:36:29 -0500
Content-Transfer-Encoding: quoted-printable
Message-Id: <AF48CEB4-18A6-45CE-891B-ACFE599C8FB4@kumari.net>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us><7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com><618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com><CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com><CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com><4F030418.1070202@raszuk.net><618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com><4F031689.1050303@raszuk.net><618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com><7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBA5@MX14A.corp.emc.com> <CAMXVrt5g-2WCBvm0kcpFx27KG9kdPSBzeRJ20F-degvzaEhYoQ@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B256D4@XMB-BGL-416.cisco .com>
To: Ashish Dalela (adalela) <adalela@cisco.com>
X-Mailer: Apple Mail (2.1084)
Cc: Pedro Marques <pedro.r.marques@gmail.com>, david.black@emc.com, Warren Kumari <warren@kumari.net>, dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 15:36:33 -0000

On Jan 3, 2012, at 12:45 PM, Ashish Dalela (adalela) wrote:

> 
> Suppose you have an IP solution.

Sure.

> 
> To support mobility you need IP-in-IP encapsulation.

And if you do an overlay you always do an IP encapsulation (to cover
GRE, IPIP, sit, IPsec, PPP, etc).

> 
> As VM density increases, as VM-to-VM conversation grows, as interfaces
> per VM increase, the host routes increase.

No.

The only thing that the network needs to know is the routes to the
hypervisors / physical machines -- this is a solved problem.
The VM addresses and routes are only visible to the [gateways,
hypervisors with VMs in that overlay, other VMs in the same overlay,
mapping server].

For a really old overview:
http://tools.ietf.org/html/draft-wkumari-dcops-l3-vmmobility-00
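The overlay model described here can be sketched in a few lines (a hypothetical illustration, not code from the draft; all names and addresses are invented): the core routes only on hypervisor addresses, while the VM-to-hypervisor mapping lives with the overlay participants, so a migration touches only the mapping table.

```python
# Hypothetical sketch (not code from the draft): the core network routes
# only on hypervisor (outer) addresses; the VM-to-hypervisor mapping is
# held by overlay participants (edges, gateways, or a mapping server).

# VM address -> current hypervisor address (illustrative values).
vm_to_hypervisor = {
    "10.0.0.5": "192.0.2.11",
    "10.0.0.6": "192.0.2.12",
}

def encapsulate(inner_dst, payload):
    """Edge lookup: wrap a VM packet toward its current hypervisor.
    The core never needs a route for inner_dst itself."""
    outer_dst = vm_to_hypervisor[inner_dst]
    return outer_dst, payload

def migrate(vm, new_hypervisor):
    """VM mobility updates only the mapping, not core routing tables."""
    vm_to_hypervisor[vm] = new_hypervisor

outer, _ = encapsulate("10.0.0.5", b"data")
assert outer == "192.0.2.11"
migrate("10.0.0.5", "192.0.2.12")      # move the VM
outer, _ = encapsulate("10.0.0.5", b"data")
assert outer == "192.0.2.12"           # only the mapping changed
```
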

> These host routes are in addition to network routes, local host-port
> bindings, ACLs, etc. That means in addition to everything that existed
> so far.

No.


> 
> Eventually, you hit a limit on the access, and you have to reduce size
> of network, reduce VM mobility, reduce VM density per server, reduce
> application spread.
> 

No.

> The alternative is to constantly increase network hardware table sizes
> at access, which increases costs and energy.
> 

No.

> We have to realize that IP encapsulations put network and compute at
> opposite sides of the cost trend. Compute cost reduces slowly as size
> grows. Network cost grows rapidly as size grows.
> 

No.


> Thanks,
> Ashish
> 
> 
> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> Pedro Marques
> Sent: Tuesday, January 03, 2012 10:46 PM
> To: david.black@emc.com
> Cc: dc@ietf.org
> Subject: Re: [dc] [armd] IP over IP solution for data center
> interconnect
> 
> That assumes that the MAC has relevance in the network. It is possible
> to build solutions such that packets are forwarded based on their IP
> addresses rather than their MACs.
> 
>  Pedro.
> 
> On Tue, Jan 3, 2012 at 8:41 AM,  <david.black@emc.com> wrote:
>> Ashish,
>> 
>>>> [AD] The higher bits identify a switch - it's a switch-id.
>> 
>> That breaks VM migration across switches by forcing a MAC change.
>> 
>> Thanks,
>> --David
>> 
>>> -----Original Message-----
>>> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
>>> Ashish Dalela (adalela)
>>> Sent: Tuesday, January 03, 2012 11:15 AM
>>> To: robert@raszuk.net
>>> Cc: Pedro Marques; dc@ietf.org
>>> Subject: Re: [dc] [armd] IP over IP solution for data center
>>> interconnect
>>> 
>>> Robert,
>>> 
>>> Please see inline.
>>> 
>>> -----Original Message-----
>>> From: Robert Raszuk [mailto:robert@raszuk.net]
>>> Sent: Tuesday, January 03, 2012 8:24 PM
>>> To: Ashish Dalela (adalela)
>>> Cc: Pedro Marques; dc@ietf.org
>>> Subject: Re: [dc] [armd] IP over IP solution for data center
>>> interconnect
>>> 
>>> Ashish,
>>> 
>>> OK let's just discuss what is in your draft on Hierarchical
>>> Addressing.
>>> 
>>> 1. You have 48 bits 32 go for host remaining 16 goes for switches. How
>>> do you aggregate at the TOR or AGGR switch boundary ? Are you assuming
>>> single HOST - SWITCH with max 65K flat macs ?
>>> 
>>> [AD] The higher bits identify a switch - it's a switch-id. The hosts
>>> are dynamically assigned a host-id under that switch. Let's assume 23
>>> bits are for switch-id and 23 bits for host-id. To forward a packet to
>>> the host, you only have to look at the first 23 bits. That's a MAC
>>> prefix to route against.
>>> 
>>> [AD] You can have 2^23 switches in a network and 2^23 hosts under
>>> each switch.
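The 23/23 split in the quoted [AD] explanation can be illustrated with a short sketch (the field widths are the email's assumption, not a standard, and the helper names are invented):

```python
# Illustrative sketch of the hierarchical-MAC split described in the
# quoted text: a 23-bit switch-id prefix over a 23-bit host-id.
SWITCH_BITS = 23
HOST_BITS = 23

def make_hmac(switch_id, host_id):
    """Pack a switch-id and host-id into one 46-bit hierarchical address."""
    assert switch_id < 2**SWITCH_BITS and host_id < 2**HOST_BITS
    return (switch_id << HOST_BITS) | host_id

def switch_prefix(hmac):
    # Forwarding looks only at the high 23 bits, like an IP prefix,
    # so tables grow with the number of switches, not hosts.
    return hmac >> HOST_BITS

addr = make_hmac(0x1A2B, 0x42)
assert switch_prefix(addr) == 0x1A2B           # route on the switch prefix
assert addr & (2**HOST_BITS - 1) == 0x42       # host-id under that switch
```
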
>>> 
>>> 2. Can you deploy this on existing VMs and existing switches ?
>>> 
>>> [AD] What do you mean by this? Any VM can be configured with any MAC.
>>> Any physical host can be configured with any MAC on any logical
>>> interface. Configuration standpoint this is possible. Forwarding
>>> standpoint, that's another question.
>>> 
>>> 3. What new protocol you envision to use to distribute those new MACs ?
>>> 
>>> [AD] IS-IS extensions. It can be TRILL extensions.
>>> 
>>> 4. What is the advantage of using this vs ILNP if we assume that
>>> hosts should be modified ?
>>> 
>>> [AD] I'm not familiar with the ILNP work, but I'm assuming you are
>>> talking about Loc-Id separation. If not, correct me. If yes, each
>>> Loc-Id binding can be a host route, with mobility. These host-routes
>>> are a scaling problem. Traditional IP packets have IP as ID and MAC
>>> as LOC. We are just extending this LOC to make it actually location
>>> aware rather than a flat address which is fixed regardless of where
>>> the location is.
>>> 
>>> 5. The proposal does not support aggregation .. even the draft says
>>> it :)
>>> 
>>> "The total number of hardware entries anywhere in the network equals
>>> the total number of switches and remains agnostic of VM mobility."
>>> 
>>> [AD] For 1 million VM, and 25 VM per host, you need 40,000 hosts. With
>>> 48 port access switches, you need 833 switches. That's the routing
>>> table size for any switch in the datacenter - core, aggregation,
>>> access. Contrast this with host-routes, if each VM talks to 100 VMs,
>>> then each access switch needs 48 * 25 * 100 = 120,000 host routes.
>>> Just because the network prefix is 23 bits does not mean we have to
>>> store 2^23 prefixes. We have to store only as many switches as there
>>> are in the network. Ratio between VM : switch is 1000 : 1 (today,
>>> assuming 48 port access and 20 VM per port). That means instead of
>>> storing host-routes which will grow proportional to VM growth, we
>>> store switch-id, which will grow at 1000 times slower rate. As VM
>>> density increases, this growth rate is further slowed down. There are
>>> other techniques to further reduce the rate of growth. But in any
>>> case, 1000 times slower is a lot slower.
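The arithmetic in the quoted paragraph can be checked in a few lines (a quick sketch using the email's own assumptions: 1M VMs, 25 VMs per host, 48-port access switches, 100 peers per VM):

```python
# A quick check of the arithmetic in the quoted paragraph above,
# using the email's own assumptions.
vms = 1_000_000
vms_per_host = 25
ports_per_switch = 48
peers_per_vm = 100

hosts = vms // vms_per_host                  # hosts needed
switches = hosts // ports_per_switch         # access switches (~833, as quoted)
host_routes = ports_per_switch * vms_per_host * peers_per_vm

assert hosts == 40_000
assert switches == 833
assert host_routes == 120_000                # per access switch, worst case
```
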
>>> 
>>> So if I have 100K switches I can not do any aggregation and need to
>>> "route" 100K MAC addresses.
>>> 
>>> [AD] I don't know how you came to that conclusion. Think of HMAC as
>>> an IP address. Instead of 32 bits it is 46 bits. You route by
>>> prefixes in L3, and you are routing by the same prefixes here. Just
>>> as you aggregate IP, same way you aggregate MAC. It's not different.
>>> 
>>> 6. Who provides me the mapping between switch mac and host/vm mac
>>> behind such switch ? Do switches proxy arp globally within your
>>> domain ?
>>> 
>>> [AD] Variation of the same question. Above should answer it.
>>> 
>>> Thx,
>>> R.
>>> 
>>> 
>>>> Robert,
>>>> 
>>>>>> So you are advocating solution which is based on encapsulation -
>>>>>> that is fine.
>>>> 
>>>> No, I'm not. Did you read the draft I had mentioned?
>>>> Hierarchical MAC is not encapsulation. It is one 48 bit address.
>>>> 
>>>>>> However how could you ever arrive at the conclusion that HMACs
>>>>>> would scale better then "anything we know". Well I don't know
>>>>>> about you, but I know that the key to scaling is ability to
>>>>>> aggregate. And it is not that huge mystery that MACs aggregate
>>>>>> rather poorly while there are quite well deployed protocols (be it
>>>>>> IPv4 or IPv6) which aggregate natively
>>>> 
>>>> You are hitting the issue on the nail. So, read the draft I
>>>> mentioned. Hierarchical MAC is higher bits "network prefix" and
>>>> lower bits "host id". That's summarizable like IP address and
>>>> aggregated. It has 46 bits to modify so larger than IPv4 internet.
>>>> 
>>>> I won't comment on the rest, because you have made an assumption
>>>> about encapsulation.
>>>> 
>>>> I refer to this -
>>>> http://tools.ietf.org/html/draft-dalela-dc-approaches-00.
>>>> 
>>>> Thanks, Ashish
>>>> 
>>>> 
>>>> -----Original Message-----
>>>> From: Robert Raszuk [mailto:robert@raszuk.net]
>>>> Sent: Tuesday, January 03, 2012 7:05 PM
>>>> To: Ashish Dalela (adalela)
>>>> Cc: Pedro Marques; dc@ietf.org
>>>> Subject: Re: [dc] [armd] IP over IP solution for data center
>>>> interconnect
>>>> 
>>>> Ashish,
>>>> 
>>>>> The issues of scale you mentioned don't exist in Hierarchical
>>>>> MACs, which scale better than anything we know of.
>>>> 
>>>> So you are advocating solution which is based on encapsulation -
>>>> that is fine.
>>>> 
>>>> However how could you ever arrive at the conclusion that HMACs
>>>> would scale better then "anything we know". Well I don't know about
>>>> you, but I know that the key to scaling is ability to aggregate.
>>>> And it is not that huge mystery that MACs aggregate rather poorly
>>>> while there are quite well deployed protocols (be it IPv4 or IPv6)
>>>> which aggregate natively.
>>>> 
>>>> For inter-dc this is IMHO a must. A must even if you build it using
>>>> traditional routers or OF enabled switches - does not matter.
>>>> 
>>>>> I don't want to split the requirements into multiple use-cases
>>>>> because then this DC group will be many groups - one doing L2 and
>>>>> another doing L3. That I think you will agree is not optimal for
>>>>> anyone
>>>> 
>>>> Why MAC-in-IP does not solve it for everyone ? And there are
>>>> deployed solutions already ..
>>>> 
>>>> IMHO what this group should accomplish is not to try to reinvent
>>>> the world, but perhaps as example discuss where is the right
>>>> boundary of encapsulation, how should we communicate between
>>>> network and hosts, what kind of DC instrumentation should be IETF
>>>> blessed for easy integration (ie min subset of functionality it
>>>> should possess etc .... )
>>>> 
>>>> R.
>>>>=20
>>>> _______________________________________________
>>>> dc mailing list
>>>> dc@ietf.org
>>>> https://www.ietf.org/mailman/listinfo/dc


From lizhong.jin@zte.com.cn  Thu Jan  5 07:47:50 2012
Return-Path: <lizhong.jin@zte.com.cn>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 719E221F877D for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 07:47:50 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -101.759
X-Spam-Level: 
X-Spam-Status: No, score=-101.759 tagged_above=-999 required=5 tests=[AWL=0.079, BAYES_00=-2.599, HTML_MESSAGE=0.001, RCVD_DOUBLE_IP_LOOSE=0.76, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id i-KMAGX4KtES for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 07:47:48 -0800 (PST)
Received: from mx5.zte.com.cn (mx5.zte.com.cn [63.217.80.70]) by ietfa.amsl.com (Postfix) with ESMTP id 1595F21F8771 for <dc@ietf.org>; Thu,  5 Jan 2012 07:47:47 -0800 (PST)
Received: from [10.30.17.99] by mx5.zte.com.cn with surfront esmtp id 53829122734555; Thu, 5 Jan 2012 23:45:15 +0800 (CST)
Received: from [10.30.3.20] by [192.168.168.15] with StormMail ESMTP id 4315.2912176524; Thu, 5 Jan 2012 23:47:44 +0800 (CST)
Received: from notes_smtp.zte.com.cn ([10.30.1.239]) by mse01.zte.com.cn with ESMTP id q05FleSW096816; Thu, 5 Jan 2012 23:47:40 +0800 (GMT-8) (envelope-from lizhong.jin@zte.com.cn)
In-Reply-To: <mailman.1957.1325746135.3174.dc@ietf.org>
To: adalela@cisco.com, narten@us.ibm.com
MIME-Version: 1.0
X-Mailer: Lotus Notes Release 6.5.4 March 27, 2005
Message-ID: <OF8A5A382B.0E316EA4-ON4825797C.0055CB1F-4825797C.0056C3B5@zte.com.cn>
From: Lizhong Jin<lizhong.jin@zte.com.cn>
Date: Thu, 5 Jan 2012 23:47:22 +0800
X-MIMETrack: Serialize by Router on notes_smtp/zte_ltd(Release 8.5.1FP4|July 25, 2010) at 2012-01-05 23:47:42, Serialize complete at 2012-01-05 23:47:42
Content-Type: multipart/alternative; boundary="=_alternative 0056C3B54825797C_="
X-MAIL: mse01.zte.com.cn q05FleSW096816
Cc: dc@ietf.org
Subject: Re: [dc] draft-dalela-dc-requirements-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 15:47:50 -0000

This is a multipart message in MIME format.
--=_alternative 0056C3B54825797C_=
Content-Type: text/plain; charset="US-ASCII"

See inline below. Thank you.

Lizhong


> 
> >> Can this not be done today? What specific IETF work would be needed
> >> to support the enforcement of SLAs?
> 
> No, there is no work in IETF done to define SLAs. For service provider
> environments, you can define a SLA at access for a given user. When you
> have 10 VM talking to each other, and you want to guarantee a bandwidth
> SLA on a VLAN, there is nothing out there. The other fact is that in a
> multi-tenant environment, there is no guarantee that you will get 1G
> bandwidth because you have a 1G interface on the VM. Typical network
> planning takes into account the "whole" network design including what
> applications you are going to run and what bandwidths they need. That
> isn't true for cloud at least.
[Lizhong] I also agree that SLA is very important and not yet solved. 
If we use overlay technology (xxx over IP), the encapsulated tunnel 
would span the WAN, where bandwidth is severely oversubscribed. How do 
we ensure bandwidth for a high-priority tenant?
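One classical building block for the per-tenant bandwidth enforcement being discussed is a token bucket; the sketch below is a generic illustration (invented class and parameter names), not a mechanism anyone in the thread proposes:

```python
# Generic token-bucket sketch for per-tenant bandwidth enforcement
# (illustrative only; names and rates are invented for the example).
class TokenBucket:
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps          # guaranteed fill rate (bits/s)
        self.capacity = burst_bits    # maximum burst size (bits)
        self.tokens = burst_bits      # start with a full bucket
        self.last = 0.0               # time of the last refill

    def allow(self, packet_bits, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True
        return False                  # exceeds the tenant's SLA: drop or mark

bucket = TokenBucket(rate_bps=1e6, burst_bits=8000)   # 1 Mb/s, 1 KB burst
assert bucket.allow(8000, now=0.0)       # the burst fits
assert not bucket.allow(8000, now=0.0)   # bucket now empty
assert bucket.allow(8000, now=0.008)     # refilled after 8 ms at 1 Mb/s
```
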

> 
> After we are done discussing these, we can discuss what specific
> modifications we need to clarify the problem statement further, since it
> is obvious from the email that not all things may be obvious.
> 
> Thanks, Ashish
> 
> 
> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> Thomas Narten
> Sent: Wednesday, January 04, 2012 11:29 PM
> To: Ashish Dalela (adalela)
> Cc: dc@ietf.org
> Subject: [dc] draft-dalela-dc-requirements-00.txt
> 
> Hi Ashish.
> 
> I had a look at this document as it is focused on requirements. Thanks
> for doing this.
> 
> One starting comment, as the document says:
> 
> >    Scalability hasn't generally been a standards consideration and the
> >    problems of scaling are left to implementation. But, in the case of
> >    cloud datacenters, scaling is the basic requirement, and all
> >    problems of cloud datacenters arise due to scaling. The solution
> >    development can't therefore ignore the scaling and optimality problem.
> 
> I disagree with the above. Scalability has always been one (of many)
> factors that goes into development of a standard. Let's just take it
> as a given that any solution has to scale adequately for the
> environment in which it is to be deployed. Saying more than that (in
> general terms) is probably not a useful discussion. To talk about
> scalability, one has to talk about a specific technology and where it
> is or will be deployed.
> 
> Looking at Section 5, where the main requirements are listed:
> 
> >    5.1. The Basic Forwarding Problem
> > 
> >    Traditionally, datacenter networks have used L2 or L3 technologies.
> >    The need to massively scale virtualized hosts breaks both these
> >    approaches. L2 networks can't be made to scale because of high
> >    number of broadcasts. L3 networks can't support host mobility,
> >    since routing uses subnets and an IP cannot be moved out of that
> >    subnet. Moving IP in a natively L3 network requires installing
> >    host routes at one or more points in the path and that is an
> >    approach that can't be scaled.
> 
> I suspect there is general agreement that the above is a general
> "problem". Having one big flat L2 in a data center is great for VM
> migration and placement of services "any place, anytime", but can
> raise scaling and other concerns. Pushing L3 all the way out to the
> edges (e.g., ToR or Hypervisor) makes it hard to place (or move)
> services/VMs arbitrarily.
> 
> The above is one of the motivations behind the NVO3 work.
> 
> > 5.2. The Datacenter Inter-Connectivity Problem
> > 
> >    There are limits to how much a datacenter would be scaled.
> >    Workloads need to be placed closer to the clients to reduce
> >    latency and bandwidth. Hence, datacenters need to be split into
> >    geographical locations and connected over the Internet. Some of
> >    these datacenters may be owned by different administrators, as in
> >    the case of private and public cloud interconnectivity. Workloads
> >    can move between these datacenters, similar to how they move
> >    within the datacenter.
> 
> In this section, my takeaway is that there will be multiple,
> geographically separated data centers. And that they will need to be
> connected together. I suspect everyone agrees with that.
> 
> But I don't see how this implies there is any specific IETF work that
> needs doing. We already have geographically separated data centers, and
> there are, e.g., plenty of VPN technologies available for connecting
> them together.
> 
> What specifically is missing that prevents the above from being done
> today? What is it that you think needs doing that can't be done with
> existing standards?
> 
> > 5.3. The Multi-Tenancy Problem
> > 
> >    Datacenters thus far have been wholly used by single tenant. To
> >    separate departments within a tenant, VLANs have been used. This
> >    seemed sufficient for the number of segments an enterprise would
> >    need. But, this approach can't be extended to cloud datacenters.
> 
> I suspect you'll get a lot of agreement on this. And one of the key
> aims of NVO3 is to address this.
> 
> Is the existing NVO3 approach not adequate for the above? If so why
> not?
> 
> > 5.4. The Technology-Topology Separation Problem
> > 
> >    While large datacenters are becoming common, medium and small
> >    datacenters will continue to exist. These may include a branch
> >    office connected to a central office, or a small enterprise
> >    datacenter that is connected to a huge public cloud. To move
> >    workloads across these networks, the technologies used in the
> >    datacenter must be agnostic of the topology employed in the
> >    various sized datacenters.
> 
> >    A small datacenter may use a mesh topology. A medium datacenter
> >    may use a three-tier topology. And a large datacenter may use a
> >    two-tier multi-path architecture. It has to be recognized that all
> >    these datacenters of various sizes need to interoperate. In
> >    particular, it should be possible to use a common technology to
> >    connect large and small datacenters, two large datacenters, or two
> >    small datacenters.
> 
> Isn't this already possible, and indeed, happening today?
> 
> What IETF work is needed? What standards gap needs filling?
> 
> >    5.5. The Network Convergence Problem
> > 
> >    Cloud datacenters will be characterized by elasticity. That means
> >    that virtual resources are constantly created and destroyed.
> >    Typical hardware and software reliabilities of today mean that
> >    failures at scale will be fairly common, and automated recovery
> >    mechanisms will need to be put in place. When combined with
> >    workload mobility for the sake of resource optimization and
> >    improving utilization, the churn in the network forwarding tables
> >    can be very significant.
> 
> What work does the above imply that the IETF needs to do?
> 
> >    Mobility also affects virtualized network devices, such as virtual
> >    switches, firewalls, load-balancers, etc. For instance, when a
> >    server fails and all the VMs are relocated, the associated virtual
> >    switch and firewall must also be relocated. This means that any
> >    assumption in mobility that the network is a static firmament on
> >    which hosts are dynamically attached becomes false. We have to
> >    assume that the network is as dynamic as the hosts themselves.
> 
> This here is interesting. The implication is that when moving a VM,
> either
> 
> a) a FW or LB (or both) may also have to be moved, or
> 
> b) some sort of path enforcement is needed that ensures traffic from
> the (now moved) VM continues to go through the same LB or FW as
> before.
> 
> Do I understand that correctly? And if so, what is the IETF work that
> needs to be done to make all this happen?
> 
> >  5.6. The East-West Traffic Problem
> 
> Is this section saying anything more than there is a need for
> multipathing for East West traffic?
> 
> > 5.7. The Network SLA Problem
> > 
> >    Multi-tenant networks need to protect all tenants from overusing
> >    network resources. For example, high-traffic load from one tenant
> >    should not starve another tenant of bandwidth. Note that in a
> >    multi-tenant environment, no tenant has full control or visibility
> >    of what other tenants are doing, and how problems can be fixed. A
> >    real-time debugging of such problems is very hard for a provider.
> 
> ...
> 
> >    Second, mechanisms to measure and guarantee network SLAs will
> >    have to employ active flow management to guarantee bandwidth to
> >    all tenants and keep the network provisioned only to the level
> >    required. Flow management can be integrated as part of existing
> >    forwarding techniques or may need new techniques. Network SLAs can
> >    play an important role in determining if sufficient bandwidth is
> >    available before a VM is moved to a new location.
> 
> Can this not be done today? What specific IETF work would be needed to
> support the enforcement of SLAs?
> 
> Thomas
> 
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc


--------------------------------------------------------
ZTE Information Security Notice: The information contained in this mail is solely property of the sender's organization. This mail communication is confidential. Recipients named above are obligated to maintain secrecy and are not permitted to disclose the contents of this communication to others.
This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the originator of the message. Any views expressed in this message are those of the individual sender.
This message has been scanned for viruses and Spam by ZTE Anti-Spam system.

that failures at<br>
&gt; &gt; &nbsp; &nbsp;scale will be fairly common, and automated recovery
mechanisms will<br>
&gt; &gt; &nbsp; &nbsp;need to be put in place. When combined with workload
mobility for<br>
&gt; the<br>
&gt; &gt; &nbsp; &nbsp;sake of resource optimization and improving utilization,
the churn<br>
&gt; in<br>
&gt; &gt; &nbsp; &nbsp;the network forwarding tables can be very significant.<br>
&gt; <br>
&gt; WHat work does the above imply that the IETF needs to do ?<br>
&gt; <br>
&gt; &gt; &nbsp; &nbsp;Mobility also affects virtualized network devices,
such as virtual<br>
&gt; &gt; &nbsp; &nbsp;switches, firewalls, load-balancers, etc. For instance,
when a<br>
&gt; server<br>
&gt; &gt; &nbsp; &nbsp;fails and all the VMs are relocated, the associated
virtual switch<br>
&gt; &gt; &nbsp; &nbsp;and firewall must also be relocated. This means
that any assumption<br>
&gt; &gt; &nbsp; &nbsp;in mobility that the network is a static firmament
on which hosts<br>
&gt; are<br>
&gt; &gt; &nbsp; &nbsp;dynamically attached becomes false. We have to assume
that the<br>
&gt; &gt; &nbsp; &nbsp;network is as dynamic as the hosts themselves.<br>
&gt; <br>
&gt; This here is interesting. The implication is that when moving a VM,<br>
&gt; either<br>
&gt; <br>
&gt; a) a FW or LB (or both) may also have to be moved, or<br>
&gt; <br>
&gt; b) some sort of path enforcement is needed that insures traffic from<br>
&gt; the (now moved) VM continues to go through the same LB or FW as<br>
&gt; before.<br>
&gt; <br>
&gt; Do I understand that correctly? And if so, what is the IETF work that<br>
&gt; needs to be done to make all this happen?<br>
&gt; <br>
&gt; &gt; &nbsp;5.6. The East-West Traffic Problem<br>
&gt; <br>
&gt; Is this section saying anything more than there is a need for<br>
&gt; multipathing for East West traffic?<br>
&gt; <br>
&gt; &gt; 5.7. The Network SLA Problem<br>
&gt; &gt; <br>
&gt; &gt; &nbsp; &nbsp;Multi-tenant networks need to protect all tenants
from overusing<br>
&gt; &gt; &nbsp; &nbsp;network resources. For example, high-traffic load
from one tenant<br>
&gt; &gt; &nbsp; &nbsp;should not starve another tenant of bandwidth. Note
that in a<br>
&gt; multi-<br>
&gt; &gt; &nbsp; &nbsp;tenant environment, no tenant has full control or
visibility of<br>
&gt; what<br>
&gt; &gt; &nbsp; &nbsp;other tenants are doing, and how problems can be
fixed. A real-time<br>
&gt; &gt; &nbsp; &nbsp;debugging of such problems is very hard for a provider.<br>
&gt; <br>
&gt; ...<br>
&gt; <br>
&gt; &gt; &nbsp; &nbsp;Second, mechanisms to measure and guarantee network
SLAs will have<br>
&gt; to<br>
&gt; &gt; &nbsp; &nbsp;employ active flow management to guarantee bandwidth
to all tenants<br>
&gt; &gt; &nbsp; &nbsp;and keep the network provisioned only to the level
required. Flow<br>
&gt; &gt; &nbsp; &nbsp;management can be integrated as part of existing
forwarding<br>
&gt; &gt; &nbsp; &nbsp;techniques or may need new techniques. Network SLAs
can play an<br>
&gt; &gt; &nbsp; &nbsp;important role in determining if sufficient bandwidth
is available<br>
&gt; &gt; &nbsp; &nbsp;before a VM is moved to a new location.<br>
&gt; <br>
&gt; Can this not be done today? What specific IETF work would be needed
to<br>
&gt; support the enforcement of SLAs?<br>
&gt; <br>
&gt; Thomas<br>
&gt; <br>
&gt; _______________________________________________<br>
&gt; dc mailing list<br>
&gt; dc@ietf.org<br>
&gt; https://www.ietf.org/mailman/listinfo/dc<br>
&gt; <br>
&gt; _______________________________________________<br>
&gt; dc mailing list<br>
&gt; dc@ietf.org<br>
&gt; https://www.ietf.org/mailman/listinfo/dc<br>
</font><br><pre>
--------------------------------------------------------
ZTE&nbsp;Information&nbsp;Security&nbsp;Notice:&nbsp;The&nbsp;information&nbsp;contained&nbsp;in&nbsp;this&nbsp;mail&nbsp;is&nbsp;solely&nbsp;property&nbsp;of&nbsp;the&nbsp;sender's&nbsp;organization.&nbsp;This&nbsp;mail&nbsp;communication&nbsp;is&nbsp;confidential.&nbsp;Recipients&nbsp;named&nbsp;above&nbsp;are&nbsp;obligated&nbsp;to&nbsp;maintain&nbsp;secrecy&nbsp;and&nbsp;are&nbsp;not&nbsp;permitted&nbsp;to&nbsp;disclose&nbsp;the&nbsp;contents&nbsp;of&nbsp;this&nbsp;communication&nbsp;to&nbsp;others.
This&nbsp;email&nbsp;and&nbsp;any&nbsp;files&nbsp;transmitted&nbsp;with&nbsp;it&nbsp;are&nbsp;confidential&nbsp;and&nbsp;intended&nbsp;solely&nbsp;for&nbsp;the&nbsp;use&nbsp;of&nbsp;the&nbsp;individual&nbsp;or&nbsp;entity&nbsp;to&nbsp;whom&nbsp;they&nbsp;are&nbsp;addressed.&nbsp;If&nbsp;you&nbsp;have&nbsp;received&nbsp;this&nbsp;email&nbsp;in&nbsp;error&nbsp;please&nbsp;notify&nbsp;the&nbsp;originator&nbsp;of&nbsp;the&nbsp;message.&nbsp;Any&nbsp;views&nbsp;expressed&nbsp;in&nbsp;this&nbsp;message&nbsp;are&nbsp;those&nbsp;of&nbsp;the&nbsp;individual&nbsp;sender.
This&nbsp;message&nbsp;has&nbsp;been&nbsp;scanned&nbsp;for&nbsp;viruses&nbsp;and&nbsp;Spam&nbsp;by&nbsp;ZTE&nbsp;Anti-Spam&nbsp;system.
</pre>
--=_alternative 0056C3B54825797C_=--


From Peter.AshwoodSmith@huawei.com  Thu Jan  5 07:57:29 2012
Return-Path: <Peter.AshwoodSmith@huawei.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 0F2DB21F8575 for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 07:57:29 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.299
X-Spam-Level: 
X-Spam-Status: No, score=-2.299 tagged_above=-999 required=5 tests=[AWL=0.300,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 0Vq3d1wCLqzx for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 07:57:28 -0800 (PST)
Received: from dfwrgout.huawei.com (dfwrgout.huawei.com [206.16.17.72]) by ietfa.amsl.com (Postfix) with ESMTP id 7C38B21F856B for <dc@ietf.org>; Thu,  5 Jan 2012 07:57:28 -0800 (PST)
Received: from 172.18.9.243 (EHLO dfweml201-edg.china.huawei.com) ([172.18.9.243]) by dfwrg02-dlp.huawei.com (MOS 4.2.3-GA FastPath) with ESMTP id ACC04298; Thu, 05 Jan 2012 10:57:28 -0500 (EST)
Received: from DFWEML404-HUB.china.huawei.com (10.193.5.203) by dfweml201-edg.china.huawei.com (172.18.9.107) with Microsoft SMTP Server (TLS) id 14.1.323.3; Thu, 5 Jan 2012 07:56:32 -0800
Received: from DFWEML504-MBX.china.huawei.com ([10.124.31.30]) by dfweml404-hub.china.huawei.com ([10.193.5.203]) with mapi id 14.01.0323.003; Thu, 5 Jan 2012 07:56:27 -0800
From: AshwoodsmithPeter <Peter.AshwoodSmith@huawei.com>
To: Warren Kumari <warren@kumari.net>, Ashish Dalela <adalela@cisco.com>
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AQHMwUCIzWj2gT+Uy0Gb7K5wAOUnlpXvwNeAgAPPugCABOT+AIAAk6IAgAAP/4CAAAhIAIAAzXsAgAEtLYCAACCOAIAABvAAgAAPC4CAABargIAAB2QAgAAJtwCAAAgegIADAKOA//98X2A=
Date: Thu, 5 Jan 2012 15:56:27 +0000
Message-ID: <7AE6A4247B044C4ABE0A5B6BF427F8E28FA504@dfweml504-mbx>
In-Reply-To: <AF48CEB4-18A6-45CE-891B-ACFE599C8FB4@kumari.net>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.193.60.86]
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-CFilter-Loop: Reflected
Cc: Pedro Marques <pedro.r.marques@gmail.com>, "david.black@emc.com" <david.black@emc.com>, "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 15:57:29 -0000

One thing that I've not seen discussed here is multicast state.

Clearly we can encapsulate unicast X in unicast Y and forward only on Y in
the core, thus eliminating knowledge of X, with considerable savings. We've
been doing so for years for various X's and Y's.

However, if we wish to deal with multicast X in an efficient manner, it
complicates the problem and requires knowledge of, or a mapping of, X
multicast into Y.

What are people's thoughts on the requirement for efficient multicast
(i.e. state in Y) vs. emulated (serial unicast, hence no state in Y)
multicast solutions?
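The trade-off raised here can be sketched roughly as follows. This is a hypothetical model, not taken from any draft; all names are illustrative. It contrasts "efficient" delivery, where the core holds state for a mapped underlay group, with "emulated" delivery, where the ingress replicates as serial unicast and the core holds no multicast state.

```python
# Hypothetical sketch of the two delivery modes discussed above.

def deliver_underlay_multicast(pkt, tenant_group, group_map, send):
    # One copy enters the core; the core replicates it, so it must
    # hold forwarding state for the mapped underlay group Y.
    send(pkt, group_map[tenant_group])

def deliver_ingress_replication(pkt, tenant_group, members, send):
    # N copies leave the ingress, one per egress tunnel endpoint;
    # the core sees only unicast and keeps no multicast state.
    for egress in members[tenant_group]:
        send(pkt, egress)
```

The cost difference is visible directly: the first mode emits one packet per group regardless of fan-out, the second emits one packet per member on the first hop.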

Peter Ashwood-Smith



From adalela@cisco.com  Thu Jan  5 08:00:40 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 7E5D121F8809 for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 08:00:40 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -4.344
X-Spam-Level: 
X-Spam-Status: No, score=-4.344 tagged_above=-999 required=5 tests=[AWL=2.255,  BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id X-Y7dCA3SXnN for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 08:00:39 -0800 (PST)
Received: from mtv-iport-3.cisco.com (mtv-iport-3.cisco.com [173.36.130.14]) by ietfa.amsl.com (Postfix) with ESMTP id 7353C21F8752 for <dc@ietf.org>; Thu,  5 Jan 2012 08:00:39 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=11915; q=dns/txt; s=iport; t=1325779239; x=1326988839; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=JiK0KOSpk6vtUjDlL15qbnlTXvZrCEixPOFSJlnIO/w=; b=TkpfcFOSNU5R7fts2nZ23ctBLVeli8Kb7xf0MTt7x+OmgH8HtxtlAP98 Dw4krvPfkF7Mg1p/huBJpJz97ffW8d+rnUMdgSmvLB5G3KpMMyA0l64qf nCZ+PnvWSnN9JT/NRnKIV9bPF8rKu0sxkTpALVbXQ3HqdZb9kr5NmNBNS Y=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AgAFAAzIBU+rRDoG/2dsb2JhbAA4Cqx+gQWBcgEBAQMBAQEBDwEdCjQLBQcEAgEIEQQBAQEKBhcBBgEmHwkIAQEECwgIEweHWAiXXgGeDIhWglhjBIg3nww
X-IronPort-AV: E=Sophos;i="4.71,462,1320624000"; d="scan'208";a="23979278"
Received: from mtv-core-1.cisco.com ([171.68.58.6]) by mtv-iport-3.cisco.com with ESMTP; 05 Jan 2012 16:00:39 +0000
Received: from xbh-bgl-412.cisco.com (xbh-bgl-412.cisco.com [72.163.129.202]) by mtv-core-1.cisco.com (8.14.3/8.14.3) with ESMTP id q05G0bkU023568; Thu, 5 Jan 2012 16:00:38 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-412.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Thu, 5 Jan 2012 21:30:37 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Thu, 5 Jan 2012 21:30:32 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B25C5A@XMB-BGL-416.cisco.com>
In-Reply-To: <AF48CEB4-18A6-45CE-891B-ACFE599C8FB4@kumari.net>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczLv8/i14m93uVUT1SkyOUfB6pYjgAAhaTA
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us><7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com><618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com><CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com><CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com><4F030418.1070202@raszuk.net><618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com><4F031689.1050303@raszuk.net><618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com><7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBA5@MX14A.corp.emc.com> <CAMXVrt5g-2WCBvm0kcpFx27KG9kdPSBzeRJ20F-degvzaEhYoQ@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B256D4@XMB-BGL-416.cisco.com> <AF48CEB4-18A6-45CE-891B-ACFE599C8FB4@kumari.net>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Warren Kumari" <warren@kumari.net>
X-OriginalArrivalTime: 05 Jan 2012 16:00:37.0115 (UTC) FILETIME=[2A084CB0:01CCCBC3]
Cc: Pedro Marques <pedro.r.marques@gmail.com>, david.black@emc.com, dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 16:00:40 -0000

Hi Warren,

You are advocating a hypervisor-based solution, and table scaling
issues don't arise there (or at least are not obvious) because it all
happens in software. However, there are other issues. For example, how
do you implement broadcast and multicast? The standard mechanism today
is to map a broadcast domain to a multicast group in L3. Now, what
happens if some rogue user sends an IGMP join to that group? What was
on the VLAN is now accessible to everyone through an IGMP join. For
user-level multicast, there are other issues. Assume I'm running a VDI
cloud, where users need to join multicast video conferencing. The group
is user-determined, not admin-determined. How do we know that the user
is not joining a VLAN-mapped multicast group?
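The exposure described here can be illustrated with a small sketch. Everything in it is hypothetical (the mapping, the membership table, and the check); it only shows why an edge-side authorization check is needed when a tenant broadcast domain rides on an underlay multicast group.

```python
# Illustrative sketch: a tenant VLAN's broadcast traffic is carried on
# an underlay multicast group. Without an edge check, any host that
# sends an IGMP join for that group would receive the tenant's
# broadcast domain. All names here are made up for illustration.

VLAN_TO_GROUP = {100: "239.1.1.100"}             # admin-chosen mapping
GROUP_MEMBERS = {"239.1.1.100": {"hv1", "hv2"}}  # hosts with VLAN 100 workloads

def allow_join(host, group):
    # Admit the join only if the host actually hosts a workload in the
    # VLAN mapped to this group; a rogue join is refused at the edge.
    return host in GROUP_MEMBERS.get(group, set())
```

A join from `hv1` for `239.1.1.100` is admitted; a join from any other host for the same group is refused, which is precisely the control that a bare IGMP join lacks.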

We need to keep a complete set of goals in mind. Otherwise, we can solve
an issue and miss a goal, e.g. multicast and broadcast.

Thanks, Ashish


-----Original Message-----
From: Warren Kumari [mailto:warren@kumari.net]=20
Sent: Thursday, January 05, 2012 9:06 PM
To: Ashish Dalela (adalela)
Cc: Warren Kumari; Pedro Marques; david.black@emc.com; dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center
interconnect


On Jan 3, 2012, at 12:45 PM, Ashish Dalela (adalela) wrote:

> 
> Suppose you have an IP solution.

Sure.

> 
> To support mobility you need IP-in-IP encapsulation.

And if you do an overlay you always do an IP encapsulation (to cover GRE,
IPIP, sit, IPsec, PPP, etc.).

> 
> As VM density increases, as VM-to-VM conversation grows, as interfaces
> per VM increase, the host routes increase.

No.

The only thing that the network needs to know is the routes to the
hypervisors / physical machines -- this is a solved problem.
The VM addresses and routes are only visible to the [gateways,
hypervisors with VMs in that overlay, other VMs in the same overlay,
mapping server].

For a really old overview:
http://tools.ietf.org/html/draft-wkumari-dcops-l3-vmmobility-00
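The overlay model described above can be sketched as a two-level lookup. The structures below are hypothetical and only illustrate the claim: the underlay carries routes to hypervisors, while per-VM state lives in a mapping consulted only by overlay participants.

```python
# Rough sketch (illustrative names only): the core network knows only
# how to reach hypervisors; (overlay, VM address) -> hypervisor state
# is held by the mapping service / overlay participants, not the core.

UNDERLAY_ROUTES = {"hv1": "10.0.1.1", "hv2": "10.0.2.1"}  # solved problem
VM_LOCATION = {("tenant-a", "192.168.0.5"): "hv2"}        # overlay state

def encapsulate(overlay, inner_dst, payload):
    # Find which hypervisor currently hosts the VM, then tunnel to it.
    # The core forwards only on the outer (hypervisor) address.
    hv = VM_LOCATION[(overlay, inner_dst)]
    return {"outer_dst": UNDERLAY_ROUTES[hv],
            "inner_dst": inner_dst,
            "payload": payload}
```

Moving a VM updates only `VM_LOCATION`; nothing in `UNDERLAY_ROUTES`, and hence nothing in the core, changes.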

> These host routes are in addition to network routes, local host-port
> bindings, ACLs, etc. That means in addition to everything that existed
> so far.

No.


> 
> Eventually, you hit a limit on the access, and you have to reduce the
> size of the network, reduce VM mobility, reduce VM density per server,
> and reduce application spread.
> 

No.

> The alternative is to constantly increase network hardware table sizes
> at access, which increases costs and energy.
> 

No.

> We have to realize that IP encapsulations put network and compute at
> opposite sides of the cost trend. Compute cost reduces slowly as size
> grows. Network cost grows rapidly as size grows.
> 

No.



> Thanks,
> Ashish
> 
> 
> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> Pedro Marques
> Sent: Tuesday, January 03, 2012 10:46 PM
> To: david.black@emc.com
> Cc: dc@ietf.org
> Subject: Re: [dc] [armd] IP over IP solution for data center
> interconnect
> 
> That assumes that the MAC has relevance in the network. It is possible
> to build solutions such that packets are forwarded based on their IP
> addresses rather than their MACs.
> 
>  Pedro.
> 
> On Tue, Jan 3, 2012 at 8:41 AM,  <david.black@emc.com> wrote:
>> Ashish,
>> 
>>>> [AD] The higher bits identify a switch - it's a switch-id.
>> 
>> That breaks VM migration across switches by forcing a MAC change.
>> 
>> Thanks,
>> --David
>> 
>>> -----Original Message-----
>>> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
>>> Ashish Dalela (adalela)
>>> Sent: Tuesday, January 03, 2012 11:15 AM
>>> To: robert@raszuk.net
>>> Cc: Pedro Marques; dc@ietf.org
>>> Subject: Re: [dc] [armd] IP over IP solution for data center
>>> interconnect
>>> 
>>> Robert,
>>> 
>>> Please see inline.
>>> 
>>> -----Original Message-----
>>> From: Robert Raszuk [mailto:robert@raszuk.net]
>>> Sent: Tuesday, January 03, 2012 8:24 PM
>>> To: Ashish Dalela (adalela)
>>> Cc: Pedro Marques; dc@ietf.org
>>> Subject: Re: [dc] [armd] IP over IP solution for data center
>>> interconnect
>>> 
>>> Ashish,
>>> 
>>> OK, let's just discuss what is in your draft on Hierarchical
>>> Addressing.
>>> 
>>> 1. You have 48 bits; 32 go for the host, and the remaining 16 go for
>>> switches. How do you aggregate at the TOR or AGGR switch boundary?
>>> Are you assuming a single HOST - SWITCH with max 65K flat MACs?
>>> 
>>> [AD] The higher bits identify a switch - it's a switch-id. The hosts
>>> are dynamically assigned a host-id under that switch. Let's assume 23
>>> bits are for switch-id and 23 bits for host-id. To forward a packet
>>> to the host, you only have to look at the first 23 bits. That's a MAC
>>> prefix to route against.
>>> 
>>> [AD] You can have 2^23 switches in a network and 2^23 hosts under
>>> each switch.
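The 23/23 split described above can be shown with a minimal sketch. It assumes, as the thread does, 46 usable bits split evenly between a switch-id prefix and a host-id; the layout and names are illustrative, not taken from the draft.

```python
# Minimal sketch of a hierarchical MAC: high bits are a switch-id
# prefix, low bits a host-id assigned under that switch. Core switches
# forward on the prefix alone, just like routing on an IP prefix.

SWITCH_BITS = 23
HOST_BITS = 23

def make_hmac(switch_id, host_id):
    # Pack switch-id and host-id into one 46-bit address.
    assert switch_id < 2**SWITCH_BITS and host_id < 2**HOST_BITS
    return (switch_id << HOST_BITS) | host_id

def switch_prefix(hmac):
    # The only field a core switch needs to match on.
    return hmac >> HOST_BITS

hmac = make_hmac(0x000842, 0x000017)
assert switch_prefix(hmac) == 0x000842
```

Any host-id under the same switch yields the same prefix, which is why the forwarding table needs one entry per switch rather than one per host or VM.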
>>> 
>>> 2. Can you deploy this on existing VMs and existing switches?
>>> 
>>> [AD] What do you mean by this? Any VM can be configured with any MAC.
>>> Any physical host can be configured with any MAC on any logical
>>> interface. From a configuration standpoint this is possible. From a
>>> forwarding standpoint, that's another question.
>>> 
>>> 3. What new protocol do you envision to use to distribute those new
>>> MACs?
>>> 
>>> [AD] IS-IS extensions. It can be TRILL extensions.
>>> 
>>> 4. What is the advantage of using this vs. ILNP if we assume that
>>> hosts should be modified?
>>> 
>>> [AD] I'm not familiar with the ILNP work, but I'm assuming you are
>>> talking about Loc-Id separation. If not, correct me. If yes, each
>>> Loc-Id binding can be a host route, with mobility. These host routes
>>> are a scaling problem. Traditional IP packets have IP as ID and MAC
>>> as LOC. We are just extending this LOC to make it actually location
>>> aware rather than a flat address which is fixed regardless of where
>>> the location is.
>>> 
>>> 5. The proposal does not support aggregation .. even the draft says
>>> it :)
>>> 
>>> "The total number of hardware entries anywhere in the network equals
>>> the total number of switches and remains agnostic of VM mobility."
>>> 
>>> [AD] For 1 million VMs, and 25 VMs per host, you need 40,000 hosts.
>>> With 48-port access switches, you need 833 switches. That's the
>>> routing table size for any switch in the datacenter - core,
>>> aggregation, access. Contrast this with host routes: if each VM talks
>>> to 100 VMs, then each access switch needs 48 * 25 * 100 = 120,000
>>> host routes. Just because the network prefix is 23 bits does not mean
>>> we have to store 2^23 prefixes. We have to store only as many
>>> switches as there are in the network. The ratio between VM : switch
>>> is 1000 : 1 (today, assuming 48-port access and 20 VMs per port).
>>> That means instead of storing host routes, which will grow
>>> proportional to VM growth, we store switch-ids, which will grow at a
>>> 1000 times slower rate. As VM density increases, this growth rate is
>>> further slowed down. There are other techniques to further reduce the
>>> rate of growth. But in any case, 1000 times slower is a lot slower.
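The figures used in the argument above can be re-derived directly from the stated assumptions (1 million VMs, 25 VMs per host, 48-port access switches, each VM talking to 100 peers):

```python
# Re-deriving the table-size figures quoted above from the email's own
# assumptions. Names are illustrative; the arithmetic is the point.

VMS = 1_000_000
VMS_PER_HOST = 25
PORTS_PER_SWITCH = 48
PEERS_PER_VM = 100

hosts = VMS // VMS_PER_HOST           # 40,000 physical hosts
switches = hosts // PORTS_PER_SWITCH  # ~833 fully loaded access switches

# Hierarchical-MAC model: one table entry per switch, everywhere.
hmac_entries = switches

# Host-route alternative at a single fully loaded access switch.
host_routes = PORTS_PER_SWITCH * VMS_PER_HOST * PEERS_PER_VM  # 120,000
```

The contrast is the claimed ~1000:1 ratio: roughly 833 entries per switch in the hierarchical model versus 120,000 host routes at each access switch.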
>>> 
>>> So if I have 100K switches I cannot do any aggregation and need to
>>> "route" 100K MAC addresses.
>>> 
>>> [AD] I don't know how you came to that conclusion. Think of an HMAC
>>> as an IP address. Instead of 32 bits it is 46 bits. You route by
>>> prefixes in L3, and you are routing by the same prefixes here. Just
>>> as you aggregate IP, the same way you aggregate MAC. It's not
>>> different.
>>> 
>>> 6. Who provides me the mapping between the switch MAC and the host/VM
>>> MAC behind such a switch? Do switches proxy-ARP globally within your
>>> domain?
>>> 
>>> [AD] Variation of the same question. The above should answer it.
>>> 
>>> Thx,
>>> R.
>>> 
>>> 
>>>> Robert,
>>>> 
>>>>>> So you are advocating a solution which is based on encapsulation -
>>>>>> that is fine.
>>>> 
>>>> No, I'm not. Did you read the draft I had mentioned?
>>>> Hierarchical MAC is not encapsulation. It is one 48-bit address.
>>>> 
>>>>>> However, how could you ever arrive at the conclusion that HMACs
>>>>>> would scale better than "anything we know"? Well, I don't know
>>>>>> about you, but I know that the key to scaling is the ability to
>>>>>> aggregate. And it is not that huge a mystery that MACs aggregate
>>>>>> rather poorly, while there are quite well deployed protocols (be
>>>>>> it IPv4 or IPv6) which aggregate natively.
>>>> 
>>>> You are hitting the nail on the head. So, read the draft I
>>>> mentioned. Hierarchical MAC is higher bits "network prefix" and
>>>> lower bits "host id".
>>>> That's summarizable like an IP address and aggregated.
>>>> It has 46 bits to modify, so it is larger than the IPv4 internet.
>>>> 
>>>> I won't comment on the rest, because you have made an assumption
>>>> about encapsulation.
>>>> 
>>>> I refer to this -
>>>> http://tools.ietf.org/html/draft-dalela-dc-approaches-00.
>>>> 
>>>> Thanks, Ashish
>>>> 
>>>> 
>>>> -----Original Message-----
>>>> From: Robert Raszuk [mailto:robert@raszuk.net]
>>>> Sent: Tuesday, January 03, 2012 7:05 PM
>>>> To: Ashish Dalela (adalela)
>>>> Cc: Pedro Marques; dc@ietf.org
>>>> Subject: Re: [dc] [armd] IP over IP solution for data center
>>>> interconnect
>>>> 
>>>> Ashish,
>>>> 
>>>>> The issues of scale you mentioned don't exist in Hierarchical MACs,
>>>>> which scale better than anything we know of.
>>>> 
>>>> So you are advocating a solution which is based on encapsulation -
>>>> that is fine.
>>>> 
>>>> However, how could you ever arrive at the conclusion that HMACs
>>>> would scale better than "anything we know"? Well, I don't know about
>>>> you, but I know that the key to scaling is the ability to aggregate.
>>>> And it is not that huge a mystery that MACs aggregate rather poorly,
>>>> while there are quite well deployed protocols (be it IPv4 or IPv6)
>>>> which aggregate natively.
>>>> 
>>>> For inter-DC this is IMHO a must. A must even if you build it using
>>>> traditional routers or OF-enabled switches - it does not matter.
>>>> 
>>>>> I don't want to split the requirements into multiple use-cases,
>>>>> because then this DC group will be many groups - one doing L2 and
>>>>> another doing L3. That, I think you will agree, is not optimal for
>>>>> anyone.
>>>> 
>>>> Why does MAC-in-IP not solve it for everyone? And there are deployed
>>>> solutions already ..
>>>> 
>>>> IMHO what this group should accomplish is not to try to reinvent the
>>>> world, but perhaps as an example discuss where the right boundary of
>>>> encapsulation is, how we should communicate between network and
>>>> hosts, and what kind of DC instrumentation should be IETF-blessed
>>>> for easy integration (i.e. the minimum subset of functionality it
>>>> should possess, etc.).
>>>> 
>>>> R.
>>>> 
>>>> _______________________________________________
>>>> dc mailing list
>>>> dc@ietf.org
>>>> https://www.ietf.org/mailman/listinfo/dc
>>>> 
>>>> 
>>> 
>>> _______________________________________________
>>> dc mailing list
>>> dc@ietf.org
>>> https://www.ietf.org/mailman/listinfo/dc
>> 
>> _______________________________________________
>> dc mailing list
>> dc@ietf.org
>> https://www.ietf.org/mailman/listinfo/dc
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc
> 


From narten@us.ibm.com  Thu Jan  5 08:12:38 2012
Return-Path: <narten@us.ibm.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 02F5321F85BF for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 08:12:38 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.599
X-Spam-Level: 
X-Spam-Status: No, score=-106.599 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 6-4PidjqZ1T7 for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 08:12:37 -0800 (PST)
Received: from e8.ny.us.ibm.com (e8.ny.us.ibm.com [32.97.182.138]) by ietfa.amsl.com (Postfix) with ESMTP id C7F1921F85B3 for <dc@ietf.org>; Thu,  5 Jan 2012 08:12:36 -0800 (PST)
Received: from /spool/local by e8.ny.us.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for <dc@ietf.org> from <narten@us.ibm.com>; Thu, 5 Jan 2012 11:12:35 -0500
Received: from d01relay05.pok.ibm.com (9.56.227.237) by e8.ny.us.ibm.com (192.168.1.108) with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted;  Thu, 5 Jan 2012 11:11:27 -0500
Received: from d03av01.boulder.ibm.com (d03av01.boulder.ibm.com [9.17.195.167]) by d01relay05.pok.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id q05GBI8O193004 for <dc@ietf.org>; Thu, 5 Jan 2012 11:11:21 -0500
Received: from d03av01.boulder.ibm.com (loopback [127.0.0.1]) by d03av01.boulder.ibm.com (8.14.4/8.13.1/NCO v10.0 AVout) with ESMTP id q05GBBbY011668 for <dc@ietf.org>; Thu, 5 Jan 2012 09:11:11 -0700
Received: from cichlid.raleigh.ibm.com (sig-9-76-133-189.mts.ibm.com [9.76.133.189]) by d03av01.boulder.ibm.com (8.14.4/8.13.1/NCO v10.0 AVin) with ESMTP id q05GAxe6009805 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Thu, 5 Jan 2012 09:11:01 -0700
Received: from cichlid.raleigh.ibm.com (localhost [127.0.0.1]) by cichlid.raleigh.ibm.com (8.14.5/8.12.5) with ESMTP id q05GAqwn027469; Thu, 5 Jan 2012 11:10:53 -0500
Message-Id: <201201051610.q05GAqwn027469@cichlid.raleigh.ibm.com>
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
In-reply-to: <618BE8B40039924EB9AED233D4A09C5102B25AA7@XMB-BGL-416.cisco.com>
References: <201201041759.q04Hx68f009409@cichlid.raleigh.ibm.com> <618BE8B40039924EB9AED233D4A09C5102B25AA7@XMB-BGL-416.cisco.com>
Comments: In-reply-to "Ashish Dalela (adalela)" <adalela@cisco.com> message dated "Thu, 05 Jan 2012 12:18:45 +0530."
Date: Thu, 05 Jan 2012 11:10:47 -0500
From: Thomas Narten <narten@us.ibm.com>
x-cbid: 12010516-9360-0000-0000-00000220BDD6
Cc: dc@ietf.org
Subject: Re: [dc] draft-dalela-dc-requirements-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 16:12:38 -0000

"Ashish Dalela (adalela)" <adalela@cisco.com> writes:

> Going by the number of map-encap approaches out there that don't scale
> at the network boundaries (access, interconnect, etc.) it would *seem*
> that we did not give adequate attention to scale. It might be that the
> alternatives haven't been present. I'm ok to modify this statement as
> follows: "Scalability is a primary consideration to be kept in mind,
> because without the intended scale, any solution will work". Does that
> sound right?

I don't want to nitpick on this wording, but I wouldn't use wording
like "any solution will work". But I did want to point out that
implying or saying that the IETF doesn't care about scaling is just
not true (and not helpful).

> >> What specifically is missing that prevents the above from being done
> >> today? What is it that you think needs doing that can't be done with
> >> existing standards?

> You perhaps missed the following problem statement:

> <snip>
>    treating inter and intra datacenter
>    as entirely independent leads to new issues at the edge that arise
>    from trying to map one forwarding approach within datacenter to
>    another forwarding approach between datacenters. In some cases, both
>    L2 and L3 approaches may be needed to connect two datacenters.
>    Further, ideally, customer segmentation in the internet needs to be
>    done similar to the segmentation in the datacenter. This simplifies
>    the identification of a customer's packets in the Internet as in the
>    datacenter. Common QoS and Security policies can be applied, in both
>    the domains if there is a common way to identify packets
> </snip>

> The key problem is still that datacenter inter-connectivity is not
> necessarily about connecting DC between the same provider. It is also
> about connecting them between private and public domains. We could use
> an approach that makes the provider DC edge scale very high, but you
> can't do that for a customer who has a smaller datacenter - you will be
> pushing the complexity to the customer edge, and their devices aren't
> designed to scale. In other words, if I have a large public cloud
> connected to a small private cloud, should the small cloud bear the
> burden of the large cloud? That needs better stating, I agree.

Sorry, I don't understand how this leads to a concrete, specific
problem that the IETF needs to work on.

This is just too high-level, general stuff. We can argue forever about
whether it's true, what it means, etc. But I don't think that sort of
discussion is productive or useful.

Please identify a specific problem with (say) a specific protocol or
deployment where what we have today doesn't work adequately and needs
a better approach. We need more details of a real problem experienced
by operators out in the field.

> >> I suspect you'll get a lot of agreement on this. And one of the key
> >> aims of NVO3 is to address this. Is the existing NVO3 approach not
> >> adequate for the above? If so why not?

> There are many approaches out there, and the discussion of approaches is
> in the separate draft.
> tools.ietf.org/html/draft-dalela-dc-approaches-00. We understand that
> many of these problems may have been stated in other places. But, we
> can't avoid that.

The purpose of this list (presumably) is to focus on new problems or
problems that aren't being discussed elsewhere already.

What are the problems for which there isn't a potential home yet?
Those are the ones we should be trying to tease out here.

> >> Isn't this already possible, and indeed, happening today? What IETF
> >> work is needed? What standards gap needs filling?

> Not necessarily. Take the example of two architectures - scale-up vs.
> scale-out and compare them for map-encap. The scale-up model requires
> less switch-to-switch map-encaps but a lot more internal mapping. So,
> from a technology perspective, there is no problem at all if you look at
> this from the outside. I can claim that I have one huge switch in which
> everything is connected. The problem is just abstracted from view, and
> it may be inside the huge switch. Contrast this with the scale-out model
> where there are many smaller switches and the map-encap is externalized.
> The problem is more visible. The technology you devise has to be such
> that I can do both scale-up and scale-out.

Again, this is too high-level. We need to talk more specifically
about protocols and IETF documents that are insufficient or need to be
developed.

> >> What work does the above imply that the IETF needs to do?

> If you have looked at the other thread which talked about number of
> routes, a 1M VM datacenter can require several million host-routes. That
> implies slow convergence. There have been other discussions on
> convergence as well that talked about pulling out a route on-demand when
> packets arrived.

"slow convergence" in a general sense is not an IETF problem.

Which specific protocol is being used today (i.e., in real
deployments) that doesn't converge adequately? Do others agree that
there are problems with that protocol that require solutions?

> >> Is this section saying anything more than there is a need for
> >> multipathing for East-West traffic?

> No it is not overtly. Internally, this is also tied to the flow mgmt
> problem.
> >> Can this not be done today? What specific IETF work would be needed
> >> to support the enforcement of SLAs?

> No, there is no work done in the IETF to define SLAs. For service provider
> environments, you can define an SLA at access for a given user. When you
> have 10 VMs talking to each other, and you want to guarantee a bandwidth
> SLA on a VLAN, there is nothing out there. The other fact is that in a
> multi-tenant environment, there is no guarantee that you will get 1G
> bandwidth because you have a 1G interface on the VM. Typical network
> planning takes into account the "whole" network design, including what
> applications you are going to run and what bandwidths they need. That
> isn't true for cloud, at least.

Sounds to me like the topic of SLAs could be standalone and result
(potentially) in its own work group (or additional work in existing
WGs). But to get there, a problem statement document that focused
specifically on SLA issues would seem to be a good starting point.

I.e., what is done today, what protocols are used, why they are
inadequate, what sort of IETF work is needed to close those gaps. But
you need to drill down and provide more details about what is needed
and why.

Do others agree with this? What are some of the perceived gaps?

Thomas


From narten@us.ibm.com  Thu Jan  5 08:16:42 2012
Return-Path: <narten@us.ibm.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 5669621F8719 for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 08:16:42 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.599
X-Spam-Level: 
X-Spam-Status: No, score=-106.599 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id T0DFBNuABDHv for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 08:16:41 -0800 (PST)
Received: from e3.ny.us.ibm.com (e3.ny.us.ibm.com [32.97.182.143]) by ietfa.amsl.com (Postfix) with ESMTP id A5F1321F86DB for <dc@ietf.org>; Thu,  5 Jan 2012 08:16:41 -0800 (PST)
Received: from /spool/local by e3.ny.us.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for <dc@ietf.org> from <narten@us.ibm.com>; Thu, 5 Jan 2012 11:16:39 -0500
Received: from d01relay04.pok.ibm.com (9.56.227.236) by e3.ny.us.ibm.com (192.168.1.103) with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted;  Thu, 5 Jan 2012 11:16:30 -0500
Received: from d03av02.boulder.ibm.com (d03av02.boulder.ibm.com [9.17.195.168]) by d01relay04.pok.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id q05GGQfK316162 for <dc@ietf.org>; Thu, 5 Jan 2012 11:16:27 -0500
Received: from d03av02.boulder.ibm.com (loopback [127.0.0.1]) by d03av02.boulder.ibm.com (8.14.4/8.13.1/NCO v10.0 AVout) with ESMTP id q05GGM0R032122 for <dc@ietf.org>; Thu, 5 Jan 2012 09:16:23 -0700
Received: from cichlid.raleigh.ibm.com (sig-9-76-133-189.mts.ibm.com [9.76.133.189]) by d03av02.boulder.ibm.com (8.14.4/8.13.1/NCO v10.0 AVin) with ESMTP id q05GGKQA031867 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Thu, 5 Jan 2012 09:16:21 -0700
Received: from cichlid.raleigh.ibm.com (localhost [127.0.0.1]) by cichlid.raleigh.ibm.com (8.14.5/8.12.5) with ESMTP id q05GGIoU027563; Thu, 5 Jan 2012 11:16:19 -0500
Message-Id: <201201051616.q05GGIoU027563@cichlid.raleigh.ibm.com>
To: David Allan I <david.i.allan@ericsson.com>
In-reply-to: <60C093A41B5E45409A19D42CF7786DFD5228FC5031@EUSAACMS0703.eamcs.ericsson.se>
References: <201201041759.q04Hx68f009409@cichlid.raleigh.ibm.com> <60C093A41B5E45409A19D42CF7786DFD5228FC5031@EUSAACMS0703.eamcs.ericsson.se>
Comments: In-reply-to David Allan I <david.i.allan@ericsson.com> message dated "Wed, 04 Jan 2012 13:56:26 -0500."
Date: Thu, 05 Jan 2012 11:16:18 -0500
From: Thomas Narten <narten@us.ibm.com>
x-cbid: 12010516-8974-0000-0000-000004EAF157
Cc: "Ashish Dalela \(adalela\)" <adalela@cisco.com>, "dc@ietf.org" <dc@ietf.org>
Subject: [dc] 24-bit VLAN tags [was Re: draft-dalela-dc-requirements-00.txt]
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 16:16:42 -0000

David,

> If the goal is to describe the generalized characteristics of what is needed:
> An absolutely flat broadcast domain does not scale...duh!
> An absolutely flat L2 network does not scale...duh!

Indeed! :-)

> Partitioning the network into a large number of virtual broadcast
>  domains or L2VPNs/VLANs is what works for many adopters as it
>  supports PMO. This is what numerous existing standardized and
>  proprietary solutions offer with various shades of grey attribute
>  wise (e.g. scaling, ordering guarantees, properties when failures
>  occur, broadcast containment etc.). The one observation is that a
>  24 bit VLAN tag seems to be the current gold standard, both with
>  the IEEE and with proprietary or proposed approaches.

Just how widely deployed are 24-bit VLAN tags these days? Presumably
you are referring to PBB/SPB?

What are the deployment trends for 24-bit VLAN tags in data centers?
Is this starting to happen? Does it look like there will be
significant deployments? Or will there be lots of deployments that
choose not to use them (for whatever reason)?

It would be useful to hear from operators what they are doing or
thinking of doing w.r.t. to 24-bit VLAN tags.

Thomas


From mphmmr@gmail.com  Thu Jan  5 08:17:15 2012
Return-Path: <mphmmr@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id E024B21F855B for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 08:17:15 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.141
X-Spam-Level: 
X-Spam-Status: No, score=-3.141 tagged_above=-999 required=5 tests=[AWL=0.142,  BAYES_00=-2.599, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_LOW=-1, SARE_MILLIONSOF=0.315]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 6ZNLt0jes+yP for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 08:17:14 -0800 (PST)
Received: from mail-we0-f172.google.com (mail-we0-f172.google.com [74.125.82.172]) by ietfa.amsl.com (Postfix) with ESMTP id E40A921F853E for <dc@ietf.org>; Thu,  5 Jan 2012 08:17:13 -0800 (PST)
Received: by werb14 with SMTP id b14so595532wer.31 for <dc@ietf.org>; Thu, 05 Jan 2012 08:17:13 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=FkcTSQ+ztZT4T6HIMVrBMT+cLPImOAjnEd4UK09OnAU=; b=UT73QGP1SlLFcUrXfIF5gJqudLP/yjfow058YLo6brOn8UksKvOi2+nuAXNIntZ3oS hSN2hsztgfJh95VUpZ81arZx7cY1XBFeEJc8Jj8mecWxLzxhBgPkBy/C4Xr/kpX2OLlp /W5WG1fUpb7K1h1ITJyk8S9fbyM5cB7zYDs20=
MIME-Version: 1.0
Received: by 10.216.134.149 with SMTP id s21mr1258321wei.41.1325780232920; Thu, 05 Jan 2012 08:17:12 -0800 (PST)
Received: by 10.216.132.90 with HTTP; Thu, 5 Jan 2012 08:17:12 -0800 (PST)
In-Reply-To: <OF92C6E01A.E44BD605-ON4825797C.0053B413-4825797C.0054D077@zte.com.cn>
References: <618BE8B40039924EB9AED233D4A09C5102B25961@XMB-BGL-416.cisco.com> <OF92C6E01A.E44BD605-ON4825797C.0053B413-4825797C.0054D077@zte.com.cn>
Date: Thu, 5 Jan 2012 11:17:12 -0500
Message-ID: <CAA3wLqVzHefh3HXJ_tNogzNz3qLVqQf0TKZ7z=AjohGBTV+2og@mail.gmail.com>
From: Michael Hammer <mphmmr@gmail.com>
To: Lizhong Jin <lizhong.jin@zte.com.cn>
Content-Type: multipart/alternative; boundary=0016e6dd990dd560ad04b5ca4317
Cc: yakov@juniper.net, dc@ietf.org, "Ashish Dalela \(adalela\)" <adalela@cisco.com>, robert@raszuk.net, aldrin.isaac@gmail.com
Subject: Re: [dc] new drafts
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 16:17:16 -0000

--0016e6dd990dd560ad04b5ca4317
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: quoted-printable

It is good to consider worst cases when planning architectures.  When you
design only for best cases, it all collapses when the worst cases occur.
So, it might help to identify those hidden biases that rely on certain
assumptions of what might or might not occur.  Often it seems like only one
cloud provider is assumed.

I would prefer to see a design that addresses:
- security as a first consideration, not bolted on afterwards,
- clear separation between customer and cloud provider mechanisms, such
that the actions of each are orthogonal.

That means that any mechanism used should clearly define how customer and
cloud provider use of such mechanism is isolated.
And if not isolated, where in the architecture does the customer or cloud
provider monitor/manage the cross-domain traffic.

It might also be good to identify which mechanisms depend on certain
assumptions, and recommend that they not be used when those assumptions no
longer hold.  The poster child for this would seem to be ARP, which seems
to assume a rather static LAN, which in DC is not the case.  That might
suggest that another means of learning mappings is needed.

Lesson here is that continuing to use every existing mechanism may need to
be questioned and perhaps discouraged, i.e. abandon the cruft.
So, the task at hand may not be solely identifying new elements to pile
onto the toolkit.

Mike


2012/1/5 Lizhong Jin <lizhong.jin@zte.com.cn>

>
> Hi Ashish,
> Thank you for the analysis. The number of host routes in your example
> reaches 2M, because you assume each VM has two VIFs. If each route has
> 4 ECMP paths, the total routes will reach 8M as you described. This is
> the worst case, and at least, the scalability on aggregation router would
> be better than on access. And I agree, the connection between access and
> aggregation should be coordinated when VM moving, which is not easy to
> solve.
>
> Regards
> Lizhong
>
>
>
> "Ashish Dalela (adalela)" <adalela@cisco.com> wrote 2012/01/05 00:16:34:
>
> > Hi Lizhong,
> >
> > When you do things at the aggregation you need two sets of encaps –
> > Agg-to-Agg and Agg-to-Access.
> >
> > If there are N hosts under an access, and M accesses under an Agg,
> > and P Aggs, then you need –
> >
> > N * M for access to agg
> > (P-1) * N * M for agg to agg
> > Total = N * M + (P-1) * N * M = N * M * P
> >
> > This is the worst case.
> >
> > For a better case, assume each VM talks to 25 VM outside its Agg.
> > The total = N * M * 25
> >
> > Let's plug some numbers into this.
> >
> > For a 48 port access, 50 VM per port, each with 2 VIF, N = 48 * 50 * 2 = 4800
> >
> > For an agg with 100 ports (50 down and 50 up), M = 50
> >
> > A million VM require 1,000,000 / (48 * 50) = 415 accesses.
> >
> > Each access connects to 4 Aggs, so you need 415 * 4 / 50 = 33 Aggs = P.
> >
> > Worst case total entries needed at each Agg = 4800 * 50 * 33 = 7,920,000
> > Better case total entries needed at each Agg = 4800 * 50 * 25 = 6,000,000
> >
> > You may be wondering why for 1M VM, we need 6-8M host routes? It's
> > because of multi-pathing. A destination can be reached through many
> > paths, so you put routes for all paths.
> >
> > When you have 6-8M routes, you can imagine the corresponding control
> > plane load.
> >
> > The additional complexity is that to send a packet from A to B, you
> > need 3 encaps – access-to-agg, agg-to-agg and agg-to-access. When a
> > VM moves, you have to coordinate all these entries. That's another
> > not so easy problem.
> >
> > Thanks, Ashish
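[The entry-count arithmetic in the quoted message can be reproduced with a short sketch. This is only a check of the mail's own stated assumptions (48-port access switches, 50 VMs per port, 2 VIFs per VM, 50 accesses per Agg, 4 Aggs per access) — illustrative figures from the example, not measured values:]

```python
# Reproduce the per-Agg route-entry arithmetic from the quoted example.
# All inputs are the assumptions stated in the mail, not measurements.
PORTS_PER_ACCESS = 48
VMS_PER_PORT = 50
VIFS_PER_VM = 2
ACCESSES_PER_AGG = 50   # M: a 100-port Agg, 50 ports down and 50 up
AGGS_PER_ACCESS = 4     # each access multi-homed to 4 Aggs
TOTAL_VMS = 1_000_000

n = PORTS_PER_ACCESS * VMS_PER_PORT * VIFS_PER_VM          # hosts per access: 4800
accesses = TOTAL_VMS // (PORTS_PER_ACCESS * VMS_PER_PORT)  # ~415 in the mail
p = accesses * AGGS_PER_ACCESS // ACCESSES_PER_AGG         # number of Aggs: 33

# Worst case: N*M access-to-agg entries + (P-1)*N*M agg-to-agg = N*M*P.
worst = n * ACCESSES_PER_AGG * p
# Better case: each VM talks to only 25 VMs outside its Agg.
better = n * ACCESSES_PER_AGG * 25

print(f"worst case entries per Agg:  {worst:,}")   # 7,920,000
print(f"better case entries per Agg: {better:,}")  # 6,000,000
```

[The integer division gives 416 accesses rather than the mail's rounded 415; either value yields P = 33, so the per-Agg totals match the mail's 7,920,000 and 6,000,000.]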
> >
> >
> > From: Lizhong Jin [mailto:lizhong.jin@zte.com.cn]
> > Sent: Wednesday, January 04, 2012 8:28 PM
> > To: Ashish Dalela (adalela)
> > Cc: robert@raszuk.net; dc@ietf.org; yakov@juniper.net;
> aldrin.isaac@gmail.com
> > Subject: Re: [dc] new drafts
> >
> >
> > Hi Ashish,
> > If we implement VRF on the access switch (or ToR), I agree there will be
> > a scalability problem. The cost will also be an issue; the access
> > switch would be more expensive than before. How about implementing VRF
> > on the aggregation router? If the aggregation router could solve the
> > scalability and high availability problems, then we should focus on
> > how to set up the connection between VM and aggregation router. Hope to
> > see your comments.
> >
> > Thanks
> > Lizhong
> >
> >
> > >
> > > -----From "Ashish Dalela (adalela)" <adalela@cisco.com>
> > > Tue, 3 Jan 2012 22:09:17 +0530 -----
> > >
> > > Receiver:
> > >
> > > <robert@raszuk.net>
> > >
> > > cc:
> > >
> > > Yakov Rekhter <yakov@juniper.net>, dc@ietf.org, Aldrin Isaac
> > > <aldrin.isaac@gmail.com>
> > >
> > > Subject:
> > >
> > > Re: [dc] new drafts
> > >
> > > Robert,
> > >
> > > Here are some things to evaluate scalability against.
> > >
> > > Assume a simple case that under a switch there are 250 VM, split amongst
> > > 10 customers. Each customer has a unique VRF. Normally, we would have
> > > advertized a /24 route for that switch. In this case your routes to a
> > > single switch are segmented and there are 10 VRFs, and you will very
> > > likely have 250 route table entries total segmented by VRF-ids. That's a
> > > routing table bloat from 1 entry to 250 entries. This happens everywhere.
> > > I have assumed public IP addressing, but the same thing will happen
> > > for private addressing as well.
> > >
> > > Then, typically the number of VRFs you can support on a router is about
> > > 4K. These # of VRFs have to be supported at the access, so you have to
> > > assume this is the limit from the access viewpoint. 4K is nothing - we
> > > have 4K VLANs today to segment and that's nothing. Every segmentation
> > > technique being talked about speaks of a million plus segments. Take
> > > that to VRFs, you need a million VRFs on the control plane at the access
> > > switch. Another problem with a VRF is that it will get and store a route
> > > for a host, even when there is no host talking to it. With dynamic
> > > learning or learning based on packet arrival you avoid these host routes
> > > and limit them to active conversations only. That's a huge saving
> > > because not every host talks to every host.
> > >
> > > Then, at massive scale, the failure rates are also massive. At 5 nines
> > > reliability, a hardware entity out of 100,000 will fail every 5.25
> > > minutes. Access switches don't have high availability. Software fails
> > > even faster - OS is generally 4 9's, which means one out of 10,000 fails
> > > every 5.25 minutes. At millions of instances of such entities, there are
> > > rapid failures happening. You have only to look at massive datacenters
> > > today run by Web 2.0 companies, and they all echo this view. They
> > > basically form clusters of the same application. Software moves the
> > > workload from one cluster to another. The whole cluster can fail over.
> > > That's not what you do in a consumer cloud, where you have to recover.
> > > At massive failure rates, and rapid recovery rates, you are moving
> > > things around and injecting host routes for reachability. It's a
> > > convergence problem, especially with link-state algorithms.
> > >
> > > If the VM can be moved, then all you need to do is install a temporary
> > > redirect of packets to the new location. Each host will refresh the MAC
> > > after 15-30 seconds. If the packets are redirected from old to new
> > > location for these 30 seconds, the redirect can be aged automatically.
> > > This happens all the time in mobile networks in what is called a "fast
> > > handoff" where you redirect the packets until handoff is completed.
> > >
> > > Thanks, Ashish
> > >
> > >
>
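[The five-nines figure in the quoted message ("a hardware entity out of 100,000 will fail every 5.25 minutes") can be sanity-checked with the standard downtime-per-year arithmetic. A minimal sketch; the 100,000-unit fleet size and the one-failure-per-unit-per-year reading are the mail's assumptions, not measurements:]

```python
# Downtime implied by "N nines" of availability, and what it means
# across a large fleet of such units.
MIN_PER_YEAR = 365.25 * 24 * 60     # ~525,960 minutes in a year

def downtime_min_per_year(nines: int) -> float:
    """Expected downtime per unit per year at 'nines' nines of availability."""
    return MIN_PER_YEAR * 10.0 ** -nines

five_nines = downtime_min_per_year(5)   # ~5.26 min/year per unit
four_nines = downtime_min_per_year(4)   # ~52.6 min/year per unit

# With 100,000 units each at five nines, the expected number of units down
# at any instant is fleet * unavailability -- roughly one unit is always
# failing or recovering, which is the mail's point about massive scale.
fleet = 100_000
expected_down = fleet * 10.0 ** -5

# Equivalently, if each unit fails about once a year, the fleet sees a
# failure roughly every 525,960 / 100,000 ~ 5.26 minutes.
mean_min_between_failures = MIN_PER_YEAR / fleet
```

[Read this way, the "5.25 minutes" is the mean time between failures across a 100,000-unit fleet where each unit fails about once a year; it also means roughly one unit is down at any instant.]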
> --------------------------------------------------------
> ZTE Information Security Notice: The information contained in this mail is solely property of the sender's organization. This mail communication is confidential. Recipients named above are obligated to maintain secrecy and are not permitted to disclose the contents of this communication to others.
> This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the originator of the message. Any views expressed in this message are those of the individual sender.
> This message has been scanned for viruses and Spam by ZTE Anti-Spam system.
>
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc
>
>

--0016e6dd990dd560ad04b5ca4317
Content-Type: text/html; charset=windows-1252
Content-Transfer-Encoding: quoted-printable

It is good to consider worse cases when planning architectures. =A0When you=
 design only for best cases, it all collapses when the worse cases occur. =
=A0So, it might help to identify those hidden biases that rely on certain a=
ssumptions of what might or might not occur. =A0Often it seems like only on=
e cloud provider is assumed.<div>
<br></div><div>I would prefer to see a design that addresses:</div><div>- s=
ecurity as a first consideration, not bolted on afterwards,</div><div>- cle=
ar separation between customer and cloud provider mechanisms, such that the=
 actions of each are orthogonal.</div>
<div><br></div><div>That means that any mechanism used should clearly defin=
e how customer and cloud provider use of such mechanism is isolated.</div><=
div>And if not isolated, where in the architecture does the customer or clo=
ud provider monitor/manage the cross-domain traffic.</div>
<div><br></div><div>It might also be good to identify which mechanisms depe=
nd on certain assumptions, and recommend that they not be used when those a=
ssumptions no longer hold. =A0The poster child for this would seem to be AR=
P, which seems to assume a rather static LAN, which in DC is not the case. =
=A0That might suggest that another means of learning mappings is needed. =
=A0</div>
<div><br></div><div>Lesson here is that continuing to use every existing me=
chanism may need to be questioned and perhaps discouraged, i.e. abandon the=
 cruft.</div><div>So, the task at hand may not be solely identifying new el=
ements to pile onto the toolkit.</div>
<div><br></div><div>Mike</div><div><div><br></div><div><br><div class=3D"gm=
ail_quote">2012/1/5 Lizhong Jin <span dir=3D"ltr">&lt;<a href=3D"mailto:liz=
hong.jin@zte.com.cn">lizhong.jin@zte.com.cn</a>&gt;</span><br><blockquote c=
lass=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1px #ccc solid;=
padding-left:1ex">

<br><font face=3D"sans-serif">Hi Ashish,</font>
<br><font face=3D"sans-serif">Thank you for the analysis. The number
of host routes in your example reaches 2M, becasue you assume each VM with
two VIF. If each route perform 4 path for ECMP, the total route will reach
8M as you described. This is the worst case, and at least, the scalability
on aggregation router would be better than on access. And I agree, the
connection between access and aggregation should be coordinated when VM
moving, which is not easy to solve.</font>
<br>
<br><font face=3D"sans-serif">Regards</font>
<br><font face=3D"sans-serif">Lizhong<br>
</font>
<br><font size=3D"1" face=3D"sans-serif">=A0</font>
<br>
<br><font face=3D"sans-serif">&quot;Ashish Dalela (adalela)&quot;
&lt;<a href=3D"mailto:adalela@cisco.com" target=3D"_blank">adalela@cisco.co=
m</a>&gt; wrote 2012/01/05 00:16:34:<br>
<br>
&gt; Hi Lizhong,</font>
<br><font face=3D"sans-serif">&gt; =A0</font>
<br><font face=3D"sans-serif">&gt; When you do things at the aggregation
you need two sets of encaps =96 <br>
&gt; Agg-to-Agg and Agg-to-Access. </font>
<br><font face=3D"sans-serif">&gt; =A0</font>
<br><font face=3D"sans-serif">&gt; If there are N hosts under an access,
and M accesses under an Agg, <br>
&gt; and P Aggs, then you need =96 </font>
<br><font face=3D"sans-serif">&gt; =A0</font>
<br><font face=3D"sans-serif">&gt; N * M for access to agg</font>
<br><font face=3D"sans-serif">&gt; (P-1) * N * M for agg to agg</font>
<br><font face=3D"sans-serif">&gt; Total =3D N * M + (P-1) * N * M =3D
N * M * P</font>
<br><font face=3D"sans-serif">&gt; =A0</font>
<br><font face=3D"sans-serif">&gt; This is the worst case.</font>
<br><font face=3D"sans-serif">&gt; =A0</font>
<br><font face=3D"sans-serif">&gt; For a better case, assume each
VM talks to 25 VM outside its Agg.</font>
<br><font face=3D"sans-serif">&gt; The total =3D N * M * 25</font>
<br><font face=3D"sans-serif">&gt; =A0</font>
<br><font face=3D"sans-serif">&gt; Let=92s plug some numbers into
this.</font>
<br><font face=3D"sans-serif">&gt; =A0</font>
<br><font face=3D"sans-serif">&gt; For a 48 port access, 50 VM per
port, each with 2 VIF, N =3D 48 * 50 * 2 =3D 4800</font>
<br><font face=3D"sans-serif">&gt; =A0</font>
<br><font face=3D"sans-serif">&gt; For an agg with 100 ports (50 down
and 50 up), M =3D 50</font>
<br><font face=3D"sans-serif">&gt; =A0</font>
<br><font face=3D"sans-serif">&gt; A million VM require 1,000,000
/ (48 * 50) =3D 415 accesses.</font>
<br><font face=3D"sans-serif">&gt; =A0</font>
<br><font face=3D"sans-serif">&gt; Each access connects to 4 Aggs,
so you need 415 * 4 / 50 =3D 33 Aggs =3D P.</font>
<br><font face=3D"sans-serif">&gt; =A0</font>
<br><font face=3D"sans-serif">&gt; Worst case total entries needed
at each Agg =3D 4800 * 50 * 33 =3D 7,920,000</font>
<br><font face=3D"sans-serif">&gt; Better case total entries needed
at each Agg =3D 4800 * 50 * 25 =3D 6,000,000</font>
<br><font face=3D"sans-serif">&gt; =A0</font>
<br><font face=3D"sans-serif">&gt; You may be wondering why for 1M
VM, we need 6-8M host routes? It=92s <br>
&gt; because of multi-pathing. A destination can be reached through many
<br>
&gt; paths, so you put routes for all paths.</font>
<br><font face=3D"sans-serif">&gt; =A0 =A0 =A0 =A0 =A0
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0
</font>
<br><font face=3D"sans-serif">&gt; When you have 6-8M routes, you
can imagine the corresponding control<br>
&gt; plane load.</font>
<br><font face=3D"sans-serif">&gt; =A0</font>
<br><font face=3D"sans-serif">&gt; The additional complexity is that
to send packet from A to B, you <br>
&gt; need 3 encaps =96 access-to-agg, agg-to-agg and agg-to-access. When
a <br>
&gt; VM moves, you have to coordinate all these entries. That=92s another
<br>
&gt; not so easy problem.</font>
<br><font face=3D"sans-serif">&gt; =A0</font>
<br><font face=3D"sans-serif">&gt; Thanks, Ashish</font>
<br><font face=3D"sans-serif">&gt; =A0</font>
<br><font face=3D"sans-serif">&gt; =A0</font>
<br><font face=3D"sans-serif">&gt; From: Lizhong Jin [mailto:<a href=3D"mai=
lto:lizhong.jin@zte.com.cn" target=3D"_blank">lizhong.jin@zte.com.cn</a>]
<br>
&gt; Sent: Wednesday, January 04, 2012 8:28 PM<br>
&gt; To: Ashish Dalela (adalela)<br>
&gt; Cc: <a href=3D"mailto:robert@raszuk.net" target=3D"_blank">robert@rasz=
uk.net</a>; <a href=3D"mailto:dc@ietf.org" target=3D"_blank">dc@ietf.org</a=
>; <a href=3D"mailto:yakov@juniper.net" target=3D"_blank">yakov@juniper.net=
</a>; <a href=3D"mailto:aldrin.isaac@gmail.com" target=3D"_blank">aldrin.is=
aac@gmail.com</a><br>

&gt; Subject: Re: [dc] new drafts</font>
<br><font face=3D"sans-serif">&gt; =A0</font>
<br><font face=3D"sans-serif">&gt; <br>
&gt; Hi Ashish, <br>
&gt; If we implement VRF on access switch (or ToR), I agree there will
be<br>
&gt; scalability problem. Also the cost will also be an issue, the access<b=
r>
&gt; switch would be more expensive than before. How about implement VRF
<br>
&gt; on aggregation router? If the aggregation router could solve <br>
&gt; scalability and high availability problem, then we should focus on
<br>
&gt; how to setup connection between VM and aggregation router. Hope to
<br>
&gt; see your comments. <br>
&gt; <br>
&gt; Thanks <br>
&gt; Lizhong <br>
&gt; <br>
&gt; <br>
&gt; &gt; <br>
&gt; &gt; -----From &quot;Ashish Dalela (adalela)&quot; &lt;<a href=3D"mail=
to:adalela@cisco.com" target=3D"_blank">adalela@cisco.com</a>&gt;
<br>
&gt; &gt; Tue, 3 Jan 2012 22:09:17 +0530 ----- <br>
&gt; &gt; <br>
&gt; &gt; Receiver: <br>
&gt; &gt; <br>
&gt; &gt; &lt;<a href=3D"mailto:robert@raszuk.net" target=3D"_blank">robert=
@raszuk.net</a>&gt; <br>
&gt; &gt; <br>
&gt; &gt; cc: <br>
&gt; &gt; <br>
&gt; &gt; Yakov Rekhter &lt;<a href=3D"mailto:yakov@juniper.net" target=3D"=
_blank">yakov@juniper.net</a>&gt;, <a href=3D"mailto:dc@ietf.org" target=3D=
"_blank">dc@ietf.org</a>, Aldrin
Isaac <br>
&gt; &gt; &lt;<a href=3D"mailto:aldrin.isaac@gmail.com" target=3D"_blank">a=
ldrin.isaac@gmail.com</a>&gt; <br>
&gt; &gt; <br>
&gt; &gt; Subject: <br>
&gt; &gt; <br>
&gt; &gt; Re: [dc] new drafts <br>
&gt; &gt; <br>
&gt; &gt; Robert, <br>
&gt; &gt; <br>
&gt; &gt; Here are some things to evaluate scalability against. <br>
&gt; &gt; <br>
&gt; &gt; Assume a simple case that under a switch there are 250 VM, split
amongst <br>
&gt; &gt; 10 customers. Each customer has a unique VRF. Normally, we would
have <br>
&gt; &gt; advertized a /24 route for that switch. In this case your routes
to a <br>
&gt; &gt; single switch are segmented and there are 10 VRFs, and you will
very <br>
&gt; &gt; likely have 250 route table entries total segmented by VRF-ids.
That&#39;s a <br>
&gt; &gt; routing table bloat from 1 entry to 250 entry. This happens every=
where.
<br>
&gt; &gt; I have assumed a public IP addressing, but the same thing will
happen <br>
&gt; &gt; for the private addressing as well. <br>
&gt; &gt; <br>
&gt; &gt; Then, typically the number of VRFs you can support on a router
is about <br>
&gt; &gt; 4K. These # of VRFs have to be supported at the access, so you
have to <br>
> > assume this is the limit from the access viewpoint. 4K is nothing - we
> > have 4K VLANs today to segment and that's nothing. Every segmentation
> > technique being talked about speaks of a million plus segments. Take
> > that to VRFs, you need a million VRFs on the control plane at the access
> > switch. Another problem with a VRF is that it will get and store a route
> > for a host, even when there is no host talking to it. With dynamic
> > learning or learning based on packet arrival you avoid these host routes
> > and limit them to active conversations only. That's a huge saving
> > because not every host talks to every host.
> >
> > Then, at massive scale, the failure rates are also massive. At 5 nines
> > reliability, a hardware entity out of 100,000 will fail every 5.25
> > minutes. Access switches don't have high availability. Software fails
> > even faster - OS is generally 4 9's, which means one out of 10,000 fails
> > every 5.25 minutes. At millions of instances of such entities, there are
> > rapid failures happening. You only have to look at massive datacenters
> > today run by Web 2.0 companies, and they all echo this view. They
> > basically form clusters of the same application. Software moves the
> > workload from one cluster to another. The whole cluster can fail over.
> > That's not what you do in a consumer cloud, where you have to recover.
> > At massive failure rates, and rapid recovery rates, you are moving
> > things around and injecting host routes for reachability. It's a
> > convergence problem, especially with link-state algorithms.
> >
> > If the VM can be moved, then all you need to do is install a temporary
> > redirect of packets to the new location. Each host will refresh the MAC
> > after 15-30 seconds. If the packets are redirected from old to new
> > location for these 30 seconds, the redirect can be aged automatically.
> > This happens all the time in mobile networks in what is called a "fast
> > handoff" where you redirect the packets until handoff is completed.
> >
> > Thanks, Ashish
> >
> > </font>
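The temporary-redirect idea quoted above can be sketched in a few lines. This is an illustrative model only (the class and names are hypothetical, not from any draft): the old location forwards traffic to the new one for a fixed window, and the entry ages out on its own once hosts have refreshed the MAC.

```python
# Sketch of a "fast handoff" style redirect with automatic aging.
# All names here are illustrative assumptions, not a real switch API.

class RedirectTable:
    def __init__(self, ttl: float = 30.0):
        self.ttl = ttl                                # redirect lifetime, seconds
        self.redirects: dict[str, tuple[str, float]] = {}

    def install(self, old_loc: str, new_loc: str, now: float) -> None:
        """Install a temporary redirect when the VM moves."""
        self.redirects[old_loc] = (new_loc, now + self.ttl)

    def resolve(self, loc: str, now: float) -> str:
        """Follow a live redirect; expired entries age out with no teardown."""
        entry = self.redirects.get(loc)
        if entry is None:
            return loc
        new_loc, expires = entry
        if now >= expires:
            del self.redirects[loc]                   # automatic aging
            return loc
        return new_loc

t = RedirectTable(ttl=30.0)
t.install("switch-A", "switch-B", now=0.0)
assert t.resolve("switch-A", now=5.0) == "switch-B"   # inside the 30 s window
assert t.resolve("switch-A", now=35.0) == "switch-A"  # aged out, back to normal
```

The key property matches the argument in the mail: no control-plane message is needed to remove the redirect; it simply expires after the MAC-refresh window.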
<br><pre>--------------------------------------------------------
ZTE Information Security Notice: The information contained in this mail is solely property of the sender's organization. This mail communication is confidential. Recipients named above are obligated to maintain secrecy and are not permitted to disclose the contents of this communication to others.
This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the originator of the message. Any views expressed in this message are those of the individual sender.
This message has been scanned for viruses and Spam by ZTE Anti-Spam system.
</pre><br>_______________________________________________<br>
dc mailing list<br>
<a href=3D"mailto:dc@ietf.org">dc@ietf.org</a><br>
<a href=3D"https://www.ietf.org/mailman/listinfo/dc" target=3D"_blank">https://www.ietf.org/mailman/listinfo/dc</a><br>
<br></blockquote></div><br></div></div>

--0016e6dd990dd560ad04b5ca4317--

From david.black@emc.com  Thu Jan  5 08:31:20 2012
Return-Path: <david.black@emc.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 5A5D921F870F for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 08:31:20 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.59
X-Spam-Level: 
X-Spam-Status: No, score=-106.59 tagged_above=-999 required=5 tests=[AWL=0.009, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id fituWNIwO7se for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 08:31:19 -0800 (PST)
Received: from mexforward.lss.emc.com (mexforward.lss.emc.com [128.222.32.20]) by ietfa.amsl.com (Postfix) with ESMTP id DB5AD21F86F8 for <dc@ietf.org>; Thu,  5 Jan 2012 08:31:18 -0800 (PST)
Received: from hop04-l1d11-si03.isus.emc.com (HOP04-L1D11-SI03.isus.emc.com [10.254.111.23]) by mexforward.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q05GVGkH011477 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Thu, 5 Jan 2012 11:31:16 -0500
Received: from mailhub.lss.emc.com (mailhub.lss.emc.com [10.254.221.251]) by hop04-l1d11-si03.isus.emc.com (RSA Interceptor); Thu, 5 Jan 2012 11:30:56 -0500
Received: from mxhub19.corp.emc.com (mxhub19.corp.emc.com [10.254.93.48]) by mailhub.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q05GUr0C011589; Thu, 5 Jan 2012 11:30:55 -0500
Received: from mx14a.corp.emc.com ([169.254.1.216]) by mxhub19.corp.emc.com ([10.254.93.48]) with mapi; Thu, 5 Jan 2012 11:29:41 -0500
From: <david.black@emc.com>
To: <adalela@cisco.com>
Date: Thu, 5 Jan 2012 11:29:40 -0500
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczLv8/i14m93uVUT1SkyOUfB6pYjgAAhaTAAAEwXYA=
Message-ID: <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7C204@MX14A.corp.emc.com>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us><7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com><618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com><CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com><CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com><4F030418.1070202@raszuk.net><618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com><4F031689.1050303@raszuk.net><618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com><7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBA5@MX14A.corp.emc.com> <CAMXVrt5g-2WCBvm0kcpFx27KG9kdPSBzeRJ20F-degvzaEhYoQ@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B256D4@XMB-BGL-416.cisco.com> <AF48CEB4-18A6-45CE-891B-ACFE599C8FB4@kumari.net> <618BE8B40039924EB9AED233D4A09C5102B25C5A@XMB-BGL-416.cisco.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B25C5A@XMB-BGL-416.cisco.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-EMM-MHVC: 1
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 16:31:20 -0000

Ashish,

For IP-based overlays, this starts by carefully distinguishing the inner (user) and
outer (provider/infrastructure) IP address blocks, and completely controlling access
to the outer IP address blocks via encap/decap.  The result is that an IGMP join
from a user gets encapsulated and can't be processed with respect to those outer
addresses.  That creates a requirement to provide an IP multicast service to the
users over the encapsulation.

Thanks,
--David
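The encap/decap isolation described above can be illustrated with a toy model. This is a sketch under stated assumptions (the packet dicts and function names are invented for illustration, not any real overlay protocol): once a tenant's IGMP join is wrapped in an outer header, the underlay sees only opaque payload and never updates its multicast state.

```python
# Toy model: encapsulation makes a tenant's IGMP join invisible to the
# underlay, so a rogue join cannot attach to an infrastructure group.

def encapsulate(inner_packet: dict, tunnel_src: str, tunnel_dst: str) -> dict:
    """Wrap a tenant packet in an outer header; the inner packet is payload."""
    return {"src": tunnel_src, "dst": tunnel_dst, "proto": "encap",
            "payload": inner_packet}

def underlay_forward(packet: dict, igmp_state: set) -> str:
    """The underlay parses only the outer header. A bare IGMP join would
    update its multicast state; an encapsulated one is just delivered."""
    if packet["proto"] == "igmp-join":
        igmp_state.add(packet["dst"])          # a rogue join would land here
        return "joined " + packet["dst"]
    return "forwarded to " + packet["dst"]     # encapsulated traffic is opaque

igmp_state = set()
rogue_join = {"src": "10.1.1.5", "dst": "239.1.1.1", "proto": "igmp-join"}
tunneled = encapsulate(rogue_join, "192.0.2.1", "192.0.2.2")
underlay_forward(tunneled, igmp_state)
assert igmp_state == set()   # the join never touched underlay multicast state
```

This is exactly the trade-off the paragraph states: the rogue-join problem disappears, but the overlay must now provide its own multicast service to tenants.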

> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Ashish Dalela (adalela)
> Sent: Thursday, January 05, 2012 11:01 AM
> To: Warren Kumari
> Cc: Pedro Marques; Black, David; dc@ietf.org
> Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
>
> Hi Warren,
>
> You are prescribing a hypervisor-based solution, and table scaling
> issues don't arise there (or at least are not obvious) because it all
> happens in software. However, there are other issues. For example, how
> do you implement broadcast and multicast? The standard mechanism today
> is to map a broadcast domain to a multicast group in L3. Now, what
> happens if some rogue user sends an IGMP join to that group - what was
> on the VLAN is now accessible to everyone through an IGMP join. For user
> level multicast, there are other issues. Assume I'm doing a VDI cloud,
> where users need to join multicast video conferencing. The group is user
> determined, not admin determined. How do we know that the user is not
> joining a VLAN mapped multicast group?
>
> We need to keep a complete set of goals in mind. Otherwise, we can solve
> an issue and miss a goal. E.g. multicast and broadcast.
>
> Thanks, Ashish
>
>
> -----Original Message-----
> From: Warren Kumari [mailto:warren@kumari.net]
> Sent: Thursday, January 05, 2012 9:06 PM
> To: Ashish Dalela (adalela)
> Cc: Warren Kumari; Pedro Marques; david.black@emc.com; dc@ietf.org
> Subject: Re: [dc] [armd] IP over IP solution for data center
> interconnect
>
>
> On Jan 3, 2012, at 12:45 PM, Ashish Dalela (adalela) wrote:
>
> >
> > Suppose you have an IP solution.
>
> Sure.
>
> >
> > To support mobility you need IP-in-IP encapsulation.
>
> And if you do an overlay you always do an IP encapsulation (to cover GRE,
> IPIP, sit, IPsec, PPP, etc.).
>
> >
> > As VM density increases, as VM-to-VM conversation grows, as interfaces
> > per VM increase, the host routes increase.
>
> No.
>
> The only thing that the network needs to know is the routes to the
> hypervisors / physical machines -- this is a solved problem.
> The VM addresses and routes are only visible to the [gateways,
> hypervisors with VMs in that overlay, other VMs in the same overlay,
> mapping server].
>
> For a really old overview:
> http://tools.ietf.org/html/draft-wkumari-dcops-l3-vmmobility-00
>
> > These host routes are in addition to network routes, local host-port
> > bindings, ACLs, etc. That means in addition to everything that existed
> > so far.
>
> No.
>
>
> >
> > Eventually, you hit a limit on the access, and you have to reduce size
> > of network, reduce VM mobility, reduce VM density per server, reduce
> > application spread.
> >
>
> No.
>
> > The alternative is to constantly increase network hardware table sizes
> > at access, which increases costs and energy.
> >
>
> No.
>
> > We have to realize that IP encapsulations put network and compute at
> > opposite sides of the cost trend. Compute cost reduces slowly as size
> > grows. Network cost grows rapidly as size grows.
> >
>
> No.
>
>
>
> > Thanks,
> > Ashish
> >
> >
> > -----Original Message-----
> > From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> > Pedro Marques
> > Sent: Tuesday, January 03, 2012 10:46 PM
> > To: david.black@emc.com
> > Cc: dc@ietf.org
> > Subject: Re: [dc] [armd] IP over IP solution for data center
> > interconnect
> >
> > That assumes that the MAC has relevance in the network. It is possible
> > to build solutions such that packets are forwarded based on their IP
> > addresses rather than their MACs.
> >
> >  Pedro.
> >
> > On Tue, Jan 3, 2012 at 8:41 AM,  <david.black@emc.com> wrote:
> >> Ashish,
> >>
> >>>> [AD] The higher bits identify a switch - it's a switch-id.
> >>
> >> That breaks VM migration across switches by forcing a MAC change.
> >>
> >> Thanks,
> >> --David
> >>
> >>> -----Original Message-----
> >>> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> > Ashish Dalela (adalela)
> >>> Sent: Tuesday, January 03, 2012 11:15 AM
> >>> To: robert@raszuk.net
> >>> Cc: Pedro Marques; dc@ietf.org
> >>> Subject: Re: [dc] [armd] IP over IP solution for data center
> > interconnect
> >>>
> >>> Robert,
> >>>
> >>> Please see inline.
> >>>
> >>> -----Original Message-----
> >>> From: Robert Raszuk [mailto:robert@raszuk.net]
> >>> Sent: Tuesday, January 03, 2012 8:24 PM
> >>> To: Ashish Dalela (adalela)
> >>> Cc: Pedro Marques; dc@ietf.org
> >>> Subject: Re: [dc] [armd] IP over IP solution for data center
> >>> interconnect
> >>>
> >>> Ashish,
> >>>
> >>> OK let's just discuss what is in your draft on Hierarchical
> >>> Addressing.
> >>>
> >>> 1. You have 48 bits: 32 go for the host, the remaining 16 go for
> >>> switches. How do you aggregate at the TOR or AGGR switch boundary ?
> >>> Are you assuming a single HOST - SWITCH with max 65K flat macs ?
> >>>
> >>> [AD] The higher bits identify a switch - it's a switch-id. The hosts
> >>> are dynamically assigned a host-id under that switch. Let's assume 23
> >>> bits are for switch-id and 23 bits for host-id. To forward a packet
> >>> to the host, you only have to look at the first 23 bits. That's a MAC
> >>> prefix to route against.
> >>>
> >>> [AD] You can have 2^23 switches in a network and 2^23 hosts under
> >>> each switch.
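The prefix-routing scheme described in that answer can be sketched directly. This is a hypothetical illustration of the idea (the 23/23 bit split follows the discussion above; the function names and table layout are invented, not from the draft):

```python
# Sketch of hierarchical-MAC forwarding: the top 23 bits of the 46 usable
# bits name a switch, the bottom 23 bits name a host behind that switch.
# Core switches look up only the switch-id prefix.

def parse_hmac(mac: int) -> tuple[int, int]:
    """Split a 46-bit hierarchical MAC into (switch_id, host_id)."""
    switch_id = (mac >> 23) & 0x7FFFFF   # top 23 bits
    host_id = mac & 0x7FFFFF             # bottom 23 bits
    return switch_id, host_id

def forward(mac: int, switch_table: dict[int, str]) -> str:
    """A core/aggregation switch resolves only the switch-id prefix, so
    its table size tracks the number of switches, not the number of VMs."""
    switch_id, _ = parse_hmac(mac)
    return switch_table[switch_id]

# One table entry per access switch, regardless of how many VMs sit there.
switch_table = {1: "port-to-switch-1", 2: "port-to-switch-2"}
vm_mac = (2 << 23) | 42                  # host 42 behind switch 2
assert forward(vm_mac, switch_table) == "port-to-switch-2"
```

The design point being argued is visible in the table: adding a thousand more VMs behind switch 2 changes nothing in `switch_table`.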
> >>>
> >>> 2. Can you deploy this on existing VMs and existing switches ?
> >>>
> >>> [AD] What do you mean by this? Any VM can be configured with any
> >>> MAC. Any physical host can be configured with any MAC on any logical
> >>> interface. From a configuration standpoint this is possible. From a
> >>> forwarding standpoint, that's another question.
> >>>
> >>> 3. What new protocol do you envision to use to distribute those new
> >>> MACs ?
> >>>
> >>> [AD] IS-IS extensions. It can be TRILL extensions.
> >>>
> >>> 4. What is the advantage of using this vs ILNP if we assume that
> >>> hosts should be modified ?
> >>>
> >>> [AD] I'm not familiar with the ILNP work, but I'm assuming you are
> >>> talking about Loc-Id separation. If not, correct me. If yes, each
> >>> Loc-Id binding can be a host route, with mobility. These host-routes
> >>> are a scaling problem. A traditional IP packet has IP as ID and MAC
> >>> as LOC. We are just extending this LOC to make it actually location
> >>> aware rather than a flat address which is fixed regardless of where
> >>> the location is.
> >>>
> >>> 5. The proposal does not support aggregation .. even the draft says
> >>> it :)
> >>>
> >>> "The total number of hardware entries anywhere in the network equals
> >>> the total number of switches and remains agnostic of VM mobility."
> >>>
> >>> [AD] For 1 million VMs, and 25 VMs per host, you need 40,000 hosts.
> >>> With 48 port access switches, you need 833 switches. That's the
> >>> routing table size for any switch in the datacenter - core,
> >>> aggregation, access. Contrast this with host-routes: if each VM
> >>> talks to 100 VMs, then each access switch needs 48 * 25 * 100 =
> >>> 120,000 host routes. Just because the network prefix is 23 bits does
> >>> not mean we have to store 2^23 prefixes. We have to store only as
> >>> many switches as there are in the network. The ratio between VM :
> >>> switch is 1000 : 1 (today, assuming 48 port access and 20 VMs per
> >>> port). That means instead of storing host-routes, which will grow
> >>> proportional to VM growth, we store switch-ids, which will grow at a
> >>> 1000 times slower rate. As VM density increases, this growth rate is
> >>> further slowed down. There are other techniques to further reduce
> >>> the rate of growth. But in any case, 1000 times slower is a lot
> >>> slower.
> >>>
> >>> So if I have 100K switches I can not do any aggregation and need to
> >>> "route" 100K MAC addresses.
> >>>
> >>> [AD] I don't know how you came to that conclusion. Think of HMAC as
> >>> an IP address. Instead of 32 bits it is 46 bits. You route by
> >>> prefixes in L3, and you are routing by the same prefixes here. Just
> >>> as you aggregate IP, same way you aggregate MAC. It's not different.
> >>>
> >>> 6. Who provides me the mapping between switch mac and host/vm mac
> >>> behind such switch ? Do switches proxy arp globally within your
> >>> domain ?
> >>>
> >>> [AD] Variation of the same question. Above should answer it.
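The table-size arithmetic quoted in that exchange checks out; here it is as a short script. The inputs are the poster's assumptions (1 million VMs, 25 VMs per host, 48-port access switches, 100 peers per VM), not measurements.

```python
# Re-deriving the switch-id vs. host-route table sizes from the thread.

vms = 1_000_000
vms_per_host = 25
ports_per_access_switch = 48
peers_per_vm = 100

hosts = vms // vms_per_host                          # physical hosts needed
access_switches = hosts // ports_per_access_switch   # access switches needed

# Switch-id routing: every switch stores one entry per switch in the network.
switch_routes = access_switches

# Host-route alternative: each access switch may hold a route for every
# peer of every VM behind it.
host_routes = ports_per_access_switch * vms_per_host * peers_per_vm

print(hosts, access_switches, switch_routes, host_routes)
# 40000 hosts, 833 switches, 833 switch routes vs. 120000 host routes
```

So the claimed ~144x gap at the access switch (120,000 vs. 833 entries) follows directly from the stated assumptions.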
> >>>
> >>> Thx,
> >>> R.
> >>>
> >>>
> >>>> Robert,
> >>>>
> >>>>>> So you are advocating a solution which is based on encapsulation -
> >>>>>> that is fine.
> >>>>
> >>>> No, I'm not. Did you read the draft I had mentioned?
> >>>> Hierarchical MAC is not encapsulation. It is one 48 bit address.
> >>>>
> >>>>>> However how could you ever arrive at the conclusion that HMACs
> >>>>>> would scale better than "anything we know"? Well I don't know
> >>>>>> about you, but I know that the key to scaling is the ability to
> >>>>>> aggregate. And it is not that huge a mystery that MACs aggregate
> >>>>>> rather poorly while there are quite well deployed protocols (be
> >>>>>> it IPv4 or IPv6) which aggregate natively
> >>>>
> >>>> You are hitting the nail on the head. So, read the draft I
> >>>> mentioned. Hierarchical MAC is higher bits "network prefix" and
> >>>> lower bits "host id". That's summarizable like an IP address and
> >>>> aggregated. It has 46 bits to modify, so larger than the IPv4
> >>>> internet.
> >>>>
> >>>> I won't comment on the rest, because you have made an assumption
> >>>> about encapsulation.
> >>>>
> >>>> I refer to this -
> >>>> http://tools.ietf.org/html/draft-dalela-dc-approaches-00.
> >>>>
> >>>> Thanks, Ashish
> >>>>
> >>>>
> >>>> -----Original Message-----
> >>>> From: Robert Raszuk [mailto:robert@raszuk.net]
> >>>> Sent: Tuesday, January 03, 2012 7:05 PM
> >>>> To: Ashish Dalela (adalela)
> >>>> Cc: Pedro Marques; dc@ietf.org
> >>>> Subject: Re: [dc] [armd] IP over IP solution for data center
> >>>> interconnect
> >>>>
> >>>> Ashish,
> >>>>
> >>>>> The issues of scale you mentioned don't exist in Hierarchical
> >>>>> MACs, which scale better than anything we know of.
> >>>>
> >>>> So you are advocating a solution which is based on encapsulation -
> >>>> that is fine.
> >>>>
> >>>> However how could you ever arrive at the conclusion that HMACs
> >>>> would scale better than "anything we know"? Well I don't know about
> >>>> you, but I know that the key to scaling is the ability to
> >>>> aggregate. And it is not that huge a mystery that MACs aggregate
> >>>> rather poorly while there are quite well deployed protocols (be it
> >>>> IPv4 or IPv6) which aggregate natively.
> >>>>
> >>>> For inter-dc this is IMHO a must. A must even if you build it using
> >>>> traditional routers or OF enabled switches - it does not matter.
> >>>>
> >>>>> I don't want to split the requirements into multiple use-cases
> >>>>> because then this DC group will be many groups - one doing L2 and
> >>>>> another doing L3. That I think you will agree is not optimal for
> >>>>> anyone.
> >>>>
> >>>> Why does MAC-in-IP not solve it for everyone ? And there are
> >>>> deployed solutions already ..
> >>>>
> >>>> IMHO what this group should accomplish is not to try to reinvent
> >>>> the world, but perhaps as an example discuss where the right
> >>>> boundary of encapsulation is, how we should communicate between
> >>>> network and hosts, and what kind of DC instrumentation should be
> >>>> IETF blessed for easy integration (i.e. the min subset of
> >>>> functionality it should possess, etc.)
> >>>>
> >>>> R.
> >>>>
> >>>> _______________________________________________
> >>>> dc mailing list
> >>>> dc@ietf.org
> >>>> https://www.ietf.org/mailman/listinfo/dc
> >>>>
> >>>>
> >>>
> >>> _______________________________________________
> >>> dc mailing list
> >>> dc@ietf.org
> >>> https://www.ietf.org/mailman/listinfo/dc
> >>
> >> _______________________________________________
> >> dc mailing list
> >> dc@ietf.org
> >> https://www.ietf.org/mailman/listinfo/dc
> > _______________________________________________
> > dc mailing list
> > dc@ietf.org
> > https://www.ietf.org/mailman/listinfo/dc
> > _______________________________________________
> > dc mailing list
> > dc@ietf.org
> > https://www.ietf.org/mailman/listinfo/dc
> >
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc


From narten@us.ibm.com  Thu Jan  5 08:41:32 2012
Return-Path: <narten@us.ibm.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 4CA7E21F8817 for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 08:41:32 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.599
X-Spam-Level: 
X-Spam-Status: No, score=-106.599 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id VTk93SAEsiqh for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 08:41:31 -0800 (PST)
Received: from e6.ny.us.ibm.com (e6.ny.us.ibm.com [32.97.182.146]) by ietfa.amsl.com (Postfix) with ESMTP id 414CC21F870F for <dc@ietf.org>; Thu,  5 Jan 2012 08:41:25 -0800 (PST)
Received: from /spool/local by e6.ny.us.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for <dc@ietf.org> from <narten@us.ibm.com>; Thu, 5 Jan 2012 11:41:24 -0500
Received: from d01relay06.pok.ibm.com (9.56.227.116) by e6.ny.us.ibm.com (192.168.1.106) with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted;  Thu, 5 Jan 2012 11:41:21 -0500
Received: from d01av01.pok.ibm.com (d01av01.pok.ibm.com [9.56.224.215]) by d01relay06.pok.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id q05GfIOS3481674 for <dc@ietf.org>; Thu, 5 Jan 2012 11:41:19 -0500
Received: from d01av01.pok.ibm.com (loopback [127.0.0.1]) by d01av01.pok.ibm.com (8.14.4/8.13.1/NCO v10.0 AVout) with ESMTP id q05GfIAn003882 for <dc@ietf.org>; Thu, 5 Jan 2012 11:41:18 -0500
Received: from cichlid.raleigh.ibm.com (sig-9-76-133-189.mts.ibm.com [9.76.133.189]) by d01av01.pok.ibm.com (8.14.4/8.13.1/NCO v10.0 AVin) with ESMTP id q05Gf8G5002837 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Thu, 5 Jan 2012 11:41:09 -0500
Received: from cichlid.raleigh.ibm.com (localhost [127.0.0.1]) by cichlid.raleigh.ibm.com (8.14.5/8.12.5) with ESMTP id q05Gf62V028070; Thu, 5 Jan 2012 11:41:07 -0500
Message-Id: <201201051641.q05Gf62V028070@cichlid.raleigh.ibm.com>
To: david.black@emc.com
In-reply-to: <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBE3@MX14A.corp.emc.com>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com> <618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com> <4EFC947A.4020007@riw.us> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com> <CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com> <CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com> <4F030418.1070202@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com> <4F031689.1050303@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBA5@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B256BA@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBD6@MX14! A.corp.emc.com> <CAMXVrt4QnsbZODLy2b9FsOmfwK5h4vhyA4kqfw48DH+Qie9SoQ@mail.gmail.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBE3@MX14A.corp.emc.com>
Comments: In-reply-to <david.black@emc.com> message dated "Tue, 03 Jan 2012 12:35:17 -0500."
Date: Thu, 05 Jan 2012 11:41:06 -0500
From: Thomas Narten <narten@us.ibm.com>
x-cbid: 12010516-1976-0000-0000-00000931E174
Cc: pedro.r.marques@gmail.com, dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 16:41:32 -0000

<david.black@emc.com> writes:

> > Whether the VMs change the MAC or not is not necessarily relevant
> > either. What is the service model to the VM ? Is it an IEEE compatible
> > LAN or an ethernet (point-to-point) interface with the ability to
> > carry IP traffic ?. In the later case the MAC is not relevant.

> The widely deployed service model for VMs that I'm thinking of is
> an IEEE-compatible LAN (L2 network service, for which the MAC matters).

And specifically, within one VLAN. I.e., there is no notion of trying
to emulate multiple VLANs in such a service.

Right?

Thomas


From adalela@cisco.com  Thu Jan  5 09:39:06 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id D622E21F878F for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 09:39:05 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.412
X-Spam-Level: 
X-Spam-Status: No, score=-2.412 tagged_above=-999 required=5 tests=[AWL=0.187,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id HEHHclG2Tl5t for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 09:39:04 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id 5101C21F878C for <dc@ietf.org>; Thu,  5 Jan 2012 09:39:03 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=13922; q=dns/txt; s=iport; t=1325785143; x=1326994743; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=VBSyIKOtNE66VdCqdi6CWCFQZa5zNVFFK+XzsnM1JY8=; b=P32/3qWNv4QXyEaDkrVCOhDwzmjcP34q6DcFfeRINMbZ8wSZcseQoPZS 98TeprDiapAbW6FuOPAblHO4io9q1DCNanxmiTVvOT0rxpv+g64KF7ti5 3++bg45BF9OfGOTTjgIxxk8XN+xxk9tNFzeKBYZGibZ9bXgGDzhREUdX3 Y=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: Ar0EAOTeBU9Io8UY/2dsb2JhbAA4CoIFq32BcgEBAQMBAQEBDwEdCjQLBQcEAgEIEQQBAQEKBhcBBgEmHwkIAQEECwgIEweHWAiXdQGeE4hWglhjBIg3nww
X-IronPort-AV: E=Sophos;i="4.71,462,1320624000";  d="scan'208";a="2855860"
Received: from vla196-nat.cisco.com (HELO bgl-core-3.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 05 Jan 2012 17:39:01 +0000
Received: from xbh-bgl-411.cisco.com (xbh-bgl-411.cisco.com [72.163.129.201]) by bgl-core-3.cisco.com (8.14.3/8.14.3) with ESMTP id q05Hd1vH008268; Thu, 5 Jan 2012 17:39:01 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-411.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Thu, 5 Jan 2012 23:09:01 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Thu, 5 Jan 2012 23:08:59 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B25C8B@XMB-BGL-416.cisco.com>
In-Reply-To: <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7C204@MX14A.corp.emc.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczLv8/i14m93uVUT1SkyOUfB6pYjgAAhaTAAAEwXYAAAoW84A==
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us><7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com><618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com><CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com><CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com><4F030418.1070202@raszuk.net><618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com><4F031689.1050303@raszuk.net><618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com><7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBA5@MX14A.corp.emc.com><CAMXVrt5g-2WCBvm0kcpFx27KG9kdPSBzeRJ20F-degvzaEhYoQ@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B256D4@XMB-BGL-416.cisco.com> <AF48CEB4-18A6-45CE-891B-ACFE599C8FB4@kumari.net> <618BE8B40039924EB9AED233D4A09C5102B25C5A@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7C204@MX14A.corp.emc.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: <david.black@emc.com>
X-OriginalArrivalTime: 05 Jan 2012 17:39:01.0189 (UTC) FILETIME=[E9227F50:01CCCBD0]
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 17:39:06 -0000

David,

So, where does the multicast packet get replicated, since the network is
not aware of user-level multicast groups? How are those distribution
trees to be built over an overlay?

Thanks, Ashish


-----Original Message-----
From: david.black@emc.com [mailto:david.black@emc.com]
Sent: Thursday, January 05, 2012 10:00 PM
To: Ashish Dalela (adalela)
Cc: dc@ietf.org
Subject: RE: [dc] [armd] IP over IP solution for data center
interconnect

Ashish,

For IP-based overlays, this starts by carefully distinguishing the inner
(user) and
outer (provider/infrastructure) IP address blocks, and completely
controlling access
to the outer IP address blocks via encap/decap.  The result is that an
IGMP join
from a user gets encapsulated and can't be processed with respect to
those outer
addresses.  That creates a requirement to provide an IP multicast
service to the
users over the encapsulation.

Thanks,
--David

> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
Ashish Dalela (adalela)
> Sent: Thursday, January 05, 2012 11:01 AM
> To: Warren Kumari
> Cc: Pedro Marques; Black, David; dc@ietf.org
> Subject: Re: [dc] [armd] IP over IP solution for data center
interconnect
>
> Hi Warren,
>
> You are prescribing a hypervisor-based solution, and table scaling
> issues don't arise there (or at least are not obvious) because it all
> happens in software. However, there are other issues. For example, how
> do you implement broadcast and multicast? The standard mechanism today
> is to map a broadcast domain to a multicast group in L3. Now, what
> happens if some rogue user sends an IGMP join to that group - what was
> on the VLAN is now accessible to everyone through an IGMP join. For
> user level multicast, there are other issues. Assume I'm doing a VDI
> cloud, where users need to join multicast video conferencing. The group
> is user determined, not admin determined. How do we know that the user
> is not joining a VLAN mapped multicast group?
>
> We need to keep a complete set of goals in mind. Otherwise, we can
> solve an issue and miss a goal. E.g. multicast and broadcast.
>
> Thanks, Ashish
>
>
> -----Original Message-----
> From: Warren Kumari [mailto:warren@kumari.net]
> Sent: Thursday, January 05, 2012 9:06 PM
> To: Ashish Dalela (adalela)
> Cc: Warren Kumari; Pedro Marques; david.black@emc.com; dc@ietf.org
> Subject: Re: [dc] [armd] IP over IP solution for data center
> interconnect
>
>
> On Jan 3, 2012, at 12:45 PM, Ashish Dalela (adalela) wrote:
>
> >
> > Suppose you have an IP solution.
>
> Sure.
>
> >
> > To support mobility you need IP-in-IP encapsulation.
>
> And if you do an overlay you always do an IP encapsulation (to cover
> GRE, IPIP, sit, IPsec, PPP, etc.).
>
> >
> > As VM density increases, as VM-to-VM conversation grows, as
> > interfaces per VM increase, the host routes increase.
>
> No.
>
> The only thing that the network needs to know is the routes to the
> hypervisors / physical machines -- this is a solved problem.
> The VM addresses and routes are only visible to the [gateways,
> hypervisors with VMs in that overlay, other VMs in the same overlay,
> mapping server].
>
> For a really old overview:
> http://tools.ietf.org/html/draft-wkumari-dcops-l3-vmmobility-00
>
> > These host routes are in addition to network routes, local host-port
> > bindings, ACLs, etc. That means in addition to everything that
> > existed so far.
>
> No.
>
>
> >
> > Eventually, you hit a limit on the access, and you have to reduce
> > size of network, reduce VM mobility, reduce VM density per server,
> > reduce application spread.
> >
>
> No.
>
> > The alternative is to constantly increase network hardware table
> > sizes at access, which increases costs and energy.
> >
>
> No.
>
> > We have to realize that IP encapsulations put network and compute at
> > opposite sides of the cost trend. Compute cost reduces slowly as
> > size grows. Network cost grows rapidly as size grows.
> >
>
> No.
>
>
>
> > Thanks,
> > Ashish
> >
> >
> > -----Original Message-----
> > From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> > Pedro Marques
> > Sent: Tuesday, January 03, 2012 10:46 PM
> > To: david.black@emc.com
> > Cc: dc@ietf.org
> > Subject: Re: [dc] [armd] IP over IP solution for data center
> > interconnect
> >
> > That assumes that the MAC has relevance in the network. It is
> > possible to build solutions such that packets are forwarded based on
> > their IP addresses rather than their MACs.
> >
> >  Pedro.
> >
> > On Tue, Jan 3, 2012 at 8:41 AM,  <david.black@emc.com> wrote:
> >> Ashish,
> >>
> >>>> [AD] The higher bits identify a switch - it's a switch-id.
> >>
> >> That breaks VM migration across switches by forcing a MAC change.
> >>
> >> Thanks,
> >> --David
> >>
> >>> -----Original Message-----
> >>> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf
> >>> Of Ashish Dalela (adalela)
> >>> Sent: Tuesday, January 03, 2012 11:15 AM
> >>> To: robert@raszuk.net
> >>> Cc: Pedro Marques; dc@ietf.org
> >>> Subject: Re: [dc] [armd] IP over IP solution for data center
> >>> interconnect
> >>>
> >>> Robert,
> >>>
> >>> Please see inline.
> >>>
> >>> -----Original Message-----
> >>> From: Robert Raszuk [mailto:robert@raszuk.net]
> >>> Sent: Tuesday, January 03, 2012 8:24 PM
> >>> To: Ashish Dalela (adalela)
> >>> Cc: Pedro Marques; dc@ietf.org
> >>> Subject: Re: [dc] [armd] IP over IP solution for data center
> >>> interconnect
> >>>
> >>> Ashish,
> >>>
> >>> OK let's just discuss what is in your draft on Hierarchical
> >>> Addressing.
> >>>
> >>> 1. You have 48 bits, 32 go for host, remaining 16 go for switches.
> >>> How do you aggregate at the TOR or AGGR switch boundary ? Are you
> >>> assuming single HOST - SWITCH with max 65K flat macs ?
> >>>
> >>> [AD] The higher bits identify a switch - it's a switch-id. The
> >>> hosts are dynamically assigned a host-id under that switch. Let's
> >>> assume 23 bits are for switch-id and 23 bits for host-id. To
> >>> forward a packet to the host, you only have to look at the first
> >>> 23 bits. That's a MAC prefix to route against.
> >>>
> >>> [AD] You can have 2^23 switches in a network and 2^23 hosts under
> >>> each switch.
> >>>
> >>> 2. Can you deploy this on existing VMs and existing switches ?
> >>>
> >>> [AD] What do you mean by this? Any VM can be configured with any
> >>> MAC. Any physical host can be configured with any MAC on any
> >>> logical interface. From a configuration standpoint this is
> >>> possible. From a forwarding standpoint, that's another question.
> >>>
> >>> 3. What new protocol do you envision using to distribute those
> >>> new MACs ?
> >>>
> >>> [AD] IS-IS extensions. It can be TRILL extensions.
> >>>
> >>> 4. What is the advantage of using this vs ILNP if we assume that
> >>> hosts should be modified ?
> >>>
> >>> [AD] I'm not familiar with the ILNP work, but I'm assuming you are
> >>> talking about Loc-Id separation. If not, correct me. If yes, each
> >>> Loc-Id binding can be a host route, with mobility. These host
> >>> routes are a scaling problem. A traditional IP packet has IP as ID
> >>> and MAC as LOC. We are just extending this LOC to make it actually
> >>> location-aware rather than a flat address which is fixed
> >>> regardless of where the location is.
> >>>
> >>> 5. The proposal does not support aggregation .. even the draft
> >>> says it :)
> >>>
> >>> "The total number of hardware entries anywhere in the network
> >>> equals the total number of switches and remains agnostic of VM
> >>> mobility."
> >>>
> >>> [AD] For 1 million VMs, and 25 VMs per host, you need 40,000
> >>> hosts. With 48-port access switches, you need 833 switches. That's
> >>> the routing table size for any switch in the datacenter - core,
> >>> aggregation, access. Contrast this with host routes: if each VM
> >>> talks to 100 VMs, then each access switch needs 48 * 25 * 100 =
> >>> 120,000 host routes. Just because the network prefix is 23 bits
> >>> does not mean we have to store 2^23 prefixes. We have to store
> >>> only as many switches as there are in the network. The ratio
> >>> between VM : switch is 1000 : 1 (today, assuming 48-port access
> >>> and 20 VMs per port). That means instead of storing host routes,
> >>> which grow proportionally to VM growth, we store switch-ids, which
> >>> grow at a 1000 times slower rate. As VM density increases, this
> >>> growth rate is further slowed down. There are other techniques to
> >>> further reduce the rate of growth. But in any case, 1000 times
> >>> slower is a lot slower.
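[Editor's note: the arithmetic in the paragraph above can be checked directly. This is a back-of-envelope sketch; all parameter values (1M VMs, 25 VMs/host, 48-port access switches, ~100 peers per VM) are taken from the message itself, and the point is the ratio, not the exact rounding.]

```python
# Back-of-envelope check of the table sizes quoted above.
vms = 1_000_000
vms_per_host = 25
ports_per_access_switch = 48
peers_per_vm = 100

hosts = vms // vms_per_host                 # 40,000 physical servers
switches = hosts // ports_per_access_switch # ~833 access switches

# Hierarchical MACs: every switch stores one route per switch.
switch_routes = switches

# Flat host routes: 48 ports * 25 VMs each, ~100 peers per VM.
host_routes = ports_per_access_switch * vms_per_host * peers_per_vm

print(hosts, switch_routes, host_routes)    # 40000 833 120000
```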
> >>>
> >>> So if I have 100K switches I can not do any aggregation and need
> >>> to "route" 100K MAC addresses.
> >>>
> >>> [AD] I don't know how you came to that conclusion. Think of HMAC
> >>> as an IP address. Instead of 32 bits it is 46 bits. You route by
> >>> prefixes in L3, and you are routing by the same prefixes here.
> >>> Just as you aggregate IP, same way you aggregate MAC. It's not
> >>> different.
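[Editor's note: a minimal sketch of the prefix-routing idea described above. The 23-bit/23-bit split comes from the discussion; the exact bit layout (switch-id in the high bits) is an assumed encoding for illustration, not something the draft specifies.]

```python
# Route a hierarchical MAC on its switch-id prefix, the way an IP
# router matches a network prefix. Host bits only matter at the
# destination switch.
HOST_BITS = 23

def split_hmac(hmac: int) -> tuple[int, int]:
    """Return (switch_id, host_id) from a 46-bit hierarchical MAC."""
    return hmac >> HOST_BITS, hmac & ((1 << HOST_BITS) - 1)

def next_hop(hmac: int, switch_table: dict[int, str]) -> str:
    """Forward on the switch-id alone, like a longest-prefix lookup
    with a single fixed prefix length."""
    switch_id, _ = split_hmac(hmac)
    return switch_table[switch_id]

# Host 0x1F attached to switch 0x2A (hypothetical identifiers):
hmac = (0x2A << HOST_BITS) | 0x1F
table = {0x2A: "port-7"}
print(split_hmac(hmac), next_hop(hmac, table))  # (42, 31) port-7
```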
> >>>
> >>> 6. Who provides me the mapping between switch mac and host/vm mac
> >>> behind such switch ? Do switches proxy arp globally within your
> >>> domain ?
> >>>
> >>> [AD] Variation of the same question. Above should answer it.
> >>>
> >>> Thx,
> >>> R.
> >>>
> >>>
> >>>> Robert,
> >>>>
> >>>>>> So you are advocating solution which is based on encapsulation
> >>>>>> - that is fine.
> >>>>
> >>>> No, I'm not. Did you read the draft I had mentioned?
> >>>> Hierarchical MAC is not encapsulation. It is one 48 bit address.
> >>>>
> >>>>>> However how could you ever arrive at the conclusion that HMACs
> >>>>>> would scale better than "anything we know". Well I don't know
> >>>>>> about you, but I know that the key to scaling is ability to
> >>>>>> aggregate. And it is not that huge mystery that MACs aggregate
> >>>>>> rather poorly while there are quite well deployed protocols (be
> >>>>>> it IPv4 or IPv6) which aggregate natively
> >>>>
> >>>> You are hitting the nail on the head. So, read the draft I
> >>>> mentioned. Hierarchical MAC is higher bits "network prefix" and
> >>>> lower bits "host id". That's summarizable like an IP address and
> >>>> aggregated. It has 46 bits to modify, so larger than the IPv4
> >>>> internet.
> >>>>
> >>>> I won't comment on the rest, because you have made an assumption
> >>>> about encapsulation.
> >>>>
> >>>> I refer to this -
> >>>> http://tools.ietf.org/html/draft-dalela-dc-approaches-00.
> >>>>
> >>>> Thanks, Ashish
> >>>>
> >>>>
> >>>> -----Original Message-----
> >>>> From: Robert Raszuk [mailto:robert@raszuk.net]
> >>>> Sent: Tuesday, January 03, 2012 7:05 PM
> >>>> To: Ashish Dalela (adalela)
> >>>> Cc: Pedro Marques; dc@ietf.org
> >>>> Subject: Re: [dc] [armd] IP over IP solution for data center
> >>>> interconnect
> >>>>
> >>>> Ashish,
> >>>>
> >>>>> The issues of scale you mentioned don't exist in Hierarchical
> >>>>> MACs, which scale better than anything we know of.
> >>>>
> >>>> So you are advocating solution which is based on encapsulation -
> >>>> that is fine.
> >>>>
> >>>> However how could you ever arrive at the conclusion that HMACs
> >>>> would scale better than "anything we know". Well I don't know
> >>>> about you, but I know that the key to scaling is ability to
> >>>> aggregate. And it is not that huge mystery that MACs aggregate
> >>>> rather poorly while there are quite well deployed protocols (be
> >>>> it IPv4 or IPv6) which aggregate natively.
> >>>>
> >>>> For inter-dc this is IMHO a must. A must even if you build it
> >>>> using traditional routers or OF enabled switches - does not
> >>>> matter.
> >>>>
> >>>>> I don't want to split the requirements into multiple use-cases
> >>>>> because then this DC group will be many groups - one doing L2
> >>>>> and another doing L3. That I think you will agree is not optimal
> >>>>> for anyone
> >>>>
> >>>> Why does MAC-in-IP not solve it for everyone ? And there are
> >>>> deployed solutions already ..
> >>>>
> >>>> IMHO what this group should accomplish is not to try to reinvent
> >>>> the world, but perhaps as example discuss where is the right
> >>>> boundary of encapsulation, how should we communicate between
> >>>> network and hosts, what kind of DC instrumentation should be IETF
> >>>> blessed for easy integration (ie min subset of functionality it
> >>>> should possess etc .... )
> >>>>
> >>>> R.
> >>>>
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc


From david.i.allan@ericsson.com  Thu Jan  5 09:48:27 2012
Return-Path: <david.i.allan@ericsson.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 68C9021F8776 for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 09:48:27 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -6.599
X-Spam-Level: 
X-Spam-Status: No, score=-6.599 tagged_above=-999 required=5 tests=[AWL=0.000,  BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id JJl4X+KDcnb3 for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 09:48:26 -0800 (PST)
Received: from imr3.ericy.com (imr3.ericy.com [198.24.6.13]) by ietfa.amsl.com (Postfix) with ESMTP id AA07621F8775 for <dc@ietf.org>; Thu,  5 Jan 2012 09:48:26 -0800 (PST)
Received: from eusaamw0712.eamcs.ericsson.se ([147.117.20.181]) by imr3.ericy.com (8.13.8/8.13.8) with ESMTP id q05HmKci030583 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL); Thu, 5 Jan 2012 11:48:26 -0600
Received: from EUSAACMS0703.eamcs.ericsson.se ([169.254.1.43]) by eusaamw0712.eamcs.ericsson.se ([147.117.20.181]) with mapi; Thu, 5 Jan 2012 12:48:05 -0500
From: David Allan I <david.i.allan@ericsson.com>
To: Thomas Narten <narten@us.ibm.com>
Date: Thu, 5 Jan 2012 12:48:03 -0500
Thread-Topic: [dc] 24-bit VLAN tags [was Re: draft-dalela-dc-requirements-00.txt]
Thread-Index: AczLxXZMYZroNroeSVWWz8EqKUM1fgACzHMQ
Message-ID: <60C093A41B5E45409A19D42CF7786DFD5229048737@EUSAACMS0703.eamcs.ericsson.se>
References: <201201041759.q04Hx68f009409@cichlid.raleigh.ibm.com> <60C093A41B5E45409A19D42CF7786DFD5228FC5031@EUSAACMS0703.eamcs.ericsson.se> <201201051616.q05GGIoU027563@cichlid.raleigh.ibm.com>
In-Reply-To: <201201051616.q05GGIoU027563@cichlid.raleigh.ibm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Cc: "Ashish Dalela \(adalela\)" <adalela@cisco.com>, "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] 24-bit VLAN tags [was Re: draft-dalela-dc-requirements-00.txt]
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 17:48:27 -0000

Hi Thomas:

<snipped>

> Just how widely deployed are 24-bit VLAN tags these days? Presumably
> you are referring to PBB/SPB?

Well, 802.1ah PBB has the 24-bit I-tag....which is also used by 802.1aq
SPBM, but that is only one example.

I believe (but do not quote me) that the VMware vCDNI port group is a
24-bit identifier....(analogous concept)....

The VxLAN VNI is 24 bits.

The OTV Overlay ID is 24 bits...

Etc.

So just about every currently proposed solution I know of seems to be
aligned...
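[Editor's note: one reason the 24-bit figure recurs in all of these proposals is simple arithmetic against the 12-bit 802.1Q VLAN ID; nothing below is vendor-specific.]

```python
# A 12-bit 802.1Q VLAN ID (values 0x000 and 0xFFF reserved) caps a
# flat network at 4094 tenant segments; a 24-bit I-SID/VNI/Overlay ID
# allows ~16 million.
vlan_segments = 2**12 - 2
tag24_segments = 2**24
print(vlan_segments, tag24_segments)  # 4094 16777216
```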

> What are the deployment trends for 24-bit VLAN tags in data centers?
> Is this starting to happen? Does it look like there will be
> significant deployments? Or will there be lots of deployments that
> choose not to use them (for whatever reason)?

> It would be useful to hear from operators what they are doing or
> thinking of doing w.r.t. 24-bit VLAN tags.

Not being an operator, I can't comment on deployment. I agree operators
chiming in would be useful...

Cheers
D



_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From adalela@cisco.com  Thu Jan  5 09:58:00 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id C732321F87ED for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 09:58:00 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.418
X-Spam-Level: 
X-Spam-Status: No, score=-2.418 tagged_above=-999 required=5 tests=[AWL=0.181,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id ki8Acd1731It for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 09:57:59 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id C548B21F87E1 for <dc@ietf.org>; Thu,  5 Jan 2012 09:57:58 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=7262; q=dns/txt; s=iport; t=1325786279; x=1326995879; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=TECbHkfS2C92ULmuscrFsjYNMs2aREPi/Ma6mO6tXlU=; b=Lc2PvYL9q3uEgaldJaF4f6kTVCyD6VU/mJk7WLxbFLH/B3V9w+c9fRVP 4QuGOLOTGI3CO+LoE8RhtwQ3+lOEvcx78wAIY2UdCVXMu96ojFH9mBIpY 56SLbF3tdPaqaOj/V1yK2x2MB/ODm2TAabS5LAu4tDcfvEwU9+pUTj504 8=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: ArwEAFbjBU9Io8UY/2dsb2JhbAA4CoIFq32BcgEBAQMBEgEdCj8FBwQCAQgRBAEBCwYFEgEGAUUJCAEBBAsICBMHh1iXegGeGIhWFYJDYwSIN58M
X-IronPort-AV: E=Sophos;i="4.71,462,1320624000";  d="scan'208";a="2856346"
Received: from vla196-nat.cisco.com (HELO bgl-core-4.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 05 Jan 2012 17:57:57 +0000
Received: from xbh-bgl-412.cisco.com (xbh-bgl-412.cisco.com [72.163.129.202]) by bgl-core-4.cisco.com (8.14.3/8.14.3) with ESMTP id q05HvvrN003549; Thu, 5 Jan 2012 17:57:57 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-412.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Thu, 5 Jan 2012 23:27:57 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Thu, 5 Jan 2012 23:27:55 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B25C98@XMB-BGL-416.cisco.com>
In-Reply-To: <201201051610.q05GAqwn027469@cichlid.raleigh.ibm.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] draft-dalela-dc-requirements-00.txt
Thread-Index: AczLxLhkG5Y/y4AnT0Se7gDe3eU6bAADqO4Q
References: <201201041759.q04Hx68f009409@cichlid.raleigh.ibm.com> <618BE8B40039924EB9AED233D4A09C5102B25AA7@XMB-BGL-416.cisco.com> <201201051610.q05GAqwn027469@cichlid.raleigh.ibm.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Thomas Narten" <narten@us.ibm.com>
X-OriginalArrivalTime: 05 Jan 2012 17:57:57.0324 (UTC) FILETIME=[8E52F0C0:01CCCBD3]
Cc: dc@ietf.org
Subject: Re: [dc] draft-dalela-dc-requirements-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 17:58:00 -0000

Thomas,

You are confusing the nature of two drafts. As I said on the other
email, one is about GOALS and the other is about ISSUES. You are looking
for the second type of thing in the first one. We have many examples of
how to solve issues without addressing all goals.

Thanks, Ashish

-----Original Message-----
From: Thomas Narten [mailto:narten@us.ibm.com]
Sent: Thursday, January 05, 2012 9:41 PM
To: Ashish Dalela (adalela)
Cc: dc@ietf.org
Subject: Re: [dc] draft-dalela-dc-requirements-00.txt

"Ashish Dalela (adalela)" <adalela@cisco.com> writes:

> Going by the number of map-encap approaches out there that don't scale
> at the network boundaries (access, interconnect, etc.) it would *seem*
> that we did not give adequate attention to scale. It might be that the
> alternatives haven't been present. I'm ok to modify this statement as
> follows: "Scalability is a primary consideration to be kept in mind,
> because without the intended scale, any solution will work". Does that
> sound right?

I don't want to nitpick on this wording, but I wouldn't use wording
like "any solution will work". But I did want to point out that
implying or saying that the IETF doesn't care about scaling is just
not true (and not helpful).
> >> What specifically is missing that prevents the above from being
> >> done today? What is it that you think needs doing that can't be
> >> done with existing standards?

> You perhaps missed the following problem statement:

> <snip>
>    treating inter and intra datacenter
>    as entirely independent leads to new issues at the edge that arise
>    from trying to map one forwarding approach within datacenter to
>    another forwarding approach between datacenters. In some cases,
>    both L2 and L3 approaches may be needed to connect two datacenters.
>    Further, ideally, customer segmentation in the internet needs to be
>    done similar to the segmentation in the datacenter. This simplifies
>    the identification of a customer's packets in the Internet as in
>    the datacenter. Common QoS and Security policies can be applied, in
>    both the domains if there is a common way to identify packets
> </snip>

> The key problem is still that datacenter inter-connectivity is not
> necessarily about connecting DCs of the same provider. It is also
> about connecting them between private and public domains. We could use
> an approach that makes the provider DC edge scale very high, but you
> can't do that for a customer who has a smaller datacenter - you will
> be pushing the complexity to the customer edge, and their devices
> aren't designed to scale. In other words, if I have a large public
> cloud connected to a small private cloud, should the small cloud bear
> the burden of the large cloud? That needs better stating, I agree.

Sorry, I don't understand how this leads to a concrete, specific
problem that the IETF needs to work on.

This is just too high-level, general stuff. We can argue forever about
whether it's true, what it means, etc. But I don't think that sort of
discussion is productive or useful.

Please identify a specific problem with (say) a specific protocol or
deployment where what we have today doesn't work adequately and needs
a better approach. We need more details of a real problem experienced
by operators out in the field.

> >> I suspect you'll get a lot of agreement on this. And one of the key
> >> aims of NVO3 is to address this. Is the existing NVO3 approach not
> >> adequate for the above? If so why not?

> There are many approaches out there, and the discussion of approaches
> is in the separate draft,
> tools.ietf.org/html/draft-dalela-dc-approaches-00. We understand that
> many of these problems may have been stated in other places. But we
> can't avoid that.

The purpose of this list (presumably) is to focus on new problems or
problems that aren't being discussed elsewhere already.

What are the problems for which there isn't a potential home yet?
Those are the ones we should be trying to tease out here.

> >> Isn't this already possible, and indeed, happening today? What IETF
> >> work is needed? What standards gap needs filling?

> Not necessarily. Take the example of two architectures - scale-up vs.
> scale-out - and compare them for map-encap. The scale-up model
> requires fewer switch-to-switch map-encaps but a lot more internal
> mapping. So, from a technology perspective, there is no problem at all
> if you look at this from the outside. I can claim that I have one huge
> switch in which everything is connected. The problem is just
> abstracted from view, and it may be inside the huge switch. Contrast
> this with the scale-out model, where there are many smaller switches
> and the map-encap is externalized. The problem is more visible. The
> technology you devise has to be such that I can do both scale-up and
> scale-out.

Again, this is too high-level. We need to talk more specificically
about protocols and IETF documents that are insufficient or need to be
developed.

> >> What work does the above imply that the IETF needs to do?

> If you have looked at the other thread which talked about the number
> of routes, a 1M VM datacenter can require several million host
> routes. That implies slow convergence. There have been other
> discussions on convergence as well that talked about pulling out a
> route on demand when packets arrive.

"slow convergence" in a general sense is not an IETF problem.

Which specific protocol is being used today (i.e., in real
deployments) that doesn't converge adequately? Do others agree that
there are problems with that protocol that require solutions?

> >> Is this section saying anything more than there is a need for
> >> multipathing for East-West traffic?

> No, it is not, overtly. Internally, this is also tied to the flow
> management problem.

> >> Can this not be done today? What specific IETF work would be needed
> >> to support the enforcement of SLAs?

> No, there is no work done in the IETF to define SLAs. For service
> provider environments, you can define an SLA at access for a given
> user. When you have 10 VMs talking to each other, and you want to
> guarantee a bandwidth SLA on a VLAN, there is nothing out there. The
> other fact is that in a multi-tenant environment, there is no
> guarantee that you will get 1G bandwidth because you have a 1G
> interface on the VM. Typical network planning takes into account the
> "whole" network design, including what applications you are going to
> run and what bandwidths they need. That isn't true for the cloud, at
> least.

Sounds to me like the topic of SLAs could be standalone and result
(potentially) in its own work group (or additional work in existing
WGs). But to get there, a problem statement document that focused
specifically on SLA issues would seem to be a good starting point.

I.e., what is done today, what protocols are used, why they are
inadequate, what sort of IETF work is needed to close those gaps. But
you need to drill down and provide more details about what is needed
and why.

Do others agree with this? What are some of the perceived gaps?

Thomas


From robert@raszuk.net  Thu Jan  5 09:59:15 2012
Return-Path: <robert@raszuk.net>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 7D30F21F881F for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 09:59:15 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.599
X-Spam-Level: 
X-Spam-Status: No, score=-2.599 tagged_above=-999 required=5 tests=[BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id kQOtozBQ3pXw for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 09:59:15 -0800 (PST)
Received: from mail1310.opentransfer.com (mail1310.opentransfer.com [76.162.254.103]) by ietfa.amsl.com (Postfix) with ESMTP id 9FF3521F881D for <dc@ietf.org>; Thu,  5 Jan 2012 09:59:14 -0800 (PST)
Received: (qmail 9472 invoked by uid 399); 5 Jan 2012 17:59:13 -0000
Received: from unknown (HELO ?192.168.1.57?) (83.24.121.240) by mail1310.opentransfer.com with ESMTP; 5 Jan 2012 17:59:13 -0000
X-Originating-IP: 83.24.121.240
Message-ID: <4F05E4F1.3050209@raszuk.net>
Date: Thu, 05 Jan 2012 18:59:13 +0100
From: Robert Raszuk <robert@raszuk.net>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:9.0) Gecko/20111222 Thunderbird/9.0.1
MIME-Version: 1.0
To: Lizhong Jin <lizhong.jin@zte.com.cn>
References: <OF47BD3CE5.ED91E5B7-ON4825797C.0054E3D9-4825797C.00559D95@zte.com.cn>
In-Reply-To: <OF47BD3CE5.ED91E5B7-ON4825797C.0054E3D9-4825797C.00559D95@zte.com.cn>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: yakov@juniper.net, aldrin.isaac@gmail.com, adalela@cisco.com, dc@ietf.org
Subject: Re: [dc] new drafts
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
Reply-To: robert@raszuk.net
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 17:59:15 -0000

OpenFlow could be one option. The other option could be XMPP, as an example.

 > I have some concern with route convergence when implementing VRF on
 > x86.

Hmmm, you mean that it is going to be faster than any network gear could 
support ;) ? Then just program the hypervisors directly ....

Rgs,
R.


> Hi Robert,
> I think this is also an option. When separating control plane and
> dataplane, do you mean to use openFlow to implement the protocol between
> control plane and dataplane? I have some concern with route convergence
> when implementing VRF on x86.
>
> Regards
> Lizhong
>
>
> Robert Raszuk <robert@raszuk.net> wrote on 2012/01/04 23:17:17:
>
>  > Hi Lizhong,
>  >
>  > How about neither ?
>  >
>  > How about implementing VRF for control plane separation on the x86
>  > controller out of data plane and simply instructing either host hosting
>  > VMs or TOR or Access Switch to forward/encapsulate the packets
> correctly ?
>  >
>  > Cheers,
>  > R.


From david.black@emc.com  Thu Jan  5 10:09:34 2012
Return-Path: <david.black@emc.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id D8C6A21F87DD for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 10:09:34 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.59
X-Spam-Level: 
X-Spam-Status: No, score=-106.59 tagged_above=-999 required=5 tests=[AWL=0.009, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id qGt+EzPsNxfI for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 10:09:33 -0800 (PST)
Received: from mexforward.lss.emc.com (mexforward.lss.emc.com [128.222.32.20]) by ietfa.amsl.com (Postfix) with ESMTP id 473B621F87D6 for <dc@ietf.org>; Thu,  5 Jan 2012 10:09:32 -0800 (PST)
Received: from hop04-l1d11-si02.isus.emc.com (HOP04-L1D11-SI02.isus.emc.com [10.254.111.55]) by mexforward.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q05I9Tcv013859 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Thu, 5 Jan 2012 13:09:31 -0500
Received: from mailhub.lss.emc.com (mailhub.lss.emc.com [10.254.222.130]) by hop04-l1d11-si02.isus.emc.com (RSA Interceptor); Thu, 5 Jan 2012 13:09:17 -0500
Received: from mxhub27.corp.emc.com (mxhub27.corp.emc.com [10.254.110.183]) by mailhub.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q05I9BLV026264; Thu, 5 Jan 2012 13:09:11 -0500
Received: from mxhub37.corp.emc.com (128.222.70.104) by mxhub27.corp.emc.com (10.254.110.183) with Microsoft SMTP Server (TLS) id 8.3.213.0; Thu, 5 Jan 2012 13:09:11 -0500
Received: from mx14a.corp.emc.com ([169.254.1.216]) by mxhub37.corp.emc.com ([128.222.70.104]) with mapi; Thu, 5 Jan 2012 13:09:10 -0500
From: <david.black@emc.com>
To: <adalela@cisco.com>
Date: Thu, 5 Jan 2012 13:09:09 -0500
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczLv8/i14m93uVUT1SkyOUfB6pYjgAAhaTAAAEwXYAAAoW84AAAKwQw
Message-ID: <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7C26B@MX14A.corp.emc.com>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us><7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com><618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com><CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com><CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com><4F030418.1070202@raszuk.net><618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com><4F031689.1050303@raszuk.net><618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com><7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBA5@MX14A.corp.emc.com><CAMXVrt5g-2WCBvm0kcpFx27KG9kdPSBzeRJ20F-degvzaEhYoQ@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B256D4@XMB-BGL-416.cisco.com> <AF48CEB4-18A6-45CE-891B-ACFE599C8FB4@kumari.net> <618BE8B40039924EB9AED233D4A09C5102B25C5A@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7C204@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B25C8B@XMB-BGL-416.cisco.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B25C8B@XMB-BGL-416.cisco.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-EMM-MHVC: 1
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 18:09:35 -0000

Ashish,

> So, where does the multicast packet get replicated, since the network is
> not aware of user-level multicast groups? How are those distribution
> trees to be built over an overlay?

The answer to this question starts from a realization that there are two
networks, and hence two address spaces, involved - the user-visible overlay
network and the underlying data center network that carries the overlay
network via encap/decap.  The user VMs (or physical servers) only see the
user-visible overlay network and its addresses, so the replication of user
multicast packets is a service of that user-visible overlay network
(specifically L3 nodes in that network that operate on the user-visible
IP addresses), not the underlying network.

To make this specific, suppose the user VMs are using 192.168.0.0/16 addresses,
and these are encapsulated over an underlying data center network that is
using 10.0.0.0/8 addresses.  If a user VM (e.g., VDI desktop) sends a join
request for a 10.0.0.0/8 address, *nothing* should happen, because those
10. addresses aren't visible in the overlay network.  Instead, the overlay
network is responsible for providing IP multicast service for the 192.168.0.0/16
addresses - the default IP gateway for that overlay network is one possible
location.
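[Editor's note: the address-space separation described above can be sketched as a toy model. The prefixes are the ones from the example; the function name and return values are hypothetical, not from any implementation.]

```python
import ipaddress

# Tenant (overlay) and provider (underlay) address spaces from the
# example above; in this model they are fully disjoint.
OVERLAY_NET = ipaddress.ip_network("192.168.0.0/16")
UNDERLAY_NET = ipaddress.ip_network("10.0.0.0/8")

def handle_igmp_join(group: str) -> str:
    """Toy decision an overlay L3 node could make for a join request.

    A join for an underlay (10/8) address is ignored, because those
    addresses simply are not visible in the overlay; anything else is
    served by the overlay's own IP multicast service.
    """
    if ipaddress.ip_address(group) in UNDERLAY_NET:
        return "ignored"   # *nothing* should happen
    return "joined"        # overlay provides the IP multicast service
```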

> How are those distribution trees to be built over an overlay?

The same way that they're built in a non-overlay network - the L3 multicast
functionality is the same, only the IP addresses are different.  This is
relatively straightforward for a MAC-in-IP (L2-in-L3) overlay when the service
provided by the L2 overlay includes multicast.  When there's no multicast
provided by the overlay's L2 service or no L2 service (IP-in-IP/L3-in-L3
approach), multicast for the overlay network may involve explicit copying of
packets at L3 nodes in the overlay network (not the underlying network),
and is part of the overlay design.
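[Editor's note: the explicit-copying case above amounts to head-end replication: the overlay L3 node sends one encapsulated unicast copy per remote tunnel endpoint. A minimal sketch, with hypothetical group membership and helper names:]

```python
# Hypothetical map: overlay multicast group -> underlay addresses of the
# tunnel endpoints that have interested receivers behind them.
GROUP_MEMBERS = {
    "239.1.1.1": ["10.0.0.5", "10.0.0.9", "10.0.0.17"],
}

def encapsulate(inner_packet: str, outer_dst: str) -> dict:
    # Encap/decap is the only path between overlay and underlay addresses.
    return {"outer_dst": outer_dst, "payload": inner_packet}

def replicate(inner_packet: str, group: str) -> list:
    """Explicitly copy the packet once per endpoint in the group."""
    return [encapsulate(inner_packet, ep)
            for ep in GROUP_MEMBERS.get(group, [])]
```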

Thanks,
--David

> -----Original Message-----
> From: Ashish Dalela (adalela) [mailto:adalela@cisco.com]
> Sent: Thursday, January 05, 2012 12:39 PM
> To: Black, David
> Cc: dc@ietf.org
> Subject: RE: [dc] [armd] IP over IP solution for data center interconnect
>
> David,
>
> So, where does the multicast packet get replicated, since the network is
> not aware of user-level multicast groups? How are those distribution
> trees to be built over an overlay?
>
> Thanks, Ashish
>
>
> -----Original Message-----
> From: david.black@emc.com [mailto:david.black@emc.com]
> Sent: Thursday, January 05, 2012 10:00 PM
> To: Ashish Dalela (adalela)
> Cc: dc@ietf.org
> Subject: RE: [dc] [armd] IP over IP solution for data center
> interconnect
>
> Ashish,
>
> For IP-based overlays, this starts by carefully distinguishing the inner
> (user) and
> outer (provider/infrastructure) IP address blocks, and completely
> controlling access
> to the outer IP address blocks via encap/decap.  The result is that an
> IGMP join
> from a user gets encapsulated and can't be processed with respect to
> those outer
> addresses.  That creates a requirement to provide an IP multicast
> service to the
> users over the encapsulation.
>
> Thanks,
> --David
>
> > -----Original Message-----
> > From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> Ashish Dalela (adalela)
> > Sent: Thursday, January 05, 2012 11:01 AM
> > To: Warren Kumari
> > Cc: Pedro Marques; Black, David; dc@ietf.org
> > Subject: Re: [dc] [armd] IP over IP solution for data center
> interconnect
> >
> > Hi Warren,
> >
> > You are prescribing a hypervisor-based solution, and table scaling
> > issues don't arise there (or at least are not obvious) because it all
> > happens in software. However, there are other issues. For example, how
> > do you implement broadcast and multicast? The standard mechanism today
> > is to map a broadcast domain to a multicast group in L3. Now, what
> > happens if some rogue user sends an IGMP join to that group - what was
> > on the VLAN is now accessible to everyone through an IGMP join. For
> user
> > level multicast, there are other issues. Assume I'm doing a VDI cloud,
> > where users need to join multicast video conferencing. The group is
> user
> > determined, not admin determined. How do we know that the user is not
> > joining a VLAN mapped multicast group?
> >
> > We need to keep a complete set of goals in mind. Otherwise, we can
> solve
> > an issue and miss a goal. E.g. multicast and broadcast.
> >
> > Thanks, Ashish
> >
> >
> > -----Original Message-----
> > From: Warren Kumari [mailto:warren@kumari.net]
> > Sent: Thursday, January 05, 2012 9:06 PM
> > To: Ashish Dalela (adalela)
> > Cc: Warren Kumari; Pedro Marques; david.black@emc.com; dc@ietf.org
> > Subject: Re: [dc] [armd] IP over IP solution for data center
> > interconnect
> >
> >
> > On Jan 3, 2012, at 12:45 PM, Ashish Dalela (adalela) wrote:
> >
> > >
> > > Suppose you have an IP solution.
> >
> > Sure.
> >
> > >
> > > To support mobility you need IP-in-IP encapsulation.
> >
> > And if you do an overlay you always do an IP encapsulation (to cover
> GRE,
> > IPIP, sit, IPSec, PPP, etc).
> >
> > >
> > > As VM density increases, as VM-to-VM conversation grows, as
> interfaces
> > > per VM increase, the host routes increase.
> >
> > No.
> >
> > The only thing that the network needs to know is the routes to the
> > hypervisors / physical machines -- this is a solved problem.
> > The VM addresses and routes are only visible to the [gateways,
> > hypervisors with VMs in that overlay, other VMs in the same overlay,
> > mapping server].
> >
> > For a really old overview:
> > http://tools.ietf.org/html/draft-wkumari-dcops-l3-vmmobility-00
> >
> > > These host routes are in addition to network routes, local host-port
> > > bindings, ACLs, etc. That means in addition to everything that
> existed
> > > so far.
> >
> > No.
> >
> >
> > >
> > > Eventually, you hit a limit on the access, and you have to reduce
> size
> > > of network, reduce VM mobility, reduce VM density per server, reduce
> > > application spread.
> > >
> >
> > No.
> >
> > > The alternative is to constantly increase network hardware table
> sizes
> > > at access, which increases costs and energy.
> > >
> >
> > No.
> >
> > > We have to realize that IP encapsulations put network and compute at
> > > opposite sides of the cost trend. Compute cost reduces slowly as
> size
> > > grows. Network cost grows rapidly as size grows.
> > >
> >
> > No.
> >
> >
> >
> > > Thanks,
> > > Ashish
> > >
> > >
> > > -----Original Message-----
> > > From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> > > Pedro Marques
> > > Sent: Tuesday, January 03, 2012 10:46 PM
> > > To: david.black@emc.com
> > > Cc: dc@ietf.org
> > > Subject: Re: [dc] [armd] IP over IP solution for data center
> > > interconnect
> > >
> > > That assumes that the MAC has relevance in the network. It is
> possible
> > > to build solutions such that packets are forwarded based on their IP
> > > addresses rather than their MACs.
> > >
> > >  Pedro.
> > >
> > > On Tue, Jan 3, 2012 at 8:41 AM,  <david.black@emc.com> wrote:
> > >> Ashish,
> > >>
> > >>>> [AD] The higher bits identify a switch - it's a switch-id.
> > >>
> > >> That breaks VM migration across switches by forcing a MAC change.
> > >>
> > >> Thanks,
> > >> --David
> > >>
> > >>> -----Original Message-----
> > >>> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf
> Of
> > > Ashish Dalela (adalela)
> > >>> Sent: Tuesday, January 03, 2012 11:15 AM
> > >>> To: robert@raszuk.net
> > >>> Cc: Pedro Marques; dc@ietf.org
> > >>> Subject: Re: [dc] [armd] IP over IP solution for data center
> > > interconnect
> > >>>
> > >>> Robert,
> > >>>
> > >>> Please see inline.
> > >>>
> > >>> -----Original Message-----
> > >>> From: Robert Raszuk [mailto:robert@raszuk.net]
> > >>> Sent: Tuesday, January 03, 2012 8:24 PM
> > >>> To: Ashish Dalela (adalela)
> > >>> Cc: Pedro Marques; dc@ietf.org
> > >>> Subject: Re: [dc] [armd] IP over IP solution for data center
> > >>> interconnect
> > >>>
> > >>> Ashish,
> > >>>
> > >>> OK let's just discuss what is in your draft on Hierarchical
> > > Addressing.
> > >>>
> > >>> 1. You have 48 bits 32 go for host remaining 16 goes for switches.
> > > How
> > >>> do you aggregate at the TOR or AGGR switch boundary ? Are you
> > > assuming
> > >>> single HOST - SWITCH with max 65K flat macs ?
> > >>>
> > >>> [AD] The higher bits identify a switch - it's a switch-id. The
> hosts
> > > are
> > >>> dynamically assigned a host-id under that switch. Let's assume 23
> > > bits
> > >>> are for switch-id and 23 bits for host-id. To forward a packet to
> > the
> > >>> host, you only have to look at the first 23 bits. That's a MAC
> > prefix
> > > to
> > >>> route against.
> > >>>
> > >>> [AD] You can have 2^23 switches in a network and 2^23 hosts under
> > > each
> > >>> switch.
> > >>>
> > >>> 2. Can you deploy this on existing VMs and existing switches ?
> > >>>
> > >>> [AD] What do you mean by this? Any VM can be configured with any
> > MAC.
> > >>> Any physical host can be configured with any MAC on any logical
> > >>> interface. Configuration standpoint this is possible. Forwarding
> > >>> standpoint, that's another question.
> > >>>
> > >>> 3. What new protocol you envision to use to distribute those new
> > MACs
> > > ?
> > >>>
> > >>> [AD] IS-IS extensions. It can be TRILL extensions.
> > >>>
> > >>> 4. What is the advantage of using this vs ILNP if we assume that
> > > hosts
> > >>> should be modified ?
> > >>>
> > >>> [AD] I'm not familiar with the ILNP work, but I'm assuming you are
> > >>> talking about Loc-Id separation. If not, correct me. If yes, each
> > > Loc-Id
> > >>> binding can be a host route, with mobility. These host-routes are
> a
> > >>> scaling problem. Traditional IP packet have IP as ID and MAC as
> LOC.
> > > We
> > >>> are just extending this LOC to make it actually location aware
> > rather
> > >>> than a flat address which is fixed regardless of where the
> location
> > > is.
> > >>>
> > >>> 5. The proposal does not support aggregation .. even the draft
> says
> > > it
> > >>> :)
> > >>>
> > >>> "The total number of hardware entries anywhere in the network
> equals
> > > the
> > >>>
> > >>> total number of switches and remains agnostic of VM mobility."
> > >>>
> > >>> [AD] For 1 million VM, and 25 VM per host, you need 40,000 hosts.
> > > With
> > >>> 48 port access switches, you need 833 switches. That's the routing
> > > table
> > >>> size for any switch in the datacenter - core, aggregation, access.
> > >>> Contrast this with host-routes, if each VM talks to 100 VMs, then
> > > each
> > >>> access switch needs 48 * 25 * 100 = 120,000 host routes. Just because
> > >>> the network prefix is 23 bits does not mean we have to store 2^23
> > >>> prefixes. We have to store only as many switches as there are in
> the
> > >>> network. Ratio between VM : switch is 1000 : 1 (today, assuming 48
> > > port
> > >>> access and 20 VM per port). That means instead of storing
> > host-routes
> > >>> which will grow proportional to VM growth, we store switch-id,
> which
> > >>> will grow at 1000 times slower rate. As VM density increases, this
> > >>> growth rate is further slowed down. There are other techniques to
> > >>> further reduce the rate of growth. But in any case, 1000 times
> > slower
> > > is
> > >>> a lot slow.
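[Editor's note: the arithmetic in the quoted paragraph can be checked directly. All figures are the thread's own assumptions (1M VMs, 25 VMs per host, 48-port access switches, each VM conversing with 100 peers), not measurements.]

```python
# Assumptions stated in the thread.
VMS, VMS_PER_HOST, PORTS, PEERS = 1_000_000, 25, 48, 100

hosts = VMS // VMS_PER_HOST                 # 1M VMs at 25/host -> physical hosts
switches = hosts // PORTS                   # access switches (truncated, as in the mail)
host_routes_per_access = PORTS * VMS_PER_HOST * PEERS  # flat host-route table size

# Prefix routing on switch-ids needs roughly one entry per switch anywhere
# in the network, versus a six-figure host-route table at every access switch.
```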
> > >>>
> > >>> So if I have 100K switches I can not do any aggregation and need
> to
> > >>> "route" 100K MAC addresses.
> > >>>
> > >>> [AD] I don't know how you came to that conclusion. Think of HMAC
> as
> > > an
> > >>> IP address. Instead of 32 bits it is 46 bits. You route by
> prefixes
> > > in
> > >>> L3, and you are routing by the same prefixes here. Just as you
> > > aggregate
> > >>> IP, same way you aggregate MAC. It's not different.
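[Editor's note: the "aggregate MAC like IP" idea in the quoted text can be sketched as a lookup on the switch-id portion of a hierarchical MAC. The 23/23-bit split mirrors the quoted discussion; the helper names and addresses are illustrative only, not from the draft.]

```python
SWITCH_ID_BITS = 23   # high-order "network prefix" (per the quoted split)
HOST_ID_BITS = 23     # low-order host id; 2 of the 48 bits left reserved

def mac_to_int(mac: str) -> int:
    return int(mac.replace(":", ""), 16)

def switch_id(mac: str) -> int:
    # Forwarding only examines the switch-id prefix, like an IP prefix.
    return (mac_to_int(mac) >> HOST_ID_BITS) & ((1 << SWITCH_ID_BITS) - 1)

# One FIB entry per switch, however many hosts sit behind it.
fib = {switch_id("02:00:40:00:00:00"): "port-7"}

def lookup(mac: str) -> str:
    return fib.get(switch_id(mac), "drop")
```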
> > >>>
> > >>> 6. Who provides me the mapping between switch mac and host/vm mac
> > > behind
> > >>>
> > >>> such switch ? Do switches proxy arp globally within your domain ?
> > >>>
> > >>> [AD] Variation of the same question. Above should answer it.
> > >>>
> > >>> Thx,
> > >>> R.
> > >>>
> > >>>
> > >>>> Robert,
> > >>>>
> > >>>>>> So you are advocating solution which is based on encapsulation
> -
> > >>> that
> > >>>> is fine.
> > >>>>
> > >>>> No, I'm not. Did you read the draft I had mentioned?
> > >>>> Hierarchical MAC is not encapsulation. It is one 48 bit address.
> > >>>>
> > >>>>>> However how could you ever arrive at the conclusion that HMACs
> > > would
> > >>>>>> scale better then "anything we know". Well I don't know about
> > > you,
> > >>>> but I
> > >>>>>> know that the key to scaling is ability to aggregate. And it is
> > > not
> > >>>> that
> > >>>>>> huge mystery that MACs aggregate rather poorly while there are
> > > quite
> > >>>>>> well deployed protocols (be it IPv4 or IPv6) which aggregate
> > >>> natively
> > >>>>
> > >>>> You are hitting the issue on the nail. So, read the draft I
> > > mentioned.
> > >>>> Hierarchical MAC is higher bits "network prefix" and lower bits
> > > "host
> > >>>> id".
> > >>>> That's summarizable like IP address and aggregated.
> > >>>> It has 46 bits to modify so larger than IPv4 internet.
> > >>>>
> > >>>> I won't comment on the rest, because you have made an assumption
> > > about
> > >>>> encapsulation.
> > >>>>
> > >>>> I refer to this -
> > >>>> http://tools.ietf.org/html/draft-dalela-dc-approaches-00.
> > >>>>
> > >>>> Thanks, Ashish
> > >>>>
> > >>>>
> > >>>> -----Original Message-----
> > >>>> From: Robert Raszuk [mailto:robert@raszuk.net]
> > >>>> Sent: Tuesday, January 03, 2012 7:05 PM
> > >>>> To: Ashish Dalela (adalela)
> > >>>> Cc: Pedro Marques; dc@ietf.org
> > >>>> Subject: Re: [dc] [armd] IP over IP solution for data center
> > >>>> interconnect
> > >>>>
> > >>>> Ashish,
> > >>>>
> > >>>>> The issues of scale you mentioned don't exist in Hierarchical
> > > MACs,
> > >>>>> which scale better than anything we know of.
> > >>>>
> > >>>> So you are advocating solution which is based on encapsulation -
> > > that
> > >>> is
> > >>>> fine.
> > >>>>
> > >>>> However how could you ever arrive at the conclusion that HMACs
> > > would
> > >>>> scale better then "anything we know". Well I don't know about
> you,
> > > but
> > >>> I
> > >>>> know that the key to scaling is ability to aggregate. And it is
> not
> > >>> that
> > >>>>
> > >>>> huge mystery that MACs aggregate rather poorly while there are
> > > quite
> > >>>> well deployed protocols (be it IPv4 or IPv6) which aggregate
> > > natively.
> > >>>>
> > >>>> For inter-dc this is IMHO a must. A must even if you build it
> using
> > >>>> traditional routers or OF enabled switches - does not matter.
> > >>>>
> > >>>>> I don't want to split the requirements into multiple use-cases
> > >>>>> because then this DC group will be many groups - one doing L2
> and
> > >>>>> another doing L3. That I think you will agree is not optimal for
> > >>>>> anyone
> > >>>>
> > >>>> Why MAC-in-IP does not solve it for everyone ? And there are
> > > deployed
> > >>>> solutions already ..
> > >>>>
> > >>>> IMHO what this group should accomplish is not to try to reinvent
> > > the
> > >>>> world, but perhaps as example discuss where is the right boundary
> > > of
> > >>>> encapsulation, how should we communicate between network and
> hosts,
> > >>> what
> > >>>>
> > >>>> kind of DC instrumentation should be IETF blessed for easy
> > > integration
> > >>>> (ie min subset of functionality it should possess etc .... )
> > >>>>
> > >>>> R.
> > >>>>
> > >>>> _______________________________________________
> > >>>> dc mailing list
> > >>>> dc@ietf.org
> > >>>> https://www.ietf.org/mailman/listinfo/dc
> > >>>>
> > >>>>
> > >>>
> > >>> _______________________________________________
> > >>> dc mailing list
> > >>> dc@ietf.org
> > >>> https://www.ietf.org/mailman/listinfo/dc
> > >>
> > >> _______________________________________________
> > >> dc mailing list
> > >> dc@ietf.org
> > >> https://www.ietf.org/mailman/listinfo/dc
> > > _______________________________________________
> > > dc mailing list
> > > dc@ietf.org
> > > https://www.ietf.org/mailman/listinfo/dc
> > > _______________________________________________
> > > dc mailing list
> > > dc@ietf.org
> > > https://www.ietf.org/mailman/listinfo/dc
> > >
> >
> > _______________________________________________
> > dc mailing list
> > dc@ietf.org
> > https://www.ietf.org/mailman/listinfo/dc
>


From david.black@emc.com  Thu Jan  5 10:19:25 2012
Return-Path: <david.black@emc.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 8935021F8742 for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 10:19:25 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.59
X-Spam-Level: 
X-Spam-Status: No, score=-106.59 tagged_above=-999 required=5 tests=[AWL=0.009, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 3RCTCct4PJic for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 10:19:25 -0800 (PST)
Received: from mexforward.lss.emc.com (mexforward.lss.emc.com [128.222.32.20]) by ietfa.amsl.com (Postfix) with ESMTP id EA48721F86E2 for <dc@ietf.org>; Thu,  5 Jan 2012 10:19:24 -0800 (PST)
Received: from hop04-l1d11-si01.isus.emc.com (HOP04-L1D11-SI01.isus.emc.com [10.254.111.54]) by mexforward.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q05IJOnV023919 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Thu, 5 Jan 2012 13:19:24 -0500
Received: from mailhub.lss.emc.com (mailhub.lss.emc.com [10.254.221.253]) by hop04-l1d11-si01.isus.emc.com (RSA Interceptor); Thu, 5 Jan 2012 13:19:04 -0500
Received: from mxhub24.corp.emc.com (mxhub24.corp.emc.com [128.222.70.136]) by mailhub.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q05IJ3bT017910; Thu, 5 Jan 2012 13:19:04 -0500
Received: from mx14a.corp.emc.com ([169.254.1.216]) by mxhub24.corp.emc.com ([128.222.70.136]) with mapi; Thu, 5 Jan 2012 13:19:03 -0500
From: <david.black@emc.com>
To: <narten@us.ibm.com>
Date: Thu, 5 Jan 2012 13:19:02 -0500
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczLyQUyH332EuTzR1C1/dk9H9iPUgADLu0Q
Message-ID: <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7C275@MX14A.corp.emc.com>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com> <618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com> <4EFC947A.4020007@riw.us> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com> <CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com> <CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com> <4F030418.1070202@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com> <4F031689.1050303@raszuk.net> <618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBA5@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B256BA@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBD6@MX14! A.corp.emc.com> <CAMXVrt4QnsbZODLy2b9FsOmfwK5h4vhyA4kqfw48DH+Qie9SoQ@mail.gmail.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBE3@MX14A.corp.emc.com> <201201051641.q05Gf62V028070@cichlid.raleigh.ibm.com>
In-Reply-To: <201201051641.q05Gf62V028070@cichlid.raleigh.ibm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-EMM-MHVC: 1
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 18:19:25 -0000

> <david.black@emc.com> writes:
>
> > > Whether the VMs change the MAC or not is not necessarily relevant
> > > either. What is the service model to the VM ? Is it an IEEE compatible
> > > LAN or an ethernet (point-to-point) interface with the ability to
> > > carry IP traffic ? In the latter case the MAC is not relevant.
>
> > The widely deployed service model for VMs that I'm thinking of is
> > an IEEE-compatible LAN (L2 network service, for which the MAC matters).
>
> And specifically, within one VLAN. I.e., there is no notion of trying
> to emulate multiple VLANs in such a service.
>
> Right?

Right.  Multiple VLANs result in multiple separate instances of the L2 service
(e.g., accessed via multiple network interfaces [vNICs] in a VM that needs
access to multiple VLANs).

Thanks,
--David
----------------------------------------------------
David L. Black, Distinguished Engineer
EMC Corporation, 176 South St., Hopkinton, MA 01748
+1 (508) 293-7953             FAX: +1 (508) 293-7786
david.black@emc.com           Mobile: +1 (978) 394-7754
----------------------------------------------------


From adalela@cisco.com  Thu Jan  5 10:24:47 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id D625C21F8819 for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 10:24:46 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.423
X-Spam-Level: 
X-Spam-Status: No, score=-2.423 tagged_above=-999 required=5 tests=[AWL=0.176,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 4j-RfL44TXBC for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 10:24:45 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id 1C35821F87E8 for <dc@ietf.org>; Thu,  5 Jan 2012 10:24:40 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=17511; q=dns/txt; s=iport; t=1325787883; x=1326997483; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=Zia3hBwph96YpeZL03UqpFzR7t0GsehHauMjYDa0S8Q=; b=ACMK9Jt8eRl0ntdXPG58KOrIodH7+OlW1ug0w8MK78u59Mj6cBvGhvdy 1nkhLZ+VC0eTwaBEUKA2wS/xkNFHtfMcvSMrypQlJ4AgRt/KUQdnB4oqb FdjGfcHgocnZscD2CYMZDx6c/7OMG1ATBH543VKUNjLDrz2/6jX329KRw U=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: Ar0EAGvqBU9Io8UY/2dsb2JhbAA4CoIFq32BcgEBAQMBAQEBDwEdCi0HCwUHBAIBCBEEAQEBCgYXAQYBJh8JCAEBBAsICBMHh1gIl3sBnheIVoJYYwSIN58M
X-IronPort-AV: E=Sophos;i="4.71,463,1320624000";  d="scan'208";a="2857104"
Received: from vla196-nat.cisco.com (HELO bgl-core-3.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 05 Jan 2012 18:24:39 +0000
Received: from xbh-bgl-411.cisco.com (xbh-bgl-411.cisco.com [72.163.129.201]) by bgl-core-3.cisco.com (8.14.3/8.14.3) with ESMTP id q05IOdEv014839; Thu, 5 Jan 2012 18:24:39 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-411.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Thu, 5 Jan 2012 23:54:39 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Thu, 5 Jan 2012 23:54:37 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B25C9F@XMB-BGL-416.cisco.com>
In-Reply-To: <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7C26B@MX14A.corp.emc.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczLv8/i14m93uVUT1SkyOUfB6pYjgAAhaTAAAEwXYAAAoW84AAAKwQwAAFEQNA=
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us><7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com><618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com><CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com><CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com><4F030418.1070202@raszuk.net><618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com><4F031689.1050303@raszuk.net><618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com><7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBA5@MX14A.corp.emc.com><CAMXVrt5g-2WCBvm0kcpFx27KG9kdPSBzeRJ20F-degvzaEhYoQ@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B256D4@XMB-BGL-416.cisco.com><AF48CEB4-18A6-45CE-891B-ACFE599C8FB4@kumari.net><618BE8B40039924EB9AED233D4A09C5102B25C5A@XMB-BGL-416.cisco.com><7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7C204@MX14A.corp.emc.com><618BE8B40039924EB9AED233D4A09C5102B25C8B@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7C26B@MX14A.corp.emc.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: <david.black@emc.com>
X-OriginalArrivalTime: 05 Jan 2012 18:24:39.0162 (UTC) FILETIME=[491805A0:01CCCBD7]
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 18:24:48 -0000

David,

When you have a default GW replicating packets, how do you get
multipath?

Thanks, Ashish


-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
david.black@emc.com
Sent: Thursday, January 05, 2012 11:39 PM
To: Ashish Dalela (adalela)
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center
interconnect

Ashish,

> So, where does the multicast packet get replicated, since the network
is
> not aware of user-level multicast groups? How are those distribution
> trees to be built over an overlay?

The answer to this question starts from a realization that there are two
networks, and hence two address spaces, involved - the user-visible overlay
network and the underlying data center network that carries the overlay
network via encap/decap.  The user VMs (or physical servers) only see the
user-visible overlay network and its addresses, so the replication of user
multicast packets is a service of that user-visible overlay network
(specifically L3 nodes in that network that operate on the user-visible
IP addresses), not the underlying network.

To make this specific, suppose the user VMs are using 192.168.0.0/16 addresses,
and these are encapsulated over an underlying data center network that is
using 10.0.0.0/8 addresses.  If a user VM (e.g., VDI desktop) sends a join
request for a 10.0.0.0/8 address, *nothing* should happen, because those
10. addresses aren't visible in the overlay network.  Instead, the overlay
network is responsible for providing IP multicast service for the 192.168.0.0/16
addresses - the default IP gateway for that overlay network is one possible
location.

> How are those distribution trees to be built over an overlay?

The same way that they're built in a non-overlay network - the L3 multicast
functionality is the same, only the IP addresses are different.  This is
relatively straightforward for a MAC-in-IP (L2-in-L3) overlay when the service
provided by the L2 overlay includes multicast.  When there's no multicast
provided by the overlay's L2 service or no L2 service (IP-in-IP/L3-in-L3
approach), multicast for the overlay network may involve explicit copying of
packets at L3 nodes in the overlay network (not the underlying network),
and is part of the overlay design.

Thanks,
--David

> -----Original Message-----
> From: Ashish Dalela (adalela) [mailto:adalela@cisco.com]
> Sent: Thursday, January 05, 2012 12:39 PM
> To: Black, David
> Cc: dc@ietf.org
> Subject: RE: [dc] [armd] IP over IP solution for data center
interconnect
>
> David,
>
> So, where does the multicast packet get replicated, since the network
is
> not aware of user-level multicast groups? How are those distribution
> trees to be built over an overlay?
>
> Thanks, Ashish
>
>
> -----Original Message-----
> From: david.black@emc.com [mailto:david.black@emc.com]
> Sent: Thursday, January 05, 2012 10:00 PM
> To: Ashish Dalela (adalela)
> Cc: dc@ietf.org
> Subject: RE: [dc] [armd] IP over IP solution for data center
> interconnect
>
> Ashish,
>
> For IP-based overlays, this starts by carefully distinguishing the
inner
> (user) and
> outer (provider/infrastructure) IP address blocks, and completely
> controlling access
> to the outer IP address blocks via encap/decap.  The result is that an
> IGMP join
> from a user gets encapsulated and can't be processed with respect to
> those outer
> addresses.  That creates a requirement to provide an IP multicast
> service to the
> users over the encapsulation.
>
> Thanks,
> --David
>
> > -----Original Message-----
> > From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> Ashish Dalela (adalela)
> > Sent: Thursday, January 05, 2012 11:01 AM
> > To: Warren Kumari
> > Cc: Pedro Marques; Black, David; dc@ietf.org
> > Subject: Re: [dc] [armd] IP over IP solution for data center
> interconnect
> >
> > Hi Warren,
> >
> > You are prescribing a hypervisor-based solution, and table scaling
> > issues don't arise there (or at least are not obvious) because it
all
> > happens in software. However, there are other issues. For example,
how
> > do you implement broadcast and multicast? The standard mechanism
today
> > is to map a broadcast domain to a multicast group in L3. Now, what
> > happens if some rogue user sends an IGMP join to that group - what
was
> > on the VLAN is now accessible to everyone through an IGMP join. For
> user
> > level multicast, there are other issues. Assume I'm doing a VDI
cloud,
> > where users need to join multicast video conferencing. The group is
> user
> > determined, not admin determined. How do we know that the user is
not
> > joining a VLAN mapped multicast group?
> >
> > We need to keep a complete set of goals in mind. Otherwise, we can
> solve
> > an issue and miss a goal. E.g. multicast and broadcast.
> >
> > Thanks, Ashish
> >
> >
> > -----Original Message-----
> > From: Warren Kumari [mailto:warren@kumari.net]
> > Sent: Thursday, January 05, 2012 9:06 PM
> > To: Ashish Dalela (adalela)
> > Cc: Warren Kumari; Pedro Marques; david.black@emc.com; dc@ietf.org
> > Subject: Re: [dc] [armd] IP over IP solution for data center
> > interconnect
> >
> >
> > On Jan 3, 2012, at 12:45 PM, Ashish Dalela (adalela) wrote:
> >
> > >
> > > Suppose you have an IP solution.
> >
> > Sure.
> >
> > >
> > > To support mobility you need IP-in-IP encapsulation.
> >
> > And if you do an overlay you always do an IP encapsulation (to cover
> GRE,
> > IPIP, sit, IPSec, PPP, etc).
> >
> > >
> > > As VM density increases, as VM-to-VM conversation grows, as
> > > interfaces per VM increase, the host routes increase.
> >
> > No.
> >
> > The only thing that the network needs to know is the routes to the
> > hypervisors / physical machines -- this is a solved problem.
> > The VM addresses and routes are only visible to the [gateways,
> > hypervisors with VMs in that overlay, other VMs in the same overlay,
> > mapping server].
> >
> > For a really old overview:
> > http://tools.ietf.org/html/draft-wkumari-dcops-l3-vmmobility-00
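[Editor's note] The overlay model Warren describes above - the underlay routes only to hypervisors, while VM locations are held as mappings at the overlay edge - can be illustrated with a toy sketch. All names and addresses below are invented for illustration and are not taken from the draft:

```python
# Toy sketch of the overlay argument: the underlay routing table holds
# only hypervisor (physical machine) addresses; the VM-to-hypervisor
# mapping lives at the overlay edge (gateway / hypervisor / mapping
# server). All identifiers here are hypothetical.

# Underlay: one route per physical hypervisor -- this is all the
# network switches ever need to know.
underlay_routes = {
    "hv1": "10.0.1.10",
    "hv2": "10.0.2.20",
}

# Overlay edge state: which hypervisor currently hosts each VM.
vm_location = {
    "vm-a": "hv1",
    "vm-b": "hv2",
}

def encapsulate(inner_dst_vm: str) -> str:
    """Return the underlay (outer) destination for a packet to a VM."""
    hypervisor = vm_location[inner_dst_vm]
    return underlay_routes[hypervisor]

def migrate(vm: str, new_hypervisor: str) -> None:
    """VM mobility updates only the edge mapping, never underlay routes."""
    vm_location[vm] = new_hypervisor

print(encapsulate("vm-a"))   # 10.0.1.10
migrate("vm-a", "hv2")
print(encapsulate("vm-a"))   # 10.0.2.20
```

The point of the sketch is that `underlay_routes` grows with physical machines while VM churn and mobility touch only `vm_location` at the edge, which is the claim being made in the reply above.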
> >
> > > These host routes are in addition to network routes, local host-port
> > > bindings, ACLs, etc. That means in addition to everything that
> > > existed so far.
> >
> > No.
> >
> >
> > >
> > > Eventually, you hit a limit on the access, and you have to reduce
> > > size of network, reduce VM mobility, reduce VM density per server,
> > > reduce application spread.
> > >
> >
> > No.
> >
> > > The alternative is to constantly increase network hardware table
> > > sizes at access, which increases costs and energy.
> > >
> >
> > No.
> >
> > > We have to realize that IP encapsulations put network and compute at
> > > opposite sides of the cost trend. Compute cost reduces slowly as size
> > > grows. Network cost grows rapidly as size grows.
> > >
> >
> > No.
> >
> >
> >
> > > Thanks,
> > > Ashish
> > >
> > >
> > > -----Original Message-----
> > > From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> > > Pedro Marques
> > > Sent: Tuesday, January 03, 2012 10:46 PM
> > > To: david.black@emc.com
> > > Cc: dc@ietf.org
> > > Subject: Re: [dc] [armd] IP over IP solution for data center
> > > interconnect
> > >
> > > That assumes that the MAC has relevance in the network. It is
> > > possible to build solutions such that packets are forwarded based on
> > > their IP addresses rather than their MACs.
> > >
> > >  Pedro.
> > >
> > > On Tue, Jan 3, 2012 at 8:41 AM,  <david.black@emc.com> wrote:
> > >> Ashish,
> > >>
> > >>>> [AD] The higher bits identify a switch - it's a switch-id.
> > >>
> > >> That breaks VM migration across switches by forcing a MAC change.
> > >>
> > >> Thanks,
> > >> --David
> > >>
> > >>> -----Original Message-----
> > >>> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> > >>> Ashish Dalela (adalela)
> > >>> Sent: Tuesday, January 03, 2012 11:15 AM
> > >>> To: robert@raszuk.net
> > >>> Cc: Pedro Marques; dc@ietf.org
> > >>> Subject: Re: [dc] [armd] IP over IP solution for data center
> > > interconnect
> > >>>
> > >>> Robert,
> > >>>
> > >>> Please see inline.
> > >>>
> > >>> -----Original Message-----
> > >>> From: Robert Raszuk [mailto:robert@raszuk.net]
> > >>> Sent: Tuesday, January 03, 2012 8:24 PM
> > >>> To: Ashish Dalela (adalela)
> > >>> Cc: Pedro Marques; dc@ietf.org
> > >>> Subject: Re: [dc] [armd] IP over IP solution for data center
> > >>> interconnect
> > >>>
> > >>> Ashish,
> > >>>
> > >>> OK let's just discuss what is in your draft on Hierarchical
> > >>> Addressing.
> > >>>
> > >>> 1. You have 48 bits: 32 go for the host, the remaining 16 go for
> > >>> switches. How do you aggregate at the TOR or AGGR switch boundary?
> > >>> Are you assuming a single HOST - SWITCH with max 65K flat MACs?
> > >>>
> > >>> [AD] The higher bits identify a switch - it's a switch-id. The
> > >>> hosts are dynamically assigned a host-id under that switch. Let's
> > >>> assume 23 bits are for switch-id and 23 bits for host-id. To
> > >>> forward a packet to the host, you only have to look at the first 23
> > >>> bits. That's a MAC prefix to route against.
> > >>>
> > >>> [AD] You can have 2^23 switches in a network and 2^23 hosts under
> > >>> each switch.
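[Editor's note] A minimal sketch of the prefix split described in the [AD] answer above, assuming the 46 usable bits of the MAC divide into a 23-bit switch-id and a 23-bit host-id. The exact bit layout and helper names here are illustrative, not taken from the draft:

```python
# Illustrative hierarchical-MAC split: high 23 bits are the switch-id
# ("MAC prefix"), low 23 bits are the host-id assigned under that switch.

SWITCH_BITS = 23
HOST_BITS = 23

def make_hmac(switch_id: int, host_id: int) -> int:
    """Compose a hierarchical MAC from a switch-id and a host-id."""
    assert 0 <= switch_id < 2**SWITCH_BITS
    assert 0 <= host_id < 2**HOST_BITS
    return (switch_id << HOST_BITS) | host_id

def switch_prefix(hmac: int) -> int:
    """Forwarding looks only at the high 23 bits -- the MAC prefix."""
    return hmac >> HOST_BITS

hmac = make_hmac(switch_id=0x1A2B, host_id=0x3C4D)
print(switch_prefix(hmac) == 0x1A2B)  # True: routing needs only the prefix
```

This is the sense in which the proposal claims an HMAC aggregates like an IP prefix: any core or aggregation switch stores one entry per switch prefix, not one per host.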
> > >>>
> > >>> 2. Can you deploy this on existing VMs and existing switches?
> > >>>
> > >>> [AD] What do you mean by this? Any VM can be configured with any
> > >>> MAC. Any physical host can be configured with any MAC on any
> > >>> logical interface. From a configuration standpoint this is
> > >>> possible. From a forwarding standpoint, that's another question.
> > >>>
> > >>> 3. What new protocol do you envision using to distribute those new
> > >>> MACs?
> > >>>
> > >>> [AD] IS-IS extensions. It can be TRILL extensions.
> > >>>
> > >>> 4. What is the advantage of using this vs. ILNP if we assume that
> > >>> hosts should be modified?
> > >>>
> > >>> [AD] I'm not familiar with the ILNP work, but I'm assuming you are
> > >>> talking about Loc-ID separation. If not, correct me. If yes, each
> > >>> Loc-ID binding can be a host route, with mobility. These host
> > >>> routes are a scaling problem. Traditional IP packets have IP as the
> > >>> ID and MAC as the LOC. We are just extending this LOC to make it
> > >>> actually location-aware rather than a flat address which is fixed
> > >>> regardless of where the location is.
> > >>>
> > >>> 5. The proposal does not support aggregation .. even the draft
> > >>> says it :)
> > >>>
> > >>> "The total number of hardware entries anywhere in the network
> > >>> equals the total number of switches and remains agnostic of VM
> > >>> mobility."
> > >>>
> > >>> [AD] For 1 million VMs, and 25 VMs per host, you need 40,000
> > >>> hosts. With 48-port access switches, you need 833 switches. That's
> > >>> the routing table size for any switch in the datacenter - core,
> > >>> aggregation, access. Contrast this with host routes: if each VM
> > >>> talks to 100 VMs, then each access switch needs 48 * 25 * 100 =
> > >>> 120,000 host routes. Just because the network prefix is 23 bits
> > >>> does not mean we have to store 2^23 prefixes. We have to store only
> > >>> as many switches as there are in the network. The ratio between
> > >>> VM : switch is 1000 : 1 (today, assuming 48-port access and 20 VMs
> > >>> per port). That means instead of storing host routes, which will
> > >>> grow proportionally to VM growth, we store switch-ids, which will
> > >>> grow at a 1000 times slower rate. As VM density increases, this
> > >>> growth rate is further slowed down. There are other techniques to
> > >>> further reduce the rate of growth. But in any case, 1000 times
> > >>> slower is a lot slower.
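[Editor's note] The back-of-envelope figures in the paragraph above can be checked directly. The numbers are taken from the email itself; note that 40,000 hosts on 48-port access switches actually rounds up to 834 switches, which the email approximates as 833:

```python
# Back-of-envelope check of the scaling figures quoted above.
vms = 1_000_000
vms_per_host = 25
ports_per_access_switch = 48

hosts = vms // vms_per_host                      # 40,000 hosts
switches = -(-hosts // ports_per_access_switch)  # ceil(40000/48) = 834

# Host-route alternative: 48 ports * 25 VMs per host * 100 peers per VM.
host_routes_per_access_switch = 48 * 25 * 100    # 120,000 routes

print(hosts, switches, host_routes_per_access_switch)
```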
> > >>>
> > >>> So if I have 100K switches I cannot do any aggregation and need to
> > >>> "route" 100K MAC addresses.
> > >>>
> > >>> [AD] I don't know how you came to that conclusion. Think of an
> > >>> HMAC as an IP address. Instead of 32 bits it is 46 bits. You route
> > >>> by prefixes in L3, and you are routing by the same prefixes here.
> > >>> Just as you aggregate IP, the same way you aggregate MAC. It's not
> > >>> different.
> > >>>
> > >>> 6. Who provides me the mapping between the switch MAC and the
> > >>> host/VM MAC behind such a switch? Do switches proxy-ARP globally
> > >>> within your domain?
> > >>>
> > >>> [AD] Variation of the same question. The above should answer it.
> > >>>
> > >>> Thx,
> > >>> R.
> > >>>
> > >>>
> > >>>> Robert,
> > >>>>
> > >>>>>> So you are advocating a solution which is based on
> > >>>>>> encapsulation - that is fine.
> > >>>>
> > >>>> No, I'm not. Did you read the draft I had mentioned?
> > >>>> Hierarchical MAC is not encapsulation. It is one 48-bit address.
> > >>>>
> > >>>>>> However, how could you ever arrive at the conclusion that HMACs
> > >>>>>> would scale better than "anything we know"? Well I don't know
> > >>>>>> about you, but I know that the key to scaling is the ability to
> > >>>>>> aggregate. And it is not that huge a mystery that MACs aggregate
> > >>>>>> rather poorly while there are quite well-deployed protocols (be
> > >>>>>> it IPv4 or IPv6) which aggregate natively
> > >>>>
> > >>>> You are hitting the nail on the head. So, read the draft I
> > >>>> mentioned. Hierarchical MAC is higher bits "network prefix" and
> > >>>> lower bits "host id". That's summarizable like an IP address, and
> > >>>> aggregated. It has 46 bits to modify, so it is larger than the
> > >>>> IPv4 internet.
> > >>>>
> > >>>> I won't comment on the rest, because you have made an assumption
> > >>>> about encapsulation.
> > >>>>
> > >>>> I refer to this -
> > >>>> http://tools.ietf.org/html/draft-dalela-dc-approaches-00.
> > >>>>
> > >>>> Thanks, Ashish
> > >>>>
> > >>>>
> > >>>> -----Original Message-----
> > >>>> From: Robert Raszuk [mailto:robert@raszuk.net]
> > >>>> Sent: Tuesday, January 03, 2012 7:05 PM
> > >>>> To: Ashish Dalela (adalela)
> > >>>> Cc: Pedro Marques; dc@ietf.org
> > >>>> Subject: Re: [dc] [armd] IP over IP solution for data center
> > >>>> interconnect
> > >>>>
> > >>>> Ashish,
> > >>>>
> > >>>>> The issues of scale you mentioned don't exist in Hierarchical
> > >>>>> MACs, which scale better than anything we know of.
> > >>>>
> > >>>> So you are advocating a solution which is based on encapsulation -
> > >>>> that is fine.
> > >>>>
> > >>>> However, how could you ever arrive at the conclusion that HMACs
> > >>>> would scale better than "anything we know"? Well I don't know
> > >>>> about you, but I know that the key to scaling is the ability to
> > >>>> aggregate. And it is not that huge a mystery that MACs aggregate
> > >>>> rather poorly while there are quite well-deployed protocols (be it
> > >>>> IPv4 or IPv6) which aggregate natively.
> > >>>>
> > >>>> For inter-DC this is IMHO a must. A must even if you build it
> > >>>> using traditional routers or OF-enabled switches - it does not
> > >>>> matter.
> > >>>>
> > >>>>> I don't want to split the requirements into multiple use-cases
> > >>>>> because then this DC group will be many groups - one doing L2 and
> > >>>>> another doing L3. That, I think you will agree, is not optimal
> > >>>>> for anyone
> > >>>>
> > >>>> Why does MAC-in-IP not solve it for everyone? And there are
> > >>>> deployed solutions already ..
> > >>>>
> > >>>> IMHO what this group should accomplish is not to try to reinvent
> > >>>> the world, but perhaps, as an example, discuss where the right
> > >>>> boundary of encapsulation is, how we should communicate between
> > >>>> network and hosts, and what kind of DC instrumentation should be
> > >>>> IETF-blessed for easy integration (i.e. the min subset of
> > >>>> functionality it should possess, etc. ....)
> > >>>>
> > >>>> R.
> > >>>>
> > >>>> _______________________________________________
> > >>>> dc mailing list
> > >>>> dc@ietf.org
> > >>>> https://www.ietf.org/mailman/listinfo/dc
> > >>>>
> > >>>>
> > >>>
>

_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From david.i.allan@ericsson.com  Thu Jan  5 10:26:10 2012
Return-Path: <david.i.allan@ericsson.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 542C021F8845 for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 10:26:10 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -6.599
X-Spam-Level: 
X-Spam-Status: No, score=-6.599 tagged_above=-999 required=5 tests=[AWL=0.000,  BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id bskxuJgP6-Da for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 10:26:09 -0800 (PST)
Received: from imr3.ericy.com (imr3.ericy.com [198.24.6.13]) by ietfa.amsl.com (Postfix) with ESMTP id CF00E21F87E8 for <dc@ietf.org>; Thu,  5 Jan 2012 10:26:07 -0800 (PST)
Received: from eusaamw0707.eamcs.ericsson.se ([147.117.20.32]) by imr3.ericy.com (8.13.8/8.13.8) with ESMTP id q05IQ5l1001513 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL); Thu, 5 Jan 2012 12:26:06 -0600
Received: from EUSAACMS0703.eamcs.ericsson.se ([169.254.1.43]) by eusaamw0707.eamcs.ericsson.se ([147.117.20.32]) with mapi; Thu, 5 Jan 2012 13:26:03 -0500
From: David Allan I <david.i.allan@ericsson.com>
To: "Ashish Dalela (adalela)" <adalela@cisco.com>, Thomas Narten <narten@us.ibm.com>
Date: Thu, 5 Jan 2012 13:26:02 -0500
Thread-Topic: [dc] draft-dalela-dc-requirements-00.txt
Thread-Index: AczLCsm+jZpuzuroSy6ift2TqX44qAABEINwABnOOyAAGB1BMA==
Message-ID: <60C093A41B5E45409A19D42CF7786DFD52290487A4@EUSAACMS0703.eamcs.ericsson.se>
References: <201201041759.q04Hx68f009409@cichlid.raleigh.ibm.com> <60C093A41B5E45409A19D42CF7786DFD5228FC5031@EUSAACMS0703.eamcs.ericsson.se> <618BE8B40039924EB9AED233D4A09C5102B25ABD@XMB-BGL-416.cisco.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B25ABD@XMB-BGL-416.cisco.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Cc: "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] draft-dalela-dc-requirements-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 18:26:10 -0000

Hi Ashish:

Well, as I indicated, for many the scale problem simplifies to virtualizing
large numbers of broadcast domains. IMO that is one "criterion of goodness".

We COULD postulate that scaling the size of individual virtualized broadcast
domains is also a problem, but IMO that is simply a network/application
design BCP, not an issue.

My observation actually applies to a number of drafts in both DC and
ARMD....this thread simply prompted me.

Cheers
D

-----Original Message-----
From: Ashish Dalela (adalela) [mailto:adalela@cisco.com]
Sent: Wednesday, January 04, 2012 10:58 PM
To: David Allan I; Thomas Narten
Cc: dc@ietf.org
Subject: RE: [dc] draft-dalela-dc-requirements-00.txt

David,

A total of 1 sentence was dedicated to the L2 problem - "L2 networks can't
be made to scale because of high number of broadcasts." And Linda has
commented on this that the scaling issue is also due to MAC summarization,
a modification to be done. There is absolutely no discussion about pre-VLAN
days, and I would like to know where you see it. Broadcast is contained by
VLAN, and we are talking of that.

L2VPN is another story - because there was VPLS, and then there are lots of
new things. There are problems to be solved, and then problems to be solved
given some solution. So, the problem boundary shifts from the time you take
something as a "given". People generally take OSPF (or some L3 routing
protocol) and VLAN as a given in the datacenter. The rest is not a "given".
So, we have to start from what is given.

I'm fine if we want to change the given to something else. Let me know what
we think is the given today.

Thanks, Ashish

-----Original Message-----
From: David Allan I [mailto:david.i.allan@ericsson.com]
Sent: Thursday, January 05, 2012 12:26 AM
To: Thomas Narten; Ashish Dalela (adalela)
Cc: dc@ietf.org
Subject: RE: [dc] draft-dalela-dc-requirements-00.txt


If the goal is to describe the generalized characteristics of what is
needed:
An absolutely flat broadcast domain does not scale...duh!
An absolutely flat L2 network does not scale...duh!
Partitioning the network into a large number of virtual broadcast
domains or L2VPNs/VLANs is what works for many adopters as it supports
PMO. This is what numerous existing standardized and proprietary
solutions offer with various shades of grey attribute-wise (e.g.
scaling, ordering guarantees, properties when failures occur, broadcast
containment, etc.). The one observation is that a 24-bit VLAN tag seems
to be the current gold standard, both with the IEEE and with proprietary
or proposed approaches.

It would be doing the group a service if the issues with Ethernet were
not presented based on a view stuck in perhaps the 2004-2005 timeframe,
or perhaps even before the standardization of the original 12-bit VLAN
tag, let alone what has come since.

;-)
Dave

-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
Thomas Narten
Sent: Wednesday, January 04, 2012 9:59 AM
To: Ashish Dalela (adalela)
Cc: dc@ietf.org
Subject: [dc] draft-dalela-dc-requirements-00.txt

Hi Ashish.

I had a look at this document as it is focused on requirements. Thanks
for doing this.

One starting comment, as the document says:

>    Scalability hasn't generally been a standards consideration and the
>    problems of scaling are left to implementation. But, in the case of
>    cloud datacenters, scaling is the basic requirement, and all problems
>    of cloud datacenters arise due to scaling. The solution development
>    can't therefore ignore the scaling and optimality problem.

I disagree with the above. Scalability has always been one (of many)
factors that goes into development of a standard. Let's just take it as
a given that any solution has to scale adequately for the environment in
which it is to be deployed. Saying more than that (in general terms) is
probably not a useful discussion. To talk about scalability, one has to
talk about a specific technology and where it is or will be deployed.

Looking at Section 5, where the main requirements are listed:

>    5.1. The Basic Forwarding Problem
>
>    Traditionally, datacenter networks have used L2 or L3 technologies.
>    The need to massively scale virtualized hosts breaks both these
>    approaches. L2 networks can't be made to scale because of high number
>    of broadcasts. L3 networks can't support host mobility, since routing
>    uses subnets and an IP cannot be moved out of that subnet. Moving IP
>    in a natively L3 network requires installing host routes at one or
>    more points in the path and that is an approach that can't be scaled.

I suspect there is general agreement that the above is a general
"problem". Having one big flat L2 in a data center is great for VM
migration and placement of services "any place, anytime", but can raise
scaling and other concerns. Pushing L3 all the way out to the edges
(e.g., ToR or hypervisor) makes it hard to place (or move) services/VMs
arbitrarily.

The above is one of the motivations behind the NVO3 work.

> 5.2. The Datacenter Inter-Connectivity Problem
>
>    There are limits to how much a datacenter would be scaled. Workloads
>    need to be placed closer to the clients to reduce latency and
>    bandwidth. Hence, datacenters need to be split into geographical
>    locations and connected over the Internet. Some of these datacenters
>    may be owned by different administrators, as in the case of private
>    and public cloud interconnectivity. Workloads can move between these
>    datacenters, similar to how they move within the datacenter.

In this section, my takeaway is that there will be multiple,
geographically separated data centers. And that they will need to be
connected together. I suspect everyone agrees with that.

But I don't see how this implies there is any specific IETF work that
needs doing. We already have geographically separated data centers, and
there are, e.g., plenty of VPN technologies available for connecting
them together.

What specifically is missing that prevents the above from being done
today? What is it that you think needs doing that can't be done with
existing standards?

> 5.3. The Multi-Tenancy Problem
>
>    Datacenters thus far have been wholly used by single tenant. To
>    separate departments within a tenant, VLANs have been used. This
>    seemed sufficient for the number of segments an enterprise would
>    need. But, this approach can't be extended to cloud datacenters.

I suspect you'll get a lot of agreement on this. And one of the key aims
of NVO3 is to address this.

Is the existing NVO3 approach not adequate for the above? If so why not?

> 5.4. The Technology-Topology Separation Problem
>
>    While large datacenters are becoming common, medium and small
>    datacenters will continue to exist. These may include a branch office
>    connected to a central office, or a small enterprise datacenter that
>    is connected to a huge public cloud. To move workloads across these
>    networks, the technologies used in the datacenter must be agnostic of
>    the topology employed in the various sized datacenters.

>    A small datacenter may use a mesh topology. A medium datacenter may
>    use a three-tier topology. And a large datacenter may use a two-tier
>    multi-path architecture. It has to be recognized that all these
>    datacenters of various sizes need to interoperate. In particular, it
>    should be possible to use a common technology to connect large and
>    small datacenters, two large datacenters, or two small datacenters.

Isn't this already possible, and indeed, happening today?

What IETF work is needed? What standards gap needs filling?

>    5.5. The Network Convergence Problem
>
>    Cloud datacenters will be characterized by elasticity. That means
>    that virtual resources are constantly created and destroyed. Typical
>    hardware and software reliabilities of today mean that failures at
>    scale will be fairly common, and automated recovery mechanisms will
>    need to be put in place. When combined with workload mobility for the
>    sake of resource optimization and improving utilization, the churn in
>    the network forwarding tables can be very significant.

What work does the above imply that the IETF needs to do?

>    Mobility also affects virtualized network devices, such as virtual
>    switches, firewalls, load-balancers, etc. For instance, when a server
>    fails and all the VMs are relocated, the associated virtual switch
>    and firewall must also be relocated. This means that any assumption
>    in mobility that the network is a static firmament on which hosts are
>    dynamically attached becomes false. We have to assume that the
>    network is as dynamic as the hosts themselves.

This here is interesting. The implication is that when moving a VM,
either

a) a FW or LB (or both) may also have to be moved, or

b) some sort of path enforcement is needed that ensures traffic from the
(now moved) VM continues to go through the same LB or FW as before.

Do I understand that correctly? And if so, what is the IETF work that
needs to be done to make all this happen?

>  5.6. The East-West Traffic Problem

Is this section saying anything more than there is a need for
multipathing for East West traffic?

> 5.7. The Network SLA Problem
>=20
>    Multi-tenant networks need to protect all tenants from overusing
>    network resources. For example, high-traffic load from one tenant
>    should not starve another tenant of bandwidth. Note that in a
>    multi-tenant environment, no tenant has full control or visibility of
>    what other tenants are doing, and how problems can be fixed. A
>    real-time debugging of such problems is very hard for a provider.

...

>    Second, mechanisms to measure and guarantee network SLAs will have to
>    employ active flow management to guarantee bandwidth to all tenants
>    and keep the network provisioned only to the level required. Flow
>    management can be integrated as part of existing forwarding
>    techniques or may need new techniques. Network SLAs can play an
>    important role in determining if sufficient bandwidth is available
>    before a VM is moved to a new location.

Can this not be done today? What specific IETF work would be needed to
support the enforcement of SLAs?

Thomas

_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From wim.henderickx@alcatel-lucent.com  Thu Jan  5 10:50:02 2012
Return-Path: <wim.henderickx@alcatel-lucent.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id B8DB021F87ED for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 10:50:02 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -6.249
X-Spam-Level: 
X-Spam-Status: No, score=-6.249 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, HELO_EQ_FR=0.35, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id zv2mBy-TAt0r for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 10:50:01 -0800 (PST)
Received: from smail2.alcatel.fr (smail2.alcatel.fr [64.208.49.57]) by ietfa.amsl.com (Postfix) with ESMTP id B6B4021F87E1 for <dc@ietf.org>; Thu,  5 Jan 2012 10:49:57 -0800 (PST)
Received: from FRMRSSXCHHUB03.dc-m.alcatel-lucent.com (FRMRSSXCHHUB03.dc-m.alcatel-lucent.com [135.120.45.63]) by smail2.alcatel.fr (8.14.3/8.14.3/ICT) with ESMTP id q05Inoal002021 (version=TLSv1/SSLv3 cipher=RC4-MD5 bits=128 verify=NOT); Thu, 5 Jan 2012 19:49:50 +0100
Received: from FRMRSSXCHMBSB1.dc-m.alcatel-lucent.com ([135.120.45.40]) by FRMRSSXCHHUB03.dc-m.alcatel-lucent.com ([135.120.45.63]) with mapi; Thu, 5 Jan 2012 19:49:50 +0100
From: "Henderickx, Wim (Wim)" <wim.henderickx@alcatel-lucent.com>
To: David Allan I <david.i.allan@ericsson.com>, "Ashish Dalela (adalela)" <adalela@cisco.com>, Thomas Narten <narten@us.ibm.com>
Date: Thu, 5 Jan 2012 19:49:48 +0100
Thread-Topic: [dc] draft-dalela-dc-requirements-00.txt
Thread-Index: AczLCsm+jZpuzuroSy6ift2TqX44qAABEINwABnOOyAAGB1BMAAA+LPw
Message-ID: <14C7F4F06DB5814AB0DE29716C4F6D671CD30A75@FRMRSSXCHMBSB1.dc-m.alcatel-lucent.com>
References: <201201041759.q04Hx68f009409@cichlid.raleigh.ibm.com> <60C093A41B5E45409A19D42CF7786DFD5228FC5031@EUSAACMS0703.eamcs.ericsson.se> <618BE8B40039924EB9AED233D4A09C5102B25ABD@XMB-BGL-416.cisco.com> <60C093A41B5E45409A19D42CF7786DFD52290487A4@EUSAACMS0703.eamcs.ericsson.se>
In-Reply-To: <60C093A41B5E45409A19D42CF7786DFD52290487A4@EUSAACMS0703.eamcs.ericsson.se>
Accept-Language: nl-NL, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: nl-NL, en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.69 on 155.132.188.80
Cc: "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] draft-dalela-dc-requirements-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 18:50:02 -0000

David, before we say broadcast domains are not an issue we should understand
how many we will be expecting. Also, multicast state has its limits, btw.

-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of David A=
llan I
Sent: donderdag 5 januari 2012 19:26
To: Ashish Dalela (adalela); Thomas Narten
Cc: dc@ietf.org
Subject: Re: [dc] draft-dalela-dc-requirements-00.txt

HI Ashish:

Well as I indicated, for many the scale problem simplifies to virtualizing =
large numbers of broadcast domains. IMO that is one "criteria of goodness".

We COULD postulate that scaling the size of individual virtualized broadcas=
t domains is also a problem, but IMO that is simply a network/application d=
esign BCP, not an issue.

My observation actually applies to a number of drafts in both DC and ARMD..=
..this thread simply prompted me.

Cheers
D

-----Original Message-----
From: Ashish Dalela (adalela) [mailto:adalela@cisco.com]=20
Sent: Wednesday, January 04, 2012 10:58 PM
To: David Allan I; Thomas Narten
Cc: dc@ietf.org
Subject: RE: [dc] draft-dalela-dc-requirements-00.txt

David,

A total of 1 sentence was dedicated to the L2 problem - "L2 networks can't =
be made to scale because of high number of broadcasts." And Linda has comme=
nted on this that the scaling issue is also due to MAC summarization, modif=
ication to be done. There is absolutely no discussion about pre-VLAN days, =
and I would like to know where you see it. Broadcast is contained by VLAN, =
and we are talking of that.=20

L2VPN is another story - because there was VPLS, and then there are lots of=
 new things. There are problems to be solved, and then problems to be solve=
d given some solution. So, the problem boundary shifts from the time you ta=
ke something as a "given". People generally take OSPF (or some L3 routing p=
rotocol) and VLAN as a given in the datacenter. Rest is not a "given". So, =
we have to start from what is given.

I'm fine, if we want to change the given to something else. Let me know wha=
t we think is the given today.

Thanks, Ashish

-----Original Message-----
From: David Allan I [mailto:david.i.allan@ericsson.com]
Sent: Thursday, January 05, 2012 12:26 AM
To: Thomas Narten; Ashish Dalela (adalela)
Cc: dc@ietf.org
Subject: RE: [dc] draft-dalela-dc-requirements-00.txt


If the goal is to describe the generalized characteristics of what is
needed:
An absolutely flat broadcast domain does not scale...duh!
An absolutely flat L2 network does not scale...duh!
Partitioning the network into a large number of virtual broadcast
domains or L2VPNs/VLANs is what works for many adopters as it supports
PMO. This is what numerous existing standardized and proprietary
solutions offer with various shades of grey attribute wise (e.g.
scaling, ordering guarantees, properties when failures occur, broadcast
containment etc.). The one observation is that a 24 bit VLAN tag seems
to be the current gold standard, both with the IEEE and with proprietary
or proposed approaches.

It would be doing to group a service if the issues with Ethernet were
not presented based on a view stuck in perhaps the 2004-2005 timeframe,
or perhaps even before the standardization of the original 12 bit VLAN
tag, let alone what has come since.

;-)
Dave

-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
Thomas Narten
Sent: Wednesday, January 04, 2012 9:59 AM
To: Ashish Dalela (adalela)
Cc: dc@ietf.org
Subject: [dc] draft-dalela-dc-requirements-00.txt

Hi Ashish.

I had  look at this document as it is focused on requirements. Thanks
for doing this.

One starting comment, as the document says:

>    Scalability hasn't generally been a standards consideration and the
>    problems of scaling are left to implementation. But, in the case of
>    cloud datacenters, scaling is the basic requirement, and all
problems
>    of cloud datacenters arise due to scaling. The solution development
>    can't therefore ignore the scaling and optimality problem.

I disagree with the above. Scalability has always been one (of many)
factors that goes into development of a standard. Let's just take it as
a given that any solution has to scale adequately for the environment in
which it is to be deployed. Saying more than that (in general terms) is
probably not a useful discussion. To talk about scalability, one has to
talk about a specific technology and where it is or will be deployed.

Looking at Section 5, where the main requirements are listed:

>    5.1. The Basic Forwarding Problem
>
>    Traditionally, datacenter networks have used L2 or L3 technologies.
>    The need to massively scale virtualized hosts breaks both these
>    approaches. L2 networks can't be made to scale because of high number
>    of broadcasts. L3 networks can't support host mobility, since routing
>    uses subnets and an IP cannot be moved out of that subnet. Moving IP
>    in a natively L3 network requires installing host routes at one or
>    more points in the path and that is an approach that can't be scaled.

I suspect there is general agreement that the above is a general
"problem". Having one big flat L2 in a data center is great for VM
migration and placement of services "any place, anytime", but can raise
scaling and other concerns. Pushing L3 all the way out to the edges
(e.g., ToR or hypervisor) makes it hard to place (or move) services/VMs
arbitrarily.

The above is one of the motivations behind the NVO3 work.

> 5.2. The Datacenter Inter-Connectivity Problem
>
>    There are limits to how much a datacenter would be scaled. Workloads
>    need to be placed closer to the clients to reduce latency and
>    bandwidth. Hence, datacenters need to be split into geographical
>    locations and connected over the Internet. Some of these datacenters
>    may be owned by different administrators, as in the case of private
>    and public cloud interconnectivity. Workloads can move between these
>    datacenters, similar to how they move within the datacenter.

In this section, my take away is that there will be multiple,
geographically separated data centers. And that they will need to be
connected together. I suspect everyone agrees with that.

But I don't see how this implies there is any specific IETF work that
needs doing. We already have geographically separated data centers, and
there are, e.g., plenty of VPN technologies available for connecting
them together.

What specifically is missing that prevents the above from being done
today? What is it that you think needs doing that can't be done with
existing standards?

> 5.3. The Multi-Tenancy Problem
>
>    Datacenters thus far have been wholly used by single tenant. To
>    separate departments within a tenant, VLANs have been used. This
>    seemed sufficient for the number of segments an enterprise would
>    need. But, this approach can't be extended to cloud datacenters.

I suspect you'll get a lot of agreement on this. And one of the key aims
of NVO3 is to address this.

Is the existing NVO3 approach not adequate for the above? If so why not?

> 5.4. The Technology-Topology Separation Problem
>
>    While large datacenters are becoming common, medium and small
>    datacenters will continue to exist. These may include a branch office
>    connected to a central office, or a small enterprise datacenter that
>    is connected to a huge public cloud. To move workloads across these
>    networks, the technologies used in the datacenter must be agnostic of
>    the topology employed in the various sized datacenters.

>    A small datacenter may use a mesh topology. A medium datacenter may
>    use a three-tier topology. And a large datacenter may use a two-tier
>    multi-path architecture. It has to be recognized that all these
>    datacenters of various sizes need to interoperate. In particular, it
>    should be possible to use a common technology to connect large and
>    small datacenters, two large datacenters, or two small datacenters.

Isn't this already possible, and indeed, happening today?

What IETF work is needed? What standards gap needs filling?

>    5.5. The Network Convergence Problem
>
>    Cloud datacenters will be characterized by elasticity. That means
>    that virtual resources are constantly created and destroyed. Typical
>    hardware and software reliabilities of today mean that failures at
>    scale will be fairly common, and automated recovery mechanisms will
>    need to be put in place. When combined with workload mobility for the
>    sake of resource optimization and improving utilization, the churn in
>    the network forwarding tables can be very significant.

What work does the above imply that the IETF needs to do?

>    Mobility also affects virtualized network devices, such as virtual
>    switches, firewalls, load-balancers, etc. For instance, when a server
>    fails and all the VMs are relocated, the associated virtual switch
>    and firewall must also be relocated. This means that any assumption
>    in mobility that the network is a static firmament on which hosts are
>    dynamically attached becomes false. We have to assume that the
>    network is as dynamic as the hosts themselves.

This here is interesting. The implication is that when moving a VM,
either

a) a FW or LB (or both) may also have to be moved, or

b) some sort of path enforcement is needed that ensures traffic from the
(now moved) VM continues to go through the same LB or FW as before.

Do I understand that correctly? And if so, what is the IETF work that
needs to be done to make all this happen?

>  5.6. The East-West Traffic Problem

Is this section saying anything more than there is a need for
multipathing for East West traffic?

> 5.7. The Network SLA Problem
>
>    Multi-tenant networks need to protect all tenants from overusing
>    network resources. For example, high-traffic load from one tenant
>    should not starve another tenant of bandwidth. Note that in a
>    multi-tenant environment, no tenant has full control or visibility of
>    what other tenants are doing, and how problems can be fixed. A
>    real-time debugging of such problems is very hard for a provider.

...

>    Second, mechanisms to measure and guarantee network SLAs will have to
>    employ active flow management to guarantee bandwidth to all tenants
>    and keep the network provisioned only to the level required. Flow
>    management can be integrated as part of existing forwarding
>    techniques or may need new techniques. Network SLAs can play an
>    important role in determining if sufficient bandwidth is available
>    before a VM is moved to a new location.

Can this not be done today? What specific IETF work would be needed to
support the enforcement of SLAs?

Thomas

_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From david.i.allan@ericsson.com  Thu Jan  5 10:56:02 2012
Return-Path: <david.i.allan@ericsson.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 3984521F873E for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 10:56:02 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -6.599
X-Spam-Level: 
X-Spam-Status: No, score=-6.599 tagged_above=-999 required=5 tests=[AWL=0.000,  BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 8pJnAE-AeNsu for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 10:56:01 -0800 (PST)
Received: from imr3.ericy.com (imr3.ericy.com [198.24.6.13]) by ietfa.amsl.com (Postfix) with ESMTP id F01D421F8623 for <dc@ietf.org>; Thu,  5 Jan 2012 10:56:00 -0800 (PST)
Received: from eusaamw0712.eamcs.ericsson.se ([147.117.20.181]) by imr3.ericy.com (8.13.8/8.13.8) with ESMTP id q05Itxux004384 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL); Thu, 5 Jan 2012 12:55:59 -0600
Received: from EUSAACMS0703.eamcs.ericsson.se ([169.254.1.43]) by eusaamw0712.eamcs.ericsson.se ([147.117.20.181]) with mapi; Thu, 5 Jan 2012 13:55:58 -0500
From: David Allan I <david.i.allan@ericsson.com>
To: "Henderickx, Wim (Wim)" <wim.henderickx@alcatel-lucent.com>, "Ashish Dalela (adalela)" <adalela@cisco.com>, Thomas Narten <narten@us.ibm.com>
Date: Thu, 5 Jan 2012 13:55:57 -0500
Thread-Topic: [dc] draft-dalela-dc-requirements-00.txt
Thread-Index: AczLCsm+jZpuzuroSy6ift2TqX44qAABEINwABnOOyAAGB1BMAAA+LPwAAAu12A=
Message-ID: <60C093A41B5E45409A19D42CF7786DFD52290487E9@EUSAACMS0703.eamcs.ericsson.se>
References: <201201041759.q04Hx68f009409@cichlid.raleigh.ibm.com> <60C093A41B5E45409A19D42CF7786DFD5228FC5031@EUSAACMS0703.eamcs.ericsson.se> <618BE8B40039924EB9AED233D4A09C5102B25ABD@XMB-BGL-416.cisco.com> <60C093A41B5E45409A19D42CF7786DFD52290487A4@EUSAACMS0703.eamcs.ericsson.se> <14C7F4F06DB5814AB0DE29716C4F6D671CD30A75@FRMRSSXCHMBSB1.dc-m.alcatel-lucent.com>
In-Reply-To: <14C7F4F06DB5814AB0DE29716C4F6D671CD30A75@FRMRSSXCHMBSB1.dc-m.alcatel-lucent.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Cc: "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] draft-dalela-dc-requirements-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 18:56:02 -0000

Hi Wim:

Perhaps I should have been more precise. I did not mean it was not an
issue; it was that I take the requirement to virtualize large numbers of
them as a given, and scaling that is the issue.

I hope that's clearer...and we're in wild agreement ;-)
Dave

-----Original Message-----
From: Henderickx, Wim (Wim) [mailto:wim.henderickx@alcatel-lucent.com]
Sent: Thursday, January 05, 2012 10:50 AM
To: David Allan I; Ashish Dalela (adalela); Thomas Narten
Cc: dc@ietf.org
Subject: RE: [dc] draft-dalela-dc-requirements-00.txt

David, before we say broadcast domains are not an issue we should
understand how many we will be expecting. Also multicast state has its
limits btw.

-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of David Allan I
Sent: Thursday, January 05, 2012 19:26
To: Ashish Dalela (adalela); Thomas Narten
Cc: dc@ietf.org
Subject: Re: [dc] draft-dalela-dc-requirements-00.txt

Hi Ashish:

Well as I indicated, for many the scale problem simplifies to
virtualizing large numbers of broadcast domains. IMO that is one
"criterion of goodness".

We COULD postulate that scaling the size of individual virtualized
broadcast domains is also a problem, but IMO that is simply a
network/application design BCP, not an issue.

My observation actually applies to a number of drafts in both DC and
ARMD ... this thread simply prompted me.

Cheers
D

-----Original Message-----
From: Ashish Dalela (adalela) [mailto:adalela@cisco.com]
Sent: Wednesday, January 04, 2012 10:58 PM
To: David Allan I; Thomas Narten
Cc: dc@ietf.org
Subject: RE: [dc] draft-dalela-dc-requirements-00.txt

David,

A total of 1 sentence was dedicated to the L2 problem - "L2 networks
can't be made to scale because of high number of broadcasts." And Linda
has commented on this that the scaling issue is also due to MAC
summarization, modification to be done. There is absolutely no
discussion about pre-VLAN days, and I would like to know where you see
it. Broadcast is contained by VLAN, and we are talking of that.

L2VPN is another story - because there was VPLS, and then there are lots
of new things. There are problems to be solved, and then problems to be
solved given some solution. So, the problem boundary shifts from the
time you take something as a "given". People generally take OSPF (or
some L3 routing protocol) and VLAN as a given in the datacenter. Rest is
not a "given". So, we have to start from what is given.

I'm fine, if we want to change the given to something else. Let me know
what we think is the given today.

Thanks, Ashish

_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From dimitri.stiliadis@alcatel-lucent.com  Thu Jan  5 11:28:26 2012
Return-Path: <dimitri.stiliadis@alcatel-lucent.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 7137121F8872 for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 11:28:26 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -6.599
X-Spam-Level: 
X-Spam-Status: No, score=-6.599 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id bIS2K9xFz3Eu for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 11:28:25 -0800 (PST)
Received: from ihemail1.lucent.com (ihemail1.lucent.com [135.245.0.33]) by ietfa.amsl.com (Postfix) with ESMTP id D30A321F8869 for <dc@ietf.org>; Thu,  5 Jan 2012 11:28:24 -0800 (PST)
Received: from usnavsmail2.ndc.alcatel-lucent.com (usnavsmail2.ndc.alcatel-lucent.com [135.3.39.10]) by ihemail1.lucent.com (8.13.8/IER-o) with ESMTP id q05JSMve015167 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK); Thu, 5 Jan 2012 13:28:22 -0600 (CST)
Received: from USNAVSXCHHUB03.ndc.alcatel-lucent.com (usnavsxchhub03.ndc.alcatel-lucent.com [135.3.39.112]) by usnavsmail2.ndc.alcatel-lucent.com (8.14.3/8.14.3/GMO) with ESMTP id q05JSLST018076 (version=TLSv1/SSLv3 cipher=RC4-MD5 bits=128 verify=NOT); Thu, 5 Jan 2012 13:28:21 -0600
Received: from USNAVSXCHMBSA3.ndc.alcatel-lucent.com ([135.3.39.127]) by USNAVSXCHHUB03.ndc.alcatel-lucent.com ([135.3.39.112]) with mapi; Thu, 5 Jan 2012 13:28:21 -0600
From: "Stiliadis, Dimitrios (Dimitri)" <dimitri.stiliadis@alcatel-lucent.com>
To: "Ashish Dalela (adalela)" <adalela@cisco.com>, "david.black@emc.com" <david.black@emc.com>
Date: Thu, 5 Jan 2012 13:28:17 -0600
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczLv8/i14m93uVUT1SkyOUfB6pYjgAAhaTAAAEwXYAAAoW84AAAKwQwAAFEQNAAAiw0AA==
Message-ID: <F5EF891E30B2AE46ACA20EB848689C21250CF7AADF@USNAVSXCHMBSA3.ndc.alcatel-lucent.com>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us><7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com><618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com><CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com><CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com><4F030418.1070202@raszuk.net><618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com><4F031689.1050303@raszuk.net><618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com><7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBA5@MX14A.corp.emc.com><CAMXVrt5g-2WCBvm0kcpFx27KG9kdPSBzeRJ20F-degvzaEhYoQ@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B256D4@XMB-BGL-416.cisco.com> <AF48CEB4-18A6-45CE-891B-ACFE599C8FB4@kumari.net><618BE8B40039924EB9AED233D4A09C5102B25C5A@XMB-BGL-416.cisco.com><7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7C204@MX14A.corp.emc.com><618BE8B40039924EB9AED233D4A09C5102B25C8B@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7C26B@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B25C9F@XMB-BGL-416.cisco.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B25C9F@XMB-BGL-416.cisco.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.57 on 135.245.2.33
X-Scanned-By: MIMEDefang 2.64 on 135.3.39.10
Cc: "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 19:28:26 -0000

David:

> functionality is the same, only the IP addresses are different.  This is
> relatively straightforward for a MAC-in-IP (L2-in-L3) overlay when the
> service provided by the L2 overlay includes multicast.

I am not sure how "straightforward" this is. Yes, it is simple, if you
are talking about broadcast traffic, and there is one L3 multicast group
associated with each L2 overlay. But if you are talking about a
"multicast" service in the L2 domain, one would need multiple L3
multicast groups (one for each multicast-group of the L2 overlay). The
core would need to support a very large number of multicast groups and
someone (NMS?) would need to manage the L2 group to L3 group
associations.


Dimitri
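The per-group state Dimitri describes can be sketched as a mapping that something (an NMS, hypothetically) would have to create and tear down; all identifiers and addresses below are illustrative assumptions, not any product's behavior:

```python
# Illustrative sketch: mapping overlay (tenant) multicast groups to
# underlay L3 multicast groups. One underlay group per L2 overlay covers
# broadcast; a per-tenant "multicast" service multiplies the state.

broadcast_group = {}     # overlay domain id -> underlay L3 group
tenant_mcast_group = {}  # (overlay domain id, tenant group) -> underlay L3 group

def map_broadcast(domain, underlay):
    broadcast_group[domain] = underlay

def map_tenant_multicast(domain, tenant, underlay):
    # This per-group association is what an NMS (or similar) would have
    # to manage as tenants join and leave groups.
    tenant_mcast_group[(domain, tenant)] = underlay

map_broadcast(5001, "239.1.0.1")
map_tenant_multicast(5001, "239.100.0.7", "239.1.0.2")
map_tenant_multicast(5001, "239.100.0.8", "239.1.0.3")

# Core multicast state grows with the number of tenant groups, not just
# with the number of overlay domains:
print(len(broadcast_group), len(tenant_mcast_group))  # 1 2
```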






> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> Ashish Dalela (adalela)
> Sent: Thursday, January 05, 2012 10:25 AM
> To: david.black@emc.com
> Cc: dc@ietf.org
> Subject: Re: [dc] [armd] IP over IP solution for data center
> interconnect
>
> David,
>
> When you have a default GW replicating packets, how do you get
> multipath?
>
> Thanks, Ashish
>
>
> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> david.black@emc.com
> Sent: Thursday, January 05, 2012 11:39 PM
> To: Ashish Dalela (adalela)
> Cc: dc@ietf.org
> Subject: Re: [dc] [armd] IP over IP solution for data center
> interconnect
>
> Ashish,
>
> > So, where does the multicast packet get replicated, since the network
> > is not aware of user-level multicast groups? How are those
> > distribution trees to be built over an overlay?
>
> The answer to this question starts from a realization that there are
> two networks, and hence two address spaces, involved - the user-visible
> overlay network and the underlying data center network that carries the
> overlay network via encap/decap.  The user VMs (or physical servers)
> only see the user-visible overlay network and its addresses, so the
> replication of user multicast packets is a service of that user-visible
> overlay network (specifically L3 nodes in that network that operate on
> the user-visible IP addresses), not the underlying network.
>
> To make this specific, suppose the user VMs are using 192.168.0.0/16
> addresses, and these are encapsulated over an underlying data center
> network that is using 10.0.0.0/8 addresses.  If a user VM (e.g., VDI
> desktop) sends a join request for a 10.0.0.0/8 address, *nothing*
> should happen, because those 10. addresses aren't visible in the
> overlay network.  Instead, the overlay network is responsible for
> providing IP multicast service for the 192.168.0.0/16 addresses - the
> default IP gateway for that overlay network is one possible location.
>
> > How are those distribution trees to be built over an overlay?
>
> The same way that they're built in a non-overlay network - the L3
> multicast functionality is the same, only the IP addresses are
> different.  This is relatively straightforward for a MAC-in-IP
> (L2-in-L3) overlay when the service provided by the L2 overlay includes
> multicast.  When there's no multicast provided by the overlay's L2
> service or no L2 service (IP-in-IP/L3-in-L3 approach), multicast for
> the overlay network may involve explicit copying of packets at L3 nodes
> in the overlay network (not the underlying network), and is part of the
> overlay design.
>
> Thanks,
> --David
>
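The inner/outer address-space separation David describes above can be sketched with the standard `ipaddress` module, reusing the prefixes from his 192.168.0.0/16-over-10.0.0.0/8 example; the function name and the check itself are illustrative assumptions, not any implementation's API:

```python
import ipaddress

# Prefixes from David's example: tenant VMs see 192.168.0.0/16 (overlay);
# the data center underlay uses 10.0.0.0/8.
OVERLAY = ipaddress.ip_network("192.168.0.0/16")
UNDERLAY = ipaddress.ip_network("10.0.0.0/8")

def overlay_services_join(group: str) -> bool:
    """A join for an address in the underlay space is invisible to the
    overlay, so *nothing* should happen; any other group is serviced by
    the overlay's own multicast function."""
    return ipaddress.ip_address(group) not in UNDERLAY

print(overlay_services_join("10.1.2.3"))   # False: underlay address, ignored
print(overlay_services_join("239.0.0.1"))  # True: handled in the overlay
```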
> > -----Original Message-----
> > From: Ashish Dalela (adalela) [mailto:adalela@cisco.com]
> > Sent: Thursday, January 05, 2012 12:39 PM
> > To: Black, David
> > Cc: dc@ietf.org
> > Subject: RE: [dc] [armd] IP over IP solution for data center
> interconnect
> >
> > David,
> >
> > > So, where does the multicast packet get replicated, since the
> > > network is not aware of user-level multicast groups? How are those
> > > distribution trees to be built over an overlay?
> >
> > Thanks, Ashish
> >
> >
> > -----Original Message-----
> > From: david.black@emc.com [mailto:david.black@emc.com]
> > Sent: Thursday, January 05, 2012 10:00 PM
> > To: Ashish Dalela (adalela)
> > Cc: dc@ietf.org
> > Subject: RE: [dc] [armd] IP over IP solution for data center
> > interconnect
> >
> > Ashish,
> >
> > For IP-based overlays, this starts by carefully distinguishing the
> > inner (user) and outer (provider/infrastructure) IP address blocks,
> > and completely controlling access to the outer IP address blocks via
> > encap/decap.  The result is that an IGMP join from a user gets
> > encapsulated and can't be processed with respect to those outer
> > addresses.  That creates a requirement to provide an IP multicast
> > service to the users over the encapsulation.
> >
> > Thanks,
> > --David
> >
> > > -----Original Message-----
> > > From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> > > Ashish Dalela (adalela)
> > > Sent: Thursday, January 05, 2012 11:01 AM
> > > To: Warren Kumari
> > > Cc: Pedro Marques; Black, David; dc@ietf.org
> > > Subject: Re: [dc] [armd] IP over IP solution for data center
> > > interconnect
> > >
> > > Hi Warren,
> > >
> > > You are prescribing a hypervisor-based solution, and table scaling
> > > issues don't arise there (or at least are not obvious) because it
> > > all happens in software. However, there are other issues. For
> > > example, how do you implement broadcast and multicast? The standard
> > > mechanism today is to map a broadcast domain to a multicast group in
> > > L3. Now, what happens if some rogue user sends an IGMP join to that
> > > group - what was on the VLAN is now accessible to everyone through
> > > an IGMP join. For user level multicast, there are other issues.
> > > Assume I'm doing a VDI cloud, where users need to join multicast
> > > video conferencing. The group is user determined, not admin
> > > determined. How do we know that the user is not joining a VLAN
> > > mapped multicast group?
> > >
> > > We need to keep a complete set of goals in mind. Otherwise, we can
> > > solve an issue and miss a goal. E.g. multicast and broadcast.
> > >
> > > Thanks, Ashish
> > >
> > >
> > > -----Original Message-----
> > > From: Warren Kumari [mailto:warren@kumari.net]
> > > Sent: Thursday, January 05, 2012 9:06 PM
> > > To: Ashish Dalela (adalela)
> > > Cc: Warren Kumari; Pedro Marques; david.black@emc.com; dc@ietf.org
> > > Subject: Re: [dc] [armd] IP over IP solution for data center
> > > interconnect
> > >
> > >
> > > On Jan 3, 2012, at 12:45 PM, Ashish Dalela (adalela) wrote:
> > >
> > > >
> > > > Suppose you have an IP solution.
> > >
> > > Sure.
> > >
> > > >
> > > > To support mobility you need IP-in-IP encapsulation.
> > >
> > > And if you do an overlay you always do an IP encapsulation (to cover
> > GRE,
> > > IPIP, sit, IPSec, PPP, etc).
> > >
> > > >
> > > > As VM density increases, as VM-to-VM conversation grows, as
> > interfaces
> > > > per VM increase, the host routes increase.
> > >
> > > No.
> > >
> > > The only thing that the network needs to know is the routes to the
> > > hypervisors / physical machines -- this is a solved problem.
> > > The VM addresses and routes are only visible to the [gateways,
> > > hypervisors with VMs in that overlay, other VMs in the same
> overlay,
> > > mapping server].
> > >
> > > For a really old overview:
> > > http://tools.ietf.org/html/draft-wkumari-dcops-l3-vmmobility-00
> > >
> > > > These host routes are in addition to network routes, local
> host-port
> > > > bindings, ACLs, etc. That means in addition to everything that
> > existed
> > > > so far.
> > >
> > > No.
> > >
> > >
> > > >
> > > > Eventually, you hit a limit on the access, and you have to reduce
> > size
> > > > of network, reduce VM mobility, reduce VM density per server,
> reduce
> > > > application spread.
> > > >
> > >
> > > No.
> > >
> > > > The alternative is to constantly increase network hardware table
> > sizes
> > > > at access, which increases costs and energy.
> > > >
> > >
> > > No.
> > >
> > > > We have to realize that IP encapsulations put network and compute
> at
> > > > opposite sides of the cost trend. Compute cost reduces slowly as
> > size
> > > > grows. Network cost grows rapidly as size grows.
> > > >
> > >
> > > No.
> > >
> > >
> > >
> > > > Thanks,
> > > > Ashish
> > > >
> > > >
> > > > -----Original Message-----
> > > > From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf
> Of
> > > > Pedro Marques
> > > > Sent: Tuesday, January 03, 2012 10:46 PM
> > > > To: david.black@emc.com
> > > > Cc: dc@ietf.org
> > > > Subject: Re: [dc] [armd] IP over IP solution for data center
> > > > interconnect
> > > >
> > > > That assumes that the MAC has relevance in the network. It is
> > possible
> > > > to build solutions such that packets are forwarded based on their
> IP
> > > > addresses rather than their MACs.
> > > >
> > > >  Pedro.
> > > >
> > > > On Tue, Jan 3, 2012 at 8:41 AM,  <david.black@emc.com> wrote:
> > > >> Ashish,
> > > >>
> > > >>>> [AD] The higher bits identify a switch - it's a switch-id.
> > > >>
> > > >> That breaks VM migration across switches by forcing a MAC
> change.
> > > >>
> > > >> Thanks,
> > > >> --David
> > > >>
> > > >>> -----Original Message-----
> > > >>> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On
> Behalf
> > Of
> > > > Ashish Dalela (adalela)
> > > >>> Sent: Tuesday, January 03, 2012 11:15 AM
> > > >>> To: robert@raszuk.net
> > > >>> Cc: Pedro Marques; dc@ietf.org
> > > >>> Subject: Re: [dc] [armd] IP over IP solution for data center
> > > > interconnect
> > > >>>
> > > >>> Robert,
> > > >>>
> > > >>> Please see inline.
> > > >>>
> > > >>> -----Original Message-----
> > > >>> From: Robert Raszuk [mailto:robert@raszuk.net]
> > > >>> Sent: Tuesday, January 03, 2012 8:24 PM
> > > >>> To: Ashish Dalela (adalela)
> > > >>> Cc: Pedro Marques; dc@ietf.org
> > > >>> Subject: Re: [dc] [armd] IP over IP solution for data center
> > > >>> interconnect
> > > >>>
> > > >>> Ashish,
> > > >>>
> > > >>> OK let's just discuss what is in your draft on Hierarchical
> > > > Addressing.
> > > >>>
> > > >>> 1. You have 48 bits 32 go for host remaining 16 goes for
> switches.
> > > > How
> > > >>> do you aggregate at the TOR or AGGR switch boundary ? Are you
> > > > assuming
> > > >>> single HOST - SWITCH with max 65K flat macs ?
> > > >>>
> > > >>> [AD] The higher bits identify a switch - it's a switch-id. The
> > hosts
> > > > are
> > > >>> dynamically assigned a host-id under that switch. Let's assume
> 23
> > > > bits
> > > >>> are for switch-id and 23 bits for host-id. To forward a packet
> to
> > > the
> > > >>> host, you only have to look at the first 23 bits. That's a MAC
> > > prefix
> > > > to
> > > >>> route against.
> > > >>>
> > > >>> [AD] You can have 2^23 switches in a network and 2^23 hosts
> under
> > > > each
> > > >>> switch.
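[Editor's sketch] The 23/23 switch-id/host-id split described above can be illustrated in Python. This is a simplified integer view of the 46 routable bits, not the draft's normative layout; the function name is invented.

```python
def split_hmac(mac: int) -> tuple[int, int]:
    """Split a hierarchical MAC into (switch_id, host_id).

    Simplified view: of the 48 bits, 2 are treated as reserved flag
    bits, leaving 46 routable bits split 23/23 as in the example
    above. Illustrative only, not a normative address format.
    """
    routable = mac & ((1 << 46) - 1)       # keep the 46 routable bits
    switch_id = routable >> 23             # high 23 bits: the "MAC prefix"
    host_id = routable & ((1 << 23) - 1)   # low 23 bits: host under that switch
    return switch_id, host_id

# A core/aggregation switch forwards on switch_id alone,
# analogous to a longest-prefix lookup on an IP prefix.
mac = (0x1234 << 23) | 0x42
assert split_hmac(mac) == (0x1234, 0x42)
```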
> > > >>>
> > > >>> 2. Can you deploy this on existing VMs and existing switches ?
> > > >>>
> > > >>> [AD] What do you mean by this? Any VM can be configured with
> any
> > > MAC.
> > > >>> Any physical host can be configured with any MAC on any logical
> > > >>> interface. Configuration standpoint this is possible.
> Forwarding
> > > >>> standpoint, that's another question.
> > > >>>
> > > >>> 3. What new protocol you envision to use to distribute those
> new
> > > MACs
> > > > ?
> > > >>>
> > > >>> [AD] IS-IS extensions. It can be TRILL extensions.
> > > >>>
> > > >>> 4. What is the advantage of using this vs ILNP if we assume
> that
> > > > hosts
> > > >>> should be modified ?
> > > >>>
> > > >>> [AD] I'm not familiar with the ILNP work, but I'm assuming you
> are
> > > >>> talking about Loc-Id separation. If not, correct me. If yes,
> each
> > > > Loc-Id
> > > >>> binding can be a host route, with mobility. These host-routes
> are
> > a
> > > >>> scaling problem. Traditional IP packets have IP as ID and MAC as
> > LOC.
> > > > We
> > > >>> are just extending this LOC to make it actually location aware
> > > rather
> > > >>> than a flat address which is fixed regardless of where the
> > location
> > > > is.
> > > >>>
> > > >>> 5. The proposal does not support aggregation .. even the draft
> > says
> > > > it
> > > >>> :)
> > > >>>
> > > >>> "The total number of hardware entries anywhere in the network
> > equals
> > > > the
> > > >>>
> > > >>> total number of switches and remains agnostic of VM mobility."
> > > >>>
> > > >>> [AD] For 1 million VM, and 25 VM per host, you need 40,000
> hosts.
> > > > With
> > > >>> 48 port access switches, you need 833 switches. That's the
> routing
> > > > table
> > > >>> size for any switch in the datacenter - core, aggregation,
> access.
> > > >>> Contrast this with host-routes, if each VM talks to 100 VMs,
> then
> > > > each
> > > >>> access switch needs 48 * 25 * 100 = 120,000 host routes. Just
> > > because
> > > >>> the network prefix is 23 bits does not mean we have to store
> 10^23
> > > >>> prefixes. We have to store only as many switches as there are
> in
> > the
> > > >>> network. Ratio between VM : switch is 1000 : 1 (today, assuming
> 48
> > > > port
> > > >>> access and 20 VM per port). That means instead of storing
> > > host-routes
> > > >>> which will grow proportional to VM growth, we store switch-id,
> > which
> > > >>> will grow at 1000 times slower rate. As VM density increases,
> this
> > > >>> growth rate is further slowed down. There are other techniques
> to
> > > >>> further reduce the rate of growth. But in any case, 1000 times
> > > slower
> > > > is
> > > >>> a lot slow.
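[Editor's sketch] The table-size arithmetic in the paragraph above can be reproduced directly; the constants are the ones quoted in the message, and 833 is the integer quotient of 40,000 hosts by 48 ports.

```python
VMS = 1_000_000
VMS_PER_HOST = 25
PORTS_PER_ACCESS_SWITCH = 48
PEERS_PER_VM = 100            # "each VM talks to 100 VMs"

hosts = VMS // VMS_PER_HOST
access_switches = hosts // PORTS_PER_ACCESS_SWITCH

# Switch-id routing: table size anywhere == number of switches.
assert hosts == 40_000
assert access_switches == 833

# Host-route alternative at one access switch:
# ports * VMs-per-host * peers-per-VM remote host routes.
host_routes = PORTS_PER_ACCESS_SWITCH * VMS_PER_HOST * PEERS_PER_VM
assert host_routes == 120_000

# VM : switch ratio, roughly the 1000 : 1 figure cited in the message.
assert VMS // access_switches == 1200
```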
> > > >>>
> > > >>> So if I have 100K switches I can not do any aggregation and
> need
> > to
> > > >>> "route" 100K MAC addresses.
> > > >>>
> > > >>> [AD] I don't know how you came to that conclusion. Think of
> HMAC
> > as
> > > > an
> > > >>> IP address. Instead of 32 bits it is 46 bits. You route by
> > prefixes
> > > > in
> > > >>> L3, and you are routing by the same prefixes here. Just as you
> > > > aggregate
> > > >>> IP, same way you aggregate MAC. It's not different.
> > > >>>
> > > >>> 6. Who provides me the mapping between switch mac and host/vm
> mac
> > > > behind
> > > >>>
> > > >>> such switch ? Do switches proxy arp globally within your domain
> ?
> > > >>>
> > > >>> [AD] Variation of the same question. Above should answer it.
> > > >>>
> > > >>> Thx,
> > > >>> R.
> > > >>>
> > > >>>
> > > >>>> Robert,
> > > >>>>
> > > >>>>>> So you are advocating solution which is based on
> encapsulation
> > -
> > > >>> that
> > > >>>> is fine.
> > > >>>>
> > > >>>> No, I'm not. Did you read the draft I had mentioned?
> > > >>>> Hierarchical MAC is not encapsulation. It is one 48 bit
> address.
> > > >>>>
> > > >>>>>> However how could you ever arrive at the conclusion that
> HMACs
> > > > would
> > > >>>>>> scale better than "anything we know". Well I don't know
> about
> > > > you,
> > > >>>> but I
> > > >>>>>> know that the key to scaling is ability to aggregate. And it
> is
> > > > not
> > > >>>> that
> > > >>>>>> huge mystery that MACs aggregate rather poorly while there
> are
> > > > quite
> > > >>>>>> well deployed protocols (be it IPv4 or IPv6) which aggregate
> > > >>> natively
> > > >>>>
> > > >>>> You are hitting the issue on the nail. So, read the draft I
> > > > mentioned.
> > > >>>> Hierarchical MAC is higher bits "network prefix" and lower
> bits
> > > > "host
> > > >>>> id".
> > > >>>> That's summarizable like IP address and aggregated.
> > > >>>> It has 46 bits to modify so larger than IPv4 internet.
> > > >>>>
> > > >>>> I won't comment on the rest, because you have made an
> assumption
> > > > about
> > > >>>> encapsulation.
> > > >>>>
> > > >>>> I refer to this -
> > > >>>> http://tools.ietf.org/html/draft-dalela-dc-approaches-00.
> > > >>>>
> > > >>>> Thanks, Ashish
> > > >>>>
> > > >>>>
> > > >>>> -----Original Message-----
> > > >>>> From: Robert Raszuk [mailto:robert@raszuk.net]
> > > >>>> Sent: Tuesday, January 03, 2012 7:05 PM
> > > >>>> To: Ashish Dalela (adalela)
> > > >>>> Cc: Pedro Marques; dc@ietf.org
> > > >>>> Subject: Re: [dc] [armd] IP over IP solution for data center
> > > >>>> interconnect
> > > >>>>
> > > >>>> Ashish,
> > > >>>>
> > > >>>>> The issues of scale you mentioned don't exist in Hierarchical
> > > > MACs,
> > > >>>>> which scale better than anything we know of.
> > > >>>>
> > > >>>> So you are advocating solution which is based on encapsulation
> -
> > > > that
> > > >>> is
> > > >>>> fine.
> > > >>>>
> > > >>>> However how could you ever arrive at the conclusion that HMACs
> > > > would
> > > >>>> scale better than "anything we know". Well I don't know about
> > you,
> > > > but
> > > >>> I
> > > >>>> know that the key to scaling is ability to aggregate. And it
> is
> > not
> > > >>> that
> > > >>>>
> > > >>>> huge mystery that MACs aggregate rather poorly while there are
> > > > quite
> > > >>>> well deployed protocols (be it IPv4 or IPv6) which aggregate
> > > > natively.
> > > >>>>
> > > >>>> For inter-dc this is IMHO a must. A must even if you build it
> > using
> > > >>>> traditional routers or OF enabled switches - does not matter.
> > > >>>>
> > > >>>>> I don't want to split the requirements into multiple use-
> cases
> > > >>>>> because then this DC group will be many groups - one doing L2
> > and
> > > >>>>> another doing L3. That I think you will agree is not optimal
> for
> > > >>>>> anyone
> > > >>>>
> > > >>>> Why MAC-in-IP does not solve it for everyone ? And there are
> > > > deployed
> > > >>>> solutions already ..
> > > >>>>
> > > >>>> IMHO what this group should accomplish is not to try to
> reinvent
> > > > the
> > > >>>> world, but perhaps as example discuss where is the right
> boundary
> > > > of
> > > >>>> encapsulation, how should we communicate between network and
> > hosts,
> > > >>> what
> > > >>>>
> > > >>>> kind of DC instrumentation should be IETF blessed for easy
> > > > integration
> > > >>>> (ie min subset of functionality it should possess etc .... )
> > > >>>>
> > > >>>> R.
> > > >>>>
> > > >>>> _______________________________________________
> > > >>>> dc mailing list
> > > >>>> dc@ietf.org
> > > >>>> https://www.ietf.org/mailman/listinfo/dc
> > > >>>>
> > > >>>>
> > > >>>
> > > >>
> > > >
> > >
> >
>

From david.black@emc.com  Thu Jan  5 11:46:16 2012
Return-Path: <david.black@emc.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id C422C21F8855 for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 11:46:16 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.591
X-Spam-Level: 
X-Spam-Status: No, score=-106.591 tagged_above=-999 required=5 tests=[AWL=0.008, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id bmhEF7+sEtla for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 11:46:15 -0800 (PST)
Received: from mexforward.lss.emc.com (mexforward.lss.emc.com [128.222.32.20]) by ietfa.amsl.com (Postfix) with ESMTP id 1A16221F86B9 for <dc@ietf.org>; Thu,  5 Jan 2012 11:46:14 -0800 (PST)
Received: from hop04-l1d11-si01.isus.emc.com (HOP04-L1D11-SI01.isus.emc.com [10.254.111.54]) by mexforward.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q05JkC7Y031809 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Thu, 5 Jan 2012 14:46:13 -0500
Received: from mailhub.lss.emc.com (mailhub.lss.emc.com [10.254.221.253]) by hop04-l1d11-si01.isus.emc.com (RSA Interceptor); Thu, 5 Jan 2012 14:45:50 -0500
Received: from mxhub12.corp.emc.com (mxhub12.corp.emc.com [10.254.92.107]) by mailhub.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q05JjoaQ008746; Thu, 5 Jan 2012 14:45:50 -0500
Received: from mx14a.corp.emc.com ([169.254.1.216]) by mxhub12.corp.emc.com ([10.254.92.107]) with mapi; Thu, 5 Jan 2012 14:45:50 -0500
From: <david.black@emc.com>
To: <adalela@cisco.com>
Date: Thu, 5 Jan 2012 14:45:49 -0500
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczLv8/i14m93uVUT1SkyOUfB6pYjgAAhaTAAAEwXYAAAoW84AAAKwQwAAFEQNAAAvWAIA==
Message-ID: <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7C2CD@MX14A.corp.emc.com>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us><7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com><618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com><CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com><CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com><4F030418.1070202@raszuk.net><618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com><4F031689.1050303@raszuk.net><618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com><7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBA5@MX14A.corp.emc.com><CAMXVrt5g-2WCBvm0kcpFx27KG9kdPSBzeRJ20F-degvzaEhYoQ@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B256D4@XMB-BGL-416.cisco.com> <AF48CEB4-18A6-45CE-891B-ACFE599C8FB4@kumari.net><618BE8B40039924EB9AED233D4A09C5102B25C5A@XMB-BGL-416.cisco.com><7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7C204@MX14A.corp.emc.com><618BE8B40039924EB9AED233D4A09C5102B25C8B@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7C26B@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102B25C9F@XMB-BGL-416.cisco.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B25C9F@XMB-BGL-416.cisco.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-EMM-MHVC: 1
Cc: dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 19:46:16 -0000

Same answer as for a non-overlay network - pick whatever multipath technology/design
that you want to use for L3 multicast (instead of just doing it at the default
gateway), and apply that design to L3 in the overlay network.

The overlay network really is a network and really offers actual networking services ...

Thanks,
--David


> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Ashish Dalela (adalela)
> Sent: Thursday, January 05, 2012 1:25 PM
> To: Black, David
> Cc: dc@ietf.org
> Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
>
> David,
>
> When you have a default GW replicating packets, how do you get
> multipath?
>
> Thanks, Ashish
>
>
> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> david.black@emc.com
> Sent: Thursday, January 05, 2012 11:39 PM
> To: Ashish Dalela (adalela)
> Cc: dc@ietf.org
> Subject: Re: [dc] [armd] IP over IP solution for data center
> interconnect
>
> Ashish,
>
> > So, where does the multicast packet get replicated, since the network
> is
> > not aware of user-level multicast groups? How are those distribution
> > trees to be built over an overlay?
>
> The answer to this question starts from a realization that there are two
> networks, and hence two address spaces, involved - the user-visible
> overlay
> network and the underlying data center network that carries the overlay
> network via encap/decap.  The user VMs (or physical servers) only see
> the
> user-visible overlay network and its addresses, so the replication of
> user
> multicast packets is a service of that user-visible overlay network
> (specifically L3 nodes in that network that operate on the user-visible
> IP addresses), not the underlying network.
>
> To make this specific, suppose the user VMs are using 192.168.0.0/16
> addresses,
> and these are encapsulated over an underlying data center network that
> is
> using 10.0.0.0/8 addresses.  If a user VM (e.g., VDI desktop) sends a
> join
> request for a 10.0.0.0/8 address, *nothing* should happen, because those
> 10. addresses aren't visible in the overlay network.  Instead, the
> overlay
> network is responsible for providing IP multicast service for the
> 192.168.0.0/16
> addresses - the default IP gateway for that overlay network is one
> possible
> location.
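[Editor's sketch] The "*nothing* should happen" rule above can be sketched as a simple visibility check. The address blocks are the ones from the example; the function name is invented.

```python
import ipaddress

# Address plan from the example above: user overlay in 192.168.0.0/16,
# data-center underlay in 10.0.0.0/8.
UNDERLAY = ipaddress.ip_network("10.0.0.0/8")

def accept_igmp_join(group: str) -> bool:
    """Return False for joins that target underlay addresses: those
    addresses are simply not visible inside the overlay, so the join
    is ignored rather than leaking underlay multicast to users."""
    return ipaddress.ip_address(group) not in UNDERLAY

assert accept_igmp_join("239.1.2.3") is True      # normal user multicast group
assert accept_igmp_join("10.255.0.7") is False    # underlay address: ignored
```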
>
> > How are those distribution trees to be built over an overlay?
>
> The same way that they're build in a non-overlay network - the L3
> multicast
> functionality is the same, only the IP addresses are different.  This is
> relatively straightforward for a MAC-in-IP (L2-in-L3) overlay when the
> service
> provided by the L2 overlay includes multicast.  When there's no
> multicast
> provided by the overlay's L2 service or no L2 service (IP-in-IP/L3-in-L3
> approach), multicast for the overlay network may involve explicit
> copying of
> packets at L3 nodes in the overlay network (not the underlying network),
> and is part of the overlay design.
>
> Thanks,
> --David
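[Editor's sketch] When the underlay provides no multicast at all, the "explicit copying of packets at L3 nodes" described above can take the form of head-end replication. All names and the membership table below are illustrative, not from any draft.

```python
def encap(payload: bytes, underlay_dest: str) -> bytes:
    # Stand-in for the real IP-in-IP / GRE / etc. encapsulation header.
    return f"OUTER-DST={underlay_dest}|".encode() + payload

def replicate(payload: bytes, group: str,
              members: dict[str, list[str]]) -> list[tuple[str, bytes]]:
    """One user multicast packet becomes one unicast-encapsulated
    copy per member hypervisor, carried over the underlay."""
    return [(hv, encap(payload, hv)) for hv in members.get(group, [])]

# group -> underlay addresses of hypervisors hosting group members
members = {"239.1.2.3": ["10.0.0.5", "10.0.0.9"]}
copies = replicate(b"frame", "239.1.2.3", members)
assert [dst for dst, _ in copies] == ["10.0.0.5", "10.0.0.9"]
```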
>
> > -----Original Message-----
> > From: Ashish Dalela (adalela) [mailto:adalela@cisco.com]
> > Sent: Thursday, January 05, 2012 12:39 PM
> > To: Black, David
> > Cc: dc@ietf.org
> > Subject: RE: [dc] [armd] IP over IP solution for data center
> interconnect
> >
> > David,
> >
> > So, where does the multicast packet get replicated, since the network
> is
> > not aware of user-level multicast groups? How are those distribution
> > trees to be built over an overlay?
> >
> > Thanks, Ashish
> >
> >
> > -----Original Message-----
> > From: david.black@emc.com [mailto:david.black@emc.com]
> > Sent: Thursday, January 05, 2012 10:00 PM
> > To: Ashish Dalela (adalela)
> > Cc: dc@ietf.org
> > Subject: RE: [dc] [armd] IP over IP solution for data center
> > interconnect
> >
> > Ashish,
> >
> > For IP-based overlays, this starts by carefully distinguishing the
> inner
> > (user) and
> > outer (provider/infrastructure) IP address blocks, and completely
> > controlling access
> > to the outer IP address blocks via encap/decap.  The result is that an
> > IGMP join
> > from a user gets encapsulated and can't be processed with respect to
> > those outer
> > addresses.  That creates a requirement to provide an IP multicast
> > service to the
> > users over the encapsulation.
> >
> > Thanks,
> > --David
> >
> > > -----Original Message-----
> > > From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> > Ashish Dalela (adalela)
> > > Sent: Thursday, January 05, 2012 11:01 AM
> > > To: Warren Kumari
> > > Cc: Pedro Marques; Black, David; dc@ietf.org
> > > Subject: Re: [dc] [armd] IP over IP solution for data center
> > interconnect
> > >
> > > Hi Warren,
> > >
> > > You are prescribing a hypervisor-based solution, and table
> scaling
> > > issues don't arise there (or at least are not obvious) because it
> all
> > > happens in software. However, there are other issues. For example,
> how
> > > do you implement broadcast and multicast? The standard mechanism
> today
> > > is to map a broadcast domain to a multicast group in L3. Now, what
> > > happens if some rogue user sends an IGMP join to that group - what
> was
> > > on the VLAN is now accessible to everyone through an IGMP join. For
> > user
> > > level multicast, there are other issues. Assume I'm doing a VDI
> cloud,
> > > where users need to join multicast video conferencing. The group is
> > user
> > > determined, not admin determined. How do we know that the user is
> not
> > > joining a VLAN mapped multicast group?
> > >
> > > We need to keep a complete set of goals in mind. Otherwise, we can
> > solve
> > > an issue and miss a goal. E.g. multicast and broadcast.
> > >
> > > Thanks, Ashish
> > >
> > >
> > > -----Original Message-----
> > > From: Warren Kumari [mailto:warren@kumari.net]
> > > Sent: Thursday, January 05, 2012 9:06 PM
> > > To: Ashish Dalela (adalela)
> > > Cc: Warren Kumari; Pedro Marques; david.black@emc.com; dc@ietf.org
> > > Subject: Re: [dc] [armd] IP over IP solution for data center
> > > interconnect
> > >
> > >
> > > On Jan 3, 2012, at 12:45 PM, Ashish Dalela (adalela) wrote:
> > >
> > > >
> > > > Suppose you have an IP solution.
> > >
> > > Sure.
> > >
> > > >
> > > > To support mobility you need IP-in-IP encapsulation.
> > >
> > > And if you do an overlay you always do an IP encapsulation (to cover
> > GRE,
> > > IPIP, sit, IPSec, PPP, etc).
> > >
> > > >
> > > > As VM density increases, as VM-to-VM conversation grows, as
> > interfaces
> > > > per VM increase, the host routes increase.
> > >
> > > No.
> > >
> > > The only thing that the network needs to know is the routes to the
> > > hypervisors / physical machines -- this is a solved problem.
> > > The VM addresses and routes are only visible to the [gateways,
> > > hypervisors with VMs in that overlay, other VMs in the same overlay,
> > > mapping server].
> > >
> > > For a really old overview:
> > > http://tools.ietf.org/html/draft-wkumari-dcops-l3-vmmobility-00
> > >
> > > > These host routes are in addition to network routes, local
> host-port
> > > > bindings, ACLs, etc. That means in addition to everything that
> > existed
> > > > so far.
> > >
> > > No.
> > >
> > >
> > > >
> > > > Eventually, you hit a limit on the access, and you have to reduce
> > size
> > > > of network, reduce VM mobility, reduce VM density per server,
> reduce
> > > > application spread.
> > > >
> > >
> > > No.
> > >
> > > > The alternative is to constantly increase network hardware table
> > sizes
> > > > at access, which increases costs and energy.
> > > >
> > >
> > > No.
> > >
> > > > We have to realize that IP encapsulations put network and compute
> at
> > > > opposite sides of the cost trend. Compute cost reduces slowly as
> > size
> > > > grows. Network cost grows rapidly as size grows.
> > > >
> > >
> > > No.
> > >
> > >
> > >
> > > > Thanks,
> > > > Ashish
> > > >
> > > >
> > > > -----Original Message-----
> > > > From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf
> Of
> > > > Pedro Marques
> > > > Sent: Tuesday, January 03, 2012 10:46 PM
> > > > To: david.black@emc.com
> > > > Cc: dc@ietf.org
> > > > Subject: Re: [dc] [armd] IP over IP solution for data center
> > > > interconnect
> > > >
> > > > That assumes that the MAC has relevance in the network. It is
> > possible
> > > > to build solutions such that packets are forwarded based on their
> IP
> > > > addresses rather than their MACs.
> > > >
> > > >  Pedro.
> > > >
> > > > On Tue, Jan 3, 2012 at 8:41 AM,  <david.black@emc.com> wrote:
> > > >> Ashish,
> > > >>
> > > >>>> [AD] The higher bits identify a switch - it's a switch-id.
> > > >>
> > > >> That breaks VM migration across switches by forcing a MAC change.
> > > >>
> > > >> Thanks,
> > > >> --David
> > > >>
> > > >>> -----Original Message-----
> > > >>> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf
> > Of
> > > > Ashish Dalela (adalela)
> > > >>> Sent: Tuesday, January 03, 2012 11:15 AM
> > > >>> To: robert@raszuk.net
> > > >>> Cc: Pedro Marques; dc@ietf.org
> > > >>> Subject: Re: [dc] [armd] IP over IP solution for data center
> > > > interconnect
> > > >>>
> > > >>> Robert,
> > > >>>
> > > >>> Please see inline.
> > > >>>
> > > >>> -----Original Message-----
> > > >>> From: Robert Raszuk [mailto:robert@raszuk.net]
> > > >>> Sent: Tuesday, January 03, 2012 8:24 PM
> > > >>> To: Ashish Dalela (adalela)
> > > >>> Cc: Pedro Marques; dc@ietf.org
> > > >>> Subject: Re: [dc] [armd] IP over IP solution for data center
> > > >>> interconnect
> > > >>>
> > > >>> Ashish,
> > > >>>
> > > >>> OK let's just discuss what is in your draft on Hierarchical
> > > > Addressing.
> > > >>>
> > > >>> 1. You have 48 bits 32 go for host remaining 16 goes for
> switches.
> > > > How
> > > >>> do you aggregate at the TOR or AGGR switch boundary ? Are you
> > > > assuming
> > > >>> single HOST - SWITCH with max 65K flat macs ?
> > > >>>
> > > >>> [AD] The higher bits identify a switch - it's a switch-id. The
> > hosts
> > > > are
> > > >>> dynamically assigned a host-id under that switch. Let's assume
> 23
> > > > bits
> > > >>> are for switch-id and 23 bits for host-id. To forward a packet
> to
> > > the
> > > >>> host, you only have to look at the first 23 bits. That's a MAC
> > > prefix
> > > > to
> > > >>> route against.
> > > >>>
> > > >>> [AD] You can have 2^23 switches in a network and 2^23 hosts
> under
> > > > each
> > > >>> switch.
> > > >>>
> > > >>> 2. Can you deploy this on existing VMs and existing switches ?
> > > >>>
> > > >>> [AD] What do you mean by this? Any VM can be configured with any
> > > MAC.
> > > >>> Any physical host can be configured with any MAC on any logical
> > > >>> interface. Configuration standpoint this is possible. Forwarding
> > > >>> standpoint, that's another question.
> > > >>>
> > > >>> 3. What new protocol you envision to use to distribute those new
> > > MACs
> > > > ?
> > > >>>
> > > >>> [AD] IS-IS extensions. It can be TRILL extensions.
> > > >>>
> > > >>> 4. What is the advantage of using this vs ILNP if we assume that
> > > > hosts
> > > >>> should be modified ?
> > > >>>
> > > >>> [AD] I'm not familiar with the ILNP work, but I'm assuming you
> are
> > > >>> talking about Loc-Id separation. If not, correct me. If yes,
> each
> > > > Loc-Id
> > > >>> binding can be a host route, with mobility. These host-routes
> are
> > a
> > > >>> scaling problem. Traditional IP packets have IP as ID and MAC as
> > LOC.
> > > > We
> > > >>> are just extending this LOC to make it actually location aware
> > > rather
> > > >>> than a flat address which is fixed regardless of where the
> > location
> > > > is.
> > > >>>
> > > >>> 5. The proposal does not support aggregation .. even the draft
> > says
> > > > it
> > > >>> :)
> > > >>>
> > > >>> "The total number of hardware entries anywhere in the network
> > equals
> > > > the
> > > >>>
> > > >>> total number of switches and remains agnostic of VM mobility."
> > > >>>
> > > >>> [AD] For 1 million VM, and 25 VM per host, you need 40,000
> hosts.
> > > > With
> > > >>> 48 port access switches, you need 833 switches. That's the
> routing
> > > > table
> > > >>> size for any switch in the datacenter - core, aggregation,
> access.
> > > >>> Contrast this with host-routes, if each VM talks to 100 VMs,
> then
> > > > each
> > > >>> access switch needs 48 * 25 * 100 = 120,000 host routes. Just
> > > because
> > > >>> the network prefix is 23 bits does not mean we have to store
> 10^23
> > > >>> prefixes. We have to store only as many switches as there are in
> > the
> > > >>> network. Ratio between VM : switch is 1000 : 1 (today, assuming
> 48
> > > > port
> > > >>> access and 20 VM per port). That means instead of storing
> > > host-routes
> > > >>> which will grow proportional to VM growth, we store switch-id,
> > which
> > > >>> will grow at 1000 times slower rate. As VM density increases,
> this
> > > >>> growth rate is further slowed down. There are other techniques
> to
> > > >>> further reduce the rate of growth. But in any case, 1000 times
> > > slower
> > > > is
> > > >>> a lot slow.
> > > >>>
> > > >>> So if I have 100K switches I can not do any aggregation and need
> > to
> > > >>> "route" 100K MAC addresses.
> > > >>>
> > > >>> [AD] I don't know how you came to that conclusion. Think of HMAC
> > as
> > > > an
> > > >>> IP address. Instead of 32 bits it is 46 bits. You route by
> > prefixes
> > > > in
> > > >>> L3, and you are routing by the same prefixes here. Just as you
> > > > aggregate
> > > >>> IP, same way you aggregate MAC. It's not different.
> > > >>>
> > > >>> 6. Who provides me the mapping between switch mac and host/vm
> mac
> > > > behind
> > > >>>
> > > >>> such switch ? Do switches proxy arp globally within your domain
> ?
> > > >>>
> > > >>> [AD] Variation of the same question. Above should answer it.
> > > >>>
> > > >>> Thx,
> > > >>> R.
> > > >>>
> > > >>>
> > > >>>> Robert,
> > > >>>>
> > > >>>>>> So you are advocating solution which is based on
> encapsulation
> > -
> > > >>> that
> > > >>>> is fine.
> > > >>>>
> > > >>>> No, I'm not. Did you read the draft I had mentioned?
> > > >>>> Hierarchical MAC is not encapsulation. It is one 48 bit
> address.
> > > >>>>
> > > >>>>>> However how could you ever arrive at the conclusion that HMACs
> > > >>>>>> would scale better than "anything we know"? Well, I don't know
> > > >>>>>> about you, but I know that the key to scaling is the ability to
> > > >>>>>> aggregate. And it is not that huge a mystery that MACs aggregate
> > > >>>>>> rather poorly while there are quite well deployed protocols (be
> > > >>>>>> it IPv4 or IPv6) which aggregate natively
> > > >>>>
> > > >>>> You are hitting the nail on the head. So, read the draft I
> > > >>>> mentioned. Hierarchical MAC is higher bits "network prefix" and
> > > >>>> lower bits "host id". That's summarizable like an IP address and
> > > >>>> aggregated. It has 46 bits to modify, so larger than the IPv4
> > > >>>> internet.
> > > >>>>
> > > >>>> I won't comment on the rest, because you have made an assumption
> > > >>>> about encapsulation.
> > > >>>>
> > > >>>> I refer to this -
> > > >>>> http://tools.ietf.org/html/draft-dalela-dc-approaches-00.
> > > >>>>
> > > >>>> Thanks, Ashish
> > > >>>>
> > > >>>>
> > > >>>> -----Original Message-----
> > > >>>> From: Robert Raszuk [mailto:robert@raszuk.net]
> > > >>>> Sent: Tuesday, January 03, 2012 7:05 PM
> > > >>>> To: Ashish Dalela (adalela)
> > > >>>> Cc: Pedro Marques; dc@ietf.org
> > > >>>> Subject: Re: [dc] [armd] IP over IP solution for data center
> > > >>>> interconnect
> > > >>>>
> > > >>>> Ashish,
> > > >>>>
> > > >>>>> The issues of scale you mentioned don't exist in Hierarchical
> > > >>>>> MACs, which scale better than anything we know of.
> > > >>>>
> > > >>>> So you are advocating a solution which is based on encapsulation -
> > > >>>> that is fine.
> > > >>>>
> > > >>>> However how could you ever arrive at the conclusion that HMACs
> > > >>>> would scale better than "anything we know"? Well, I don't know
> > > >>>> about you, but I know that the key to scaling is the ability to
> > > >>>> aggregate. And it is not that huge a mystery that MACs aggregate
> > > >>>> rather poorly while there are quite well deployed protocols (be it
> > > >>>> IPv4 or IPv6) which aggregate natively.
> > > >>>>
> > > >>>> For inter-dc this is IMHO a must. A must even if you build it
> > > >>>> using traditional routers or OF-enabled switches - does not
> > > >>>> matter.
> > > >>>>
> > > >>>>> I don't want to split the requirements into multiple use-cases,
> > > >>>>> because then this DC group will be many groups - one doing L2 and
> > > >>>>> another doing L3. That, I think you will agree, is not optimal
> > > >>>>> for anyone.
> > > >>>>
> > > >>>> Why does MAC-in-IP not solve it for everyone? And there are
> > > >>>> deployed solutions already ...
> > > >>>>
> > > >>>> IMHO what this group should accomplish is not to try to reinvent
> > > >>>> the world, but perhaps, as an example, to discuss where the right
> > > >>>> boundary of encapsulation is, how we should communicate between
> > > >>>> network and hosts, and what kind of DC instrumentation should be
> > > >>>> IETF-blessed for easy integration (i.e. the minimum subset of
> > > >>>> functionality it should possess, etc.)
> > > >>>>
> > > >>>> R.
> > > >>>>
> > > >>>> _______________________________________________
> > > >>>> dc mailing list
> > > >>>> dc@ietf.org
> > > >>>> https://www.ietf.org/mailman/listinfo/dc
> > > >>>>
> > > >>>>


From wim.henderickx@alcatel-lucent.com  Thu Jan  5 11:50:54 2012
Return-Path: <wim.henderickx@alcatel-lucent.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 19AAD21F881F for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 11:50:54 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -6.249
X-Spam-Level: 
X-Spam-Status: No, score=-6.249 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, HELO_EQ_FR=0.35, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id ICG2wG0QkPap for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 11:50:52 -0800 (PST)
Received: from smail3.alcatel.fr (smail3.alcatel.fr [64.208.49.56]) by ietfa.amsl.com (Postfix) with ESMTP id ADF6A21F86E5 for <dc@ietf.org>; Thu,  5 Jan 2012 11:50:51 -0800 (PST)
Received: from FRMRSSXCHHUB03.dc-m.alcatel-lucent.com (FRMRSSXCHHUB03.dc-m.alcatel-lucent.com [135.120.45.63]) by smail3.alcatel.fr (8.14.3/8.14.3/ICT) with ESMTP id q05JokEc003068 (version=TLSv1/SSLv3 cipher=RC4-MD5 bits=128 verify=NOT); Thu, 5 Jan 2012 20:50:46 +0100
Received: from FRMRSSXCHMBSB1.dc-m.alcatel-lucent.com ([135.120.45.40]) by FRMRSSXCHHUB03.dc-m.alcatel-lucent.com ([135.120.45.63]) with mapi; Thu, 5 Jan 2012 20:50:46 +0100
From: "Henderickx, Wim (Wim)" <wim.henderickx@alcatel-lucent.com>
To: David Allan I <david.i.allan@ericsson.com>, "Ashish Dalela (adalela)" <adalela@cisco.com>, Thomas Narten <narten@us.ibm.com>
Date: Thu, 5 Jan 2012 20:50:45 +0100
Thread-Topic: [dc] draft-dalela-dc-requirements-00.txt
Thread-Index: AczLCsm+jZpuzuroSy6ift2TqX44qAABEINwABnOOyAAGB1BMAAA+LPwAAAu12AAAe1/8A==
Message-ID: <14C7F4F06DB5814AB0DE29716C4F6D671CD30A76@FRMRSSXCHMBSB1.dc-m.alcatel-lucent.com>
References: <201201041759.q04Hx68f009409@cichlid.raleigh.ibm.com> <60C093A41B5E45409A19D42CF7786DFD5228FC5031@EUSAACMS0703.eamcs.ericsson.se> <618BE8B40039924EB9AED233D4A09C5102B25ABD@XMB-BGL-416.cisco.com> <60C093A41B5E45409A19D42CF7786DFD52290487A4@EUSAACMS0703.eamcs.ericsson.se> <14C7F4F06DB5814AB0DE29716C4F6D671CD30A75@FRMRSSXCHMBSB1.dc-m.alcatel-lucent.com> <60C093A41B5E45409A19D42CF7786DFD52290487E9@EUSAACMS0703.eamcs.ericsson.se>
In-Reply-To: <60C093A41B5E45409A19D42CF7786DFD52290487E9@EUSAACMS0703.eamcs.ericsson.se>
Accept-Language: nl-NL, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: nl-NL, en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.69 on 155.132.188.83
Cc: "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] draft-dalela-dc-requirements-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 05 Jan 2012 19:50:54 -0000

Dave, yes we are. It would be good to get some perspective on the required
scale from the people pushing the overlay solution. There are some options
to aggregate flooding domains, but it will depend on the scale whether we
need such a mechanism in the overlay solution or not.

-----Original Message-----
From: David Allan I [mailto:david.i.allan@ericsson.com]
Sent: donderdag 5 januari 2012 19:56
To: Henderickx, Wim (Wim); Ashish Dalela (adalela); Thomas Narten
Cc: dc@ietf.org
Subject: RE: [dc] draft-dalela-dc-requirements-00.txt

HI Wim:

Perhaps I should have been more precise. I did not mean it was not an issue;
I meant that I take the requirement to virtualize large numbers of them as
a given, and that scaling that is the issue.

I hope that's clearer...and we're in wild agreement ;-)
Dave

-----Original Message-----
From: Henderickx, Wim (Wim) [mailto:wim.henderickx@alcatel-lucent.com]
Sent: Thursday, January 05, 2012 10:50 AM
To: David Allan I; Ashish Dalela (adalela); Thomas Narten
Cc: dc@ietf.org
Subject: RE: [dc] draft-dalela-dc-requirements-00.txt

David, before we say broadcast domains are not an issue, we should
understand how many we will be expecting. Also, multicast state has its
limits, btw.

-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of David Allan I
Sent: donderdag 5 januari 2012 19:26
To: Ashish Dalela (adalela); Thomas Narten
Cc: dc@ietf.org
Subject: Re: [dc] draft-dalela-dc-requirements-00.txt

HI Ashish:

Well, as I indicated, for many the scale problem simplifies to virtualizing
large numbers of broadcast domains. IMO that is one "criterion of goodness".

We COULD postulate that scaling the size of individual virtualized
broadcast domains is also a problem, but IMO that is simply a
network/application design BCP, not an issue.

My observation actually applies to a number of drafts in both DC and
ARMD... this thread simply prompted me.

Cheers
D

-----Original Message-----
From: Ashish Dalela (adalela) [mailto:adalela@cisco.com]
Sent: Wednesday, January 04, 2012 10:58 PM
To: David Allan I; Thomas Narten
Cc: dc@ietf.org
Subject: RE: [dc] draft-dalela-dc-requirements-00.txt

David,

A total of 1 sentence was dedicated to the L2 problem - "L2 networks can't
be made to scale because of high number of broadcasts." And Linda has
commented on this that the scaling issue is also due to MAC summarization;
a modification is to be done. There is absolutely no discussion about
pre-VLAN days, and I would like to know where you see it. Broadcast is
contained by VLAN, and we are talking of that.

L2VPN is another story - because there was VPLS, and then there are lots
of new things. There are problems to be solved, and then problems to be
solved given some solution. So, the problem boundary shifts from the time
you take something as a "given". People generally take OSPF (or some L3
routing protocol) and VLAN as a given in the datacenter. The rest is not
a "given". So, we have to start from what is given.

I'm fine if we want to change the given to something else. Let me know
what we think is the given today.

Thanks, Ashish

-----Original Message-----
From: David Allan I [mailto:david.i.allan@ericsson.com]
Sent: Thursday, January 05, 2012 12:26 AM
To: Thomas Narten; Ashish Dalela (adalela)
Cc: dc@ietf.org
Subject: RE: [dc] draft-dalela-dc-requirements-00.txt


If the goal is to describe the generalized characteristics of what is
needed:
An absolutely flat broadcast domain does not scale...duh!
An absolutely flat L2 network does not scale...duh!
Partitioning the network into a large number of virtual broadcast domains
or L2VPNs/VLANs is what works for many adopters, as it supports the PMO.
This is what numerous existing standardized and proprietary solutions
offer, with various shades of grey attribute-wise (e.g. scaling, ordering
guarantees, properties when failures occur, broadcast containment, etc.).
The one observation is that a 24-bit VLAN tag seems to be the current gold
standard, both with the IEEE and with proprietary or proposed approaches.
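[Editor's note: the tag-size arithmetic behind that observation is simple; the segment counts below are just 2^12 and 2^24, with no other assumptions.]

```python
# Segment counts for the original 12-bit VLAN tag versus a 24-bit virtual
# network tag - plain arithmetic, no external assumptions.
vlan_12bit = 2 ** 12   # 4,096 segments: enough for one enterprise
tag_24bit = 2 ** 24    # 16,777,216 segments: multi-tenant cloud scale
print(vlan_12bit, tag_24bit)
```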

It would be doing the group a service if the issues with Ethernet were not
presented based on a view stuck in perhaps the 2004-2005 timeframe, or
perhaps even before the standardization of the original 12-bit VLAN tag,
let alone what has come since.

;-)
Dave

-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Thomas Narten
Sent: Wednesday, January 04, 2012 9:59 AM
To: Ashish Dalela (adalela)
Cc: dc@ietf.org
Subject: [dc] draft-dalela-dc-requirements-00.txt

Hi Ashish.

I had a look at this document, as it is focused on requirements. Thanks
for doing this.

One starting comment, as the document says:

>    Scalability hasn't generally been a standards consideration and the
>    problems of scaling are left to implementation. But, in the case of
>    cloud datacenters, scaling is the basic requirement, and all problems
>    of cloud datacenters arise due to scaling. The solution development
>    can't therefore ignore the scaling and optimality problem.

I disagree with the above. Scalability has always been one (of many)
factors that go into the development of a standard. Let's just take it as
a given that any solution has to scale adequately for the environment in
which it is to be deployed. Saying more than that (in general terms) is
probably not a useful discussion. To talk about scalability, one has to
talk about a specific technology and where it is or will be deployed.

Looking at Section 5, where the main requirements are listed:

>    5.1. The Basic Forwarding Problem
>
>    Traditionally, datacenter networks have used L2 or L3 technologies.
>    The need to massively scale virtualized hosts breaks both these
>    approaches. L2 networks can't be made to scale because of the high
>    number of broadcasts. L3 networks can't support host mobility, since
>    routing uses subnets and an IP cannot be moved out of that subnet.
>    Moving IP in a natively L3 network requires installing host routes at
>    one or more points in the path, and that is an approach that can't be
>    scaled.

I suspect there is general agreement that the above is a general
"problem". Having one big flat L2 in a data center is great for VM
migration and placement of services "any place, anytime", but can raise
scaling and other concerns. Pushing L3 all the way out to the edges
(e.g., ToR or hypervisor) makes it hard to place (or move) services/VMs
arbitrarily.

The above is one of the motivations behind the NVO3 work.

> 5.2. The Datacenter Inter-Connectivity Problem
>
>    There are limits to how much a datacenter would be scaled. Workloads
>    need to be placed closer to the clients to reduce latency and
>    bandwidth. Hence, datacenters need to be split into geographical
>    locations and connected over the Internet. Some of these datacenters
>    may be owned by different administrators, as in the case of private
>    and public cloud interconnectivity. Workloads can move between these
>    datacenters, similar to how they move within the datacenter.

In this section, my takeaway is that there will be multiple,
geographically separated data centers, and that they will need to be
connected together. I suspect everyone agrees with that.

But I don't see how this implies there is any specific IETF work that
needs doing. We already have geographically separated data centers, and
there are, e.g., plenty of VPN technologies available for connecting them
together.

What specifically is missing that prevents the above from being done
today? What is it that you think needs doing that can't be done with
existing standards?

> 5.3. The Multi-Tenancy Problem
>
>    Datacenters thus far have been wholly used by a single tenant. To
>    separate departments within a tenant, VLANs have been used. This
>    seemed sufficient for the number of segments an enterprise would
>    need. But this approach can't be extended to cloud datacenters.

I suspect you'll get a lot of agreement on this. And one of the key aims
of NVO3 is to address this.

Is the existing NVO3 approach not adequate for the above? If so why not?

> 5.4. The Technology-Topology Separation Problem
>
>    While large datacenters are becoming common, medium and small
>    datacenters will continue to exist. These may include a branch
>    office connected to a central office, or a small enterprise
>    datacenter that is connected to a huge public cloud. To move
>    workloads across these networks, the technologies used in the
>    datacenter must be agnostic of the topology employed in the various
>    sized datacenters.

>    A small datacenter may use a mesh topology. A medium datacenter may
>    use a three-tier topology. And a large datacenter may use a two-tier
>    multi-path architecture. It has to be recognized that all these
>    datacenters of various sizes need to interoperate. In particular, it
>    should be possible to use a common technology to connect large and
>    small datacenters, two large datacenters, or two small datacenters.

Isn't this already possible, and indeed, happening today?

What IETF work is needed? What standards gap needs filling?

>    5.5. The Network Convergence Problem
>
>    Cloud datacenters will be characterized by elasticity. That means
>    that virtual resources are constantly created and destroyed.
>    Typical hardware and software reliabilities of today mean that
>    failures at scale will be fairly common, and automated recovery
>    mechanisms will need to be put in place. When combined with
>    workload mobility for the sake of resource optimization and
>    improving utilization, the churn in the network forwarding tables
>    can be very significant.

What work does the above imply that the IETF needs to do?

>    Mobility also affects virtualized network devices, such as virtual
>    switches, firewalls, load-balancers, etc. For instance, when a
>    server fails and all the VMs are relocated, the associated virtual
>    switch and firewall must also be relocated. This means that any
>    assumption in mobility that the network is a static firmament on
>    which hosts are dynamically attached becomes false. We have to
>    assume that the network is as dynamic as the hosts themselves.

This here is interesting. The implication is that when moving a VM, either

a) a FW or LB (or both) may also have to be moved, or

b) some sort of path enforcement is needed that ensures traffic from the
(now moved) VM continues to go through the same LB or FW as before.

Do I understand that correctly? And if so, what is the IETF work that
needs to be done to make all this happen?

>  5.6. The East-West Traffic Problem

Is this section saying anything more than that there is a need for
multipathing for East-West traffic?

> 5.7. The Network SLA Problem
>
>    Multi-tenant networks need to protect all tenants from overusing
>    network resources. For example, high-traffic load from one tenant
>    should not starve another tenant of bandwidth. Note that in a
>    multi-tenant environment, no tenant has full control or visibility
>    of what other tenants are doing, and how problems can be fixed. A
>    real-time debugging of such problems is very hard for a provider.

...

>    Second, mechanisms to measure and guarantee network SLAs will have
>    to employ active flow management to guarantee bandwidth to all
>    tenants and keep the network provisioned only to the level required.
>    Flow management can be integrated as part of existing forwarding
>    techniques or may need new techniques. Network SLAs can play an
>    important role in determining if sufficient bandwidth is available
>    before a VM is moved to a new location.

Can this not be done today? What specific IETF work would be needed to
support the enforcement of SLAs?

Thomas

_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From aldrin.isaac@gmail.com  Thu Jan  5 19:37:07 2012
Return-Path: <aldrin.isaac@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 1A6EA21F8504 for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 19:37:07 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.193
X-Spam-Level: 
X-Spam-Status: No, score=-3.193 tagged_above=-999 required=5 tests=[AWL=0.405,  BAYES_00=-2.599, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id Pp3JdeL3puQF for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 19:37:04 -0800 (PST)
Received: from mail-qy0-f172.google.com (mail-qy0-f172.google.com [209.85.216.172]) by ietfa.amsl.com (Postfix) with ESMTP id 6F6B611E8072 for <dc@ietf.org>; Thu,  5 Jan 2012 19:37:04 -0800 (PST)
Received: by qcsf15 with SMTP id f15so794605qcs.31 for <dc@ietf.org>; Thu, 05 Jan 2012 19:37:04 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=subject:mime-version:content-type:from:in-reply-to:date:cc :message-id:references:to:x-mailer; bh=1JIVIPSRUjHZuWqrHWxVs5Xsf4gB2QegREuwTI1GCaU=; b=iW3ECAzJCexV371AZXK8c92eLN5TXrXeTX00W0fRJQImby69AjzDfNOmt6qc9lh0Ht XUKMkc/ZyWlF7a56wNcAkm5SWWnqO/uzUdraap08Jl16MIIZCbDsu4q8B59HdiaIMbtY wHq3lZnRbiuXNInaiwOP/KD89pCn4MSzSQyIM=
Received: by 10.229.75.221 with SMTP id z29mr1597074qcj.128.1325821023958; Thu, 05 Jan 2012 19:37:03 -0800 (PST)
Received: from mymac.home (ool-44c1c730.dyn.optonline.net. [68.193.199.48]) by mx.google.com with ESMTPS id cf18sm68421547qab.9.2012.01.05.19.37.02 (version=TLSv1/SSLv3 cipher=OTHER); Thu, 05 Jan 2012 19:37:02 -0800 (PST)
Mime-Version: 1.0 (Apple Message framework v1251.1)
Content-Type: multipart/alternative; boundary="Apple-Mail=_2BA89F04-A464-492B-87A1-B547CA99028C"
From: Aldrin Isaac <aldrin.isaac@gmail.com>
In-Reply-To: <AF48CEB4-18A6-45CE-891B-ACFE599C8FB4@kumari.net>
Date: Thu, 5 Jan 2012 22:37:01 -0500
Message-Id: <27D5DF12-DA16-42C4-A33B-84EBFFFC3A45@gmail.com>
References: <AFA7E5B6-4ABE-46FA-95B2-80BC5D3F62DA@netapp.com><1FEE3F8F5CCDE64C9A8E8F4AD27C19EE762EB4@szxeml525-mbs.china.huawei.com><618BE8B40039924EB9AED233D4A09C5102B250DA@XMB-BGL-416.cisco.com><4EFC947A.4020007@riw.us><7C4DFCE962635144B8FAE8CA11D0BF1E05A5CC9B4B@MX14A.corp.emc.com><618BE8B40039924EB9AED233D4A09C5102B25360@XMB-BGL-416.cisco.com><CAMXVrt6Spg9LVrwUoLm3XPGFt-stNqrKsO+SE6QsN4raEb1sLw@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B2538D@XMB-BGL-416.cisco.com><CAMXVrt4X+wB4AncQFpyt3H=iVg14rc_gHN8uF6D8n4XdAyBf1Q@mail.gmail.com><618BE8B40039924EB9AED233D4A09C5102B25647@XMB-BGL-416.cisco.com><4F030418.1070202@raszuk.net><618BE8B40039924EB9AED233D4A09C5102B25683@XMB-BGL-416.cisco.com><4F031689.1050303@raszuk.net><618BE8B40039924EB9AED233D4A09C5102B256A3@XMB-BGL-416.cisco.com><7C4DFCE962635144B8FAE8CA11D0BF1E05A5E7BBA5@MX14A.corp.emc.com> <CAMXVrt5g-2WCBvm0kcpFx27KG9kdPSBzeRJ20F-degvzaEhYoQ@mail.gmail.com> <618BE8B40039924EB9AED233D4A09C5102B256D4@XMB-BGL-416.cisco .com> <AF48CEB4-18A6-45CE-891B-ACFE599C8FB4@kumari.net>
To: Warren Kumari <warren@kumari.net>
X-Mailer: Apple Mail (2.1251.1)
Cc: Pedro Marques <pedro.r.marques@gmail.com>, david.black@emc.com, "Ashish Dalela \(adalela\)" <adalela@cisco.com>, dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 06 Jan 2012 03:37:07 -0000

--Apple-Mail=_2BA89F04-A464-492B-87A1-B547CA99028C
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=us-ascii


> The only thing that the network needs to know is the routes to the
> hypervisors / physical machines -- this is a solved problem.
> The VM addresses and routes are only visible to the [gateways,
> hypervisors with VMs in that overlay, other VMs in the same overlay,
> mapping server].
>
> For a really old overview:
> http://tools.ietf.org/html/draft-wkumari-dcops-l3-vmmobility-00

Is the issue that needs to be resolved for overlays only regarding the
number of ARPs that need to be managed by the gateway? Could this issue
not be resolved operationally by, say, having more gateways?

What other issues exist and need to be resolved for overlays, besides
deciding on a tunneling encapsulation? Is the IETF expected to
standardize control protocols for overlays?

There is clearly a need for server-based virtual networks as well as a
need for scalable network-based Ethernet virtual networks for DC.
Shouldn't these be separate conversations?

--Apple-Mail=_2BA89F04-A464-492B-87A1-B547CA99028C--

From adalela@cisco.com  Thu Jan  5 21:03:22 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 5ED8121F87DA for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 21:03:22 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.427
X-Spam-Level: 
X-Spam-Status: No, score=-2.427 tagged_above=-999 required=5 tests=[AWL=0.171,  BAYES_00=-2.599, HTML_MESSAGE=0.001]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id occAUSyIGB68 for <dc@ietfa.amsl.com>; Thu,  5 Jan 2012 21:03:19 -0800 (PST)
Received: from bgl-iport-2.cisco.com (bgl-iport-2.cisco.com [72.163.197.26]) by ietfa.amsl.com (Postfix) with ESMTP id 24C9121F87CF for <dc@ietf.org>; Thu,  5 Jan 2012 21:03:17 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=17098; q=dns/txt; s=iport; t=1325826198; x=1327035798; h=mime-version:subject:date:message-id:in-reply-to: references:from:to:cc; bh=SpaL64z77TwycmC19WkB6sUb4B1r79C2YYC0mzGNO3s=; b=eeOLxWPxgpF3nJUaSToLTAR51I726gnr1WrkmQjRURj1Cb4KDkPg/ecp FxTPUovDq8nboChEUI0GEd6OEcpp3mqV/7fHdErHbSpMpHXg36h9a0U5b BUiTkV+OufTp39+Is/fAsKDkNIODNyD51JK1R1eanQRGD45WUUafIZKu+ E=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AgcFAPx/Bk9Io8UY/2dsb2JhbABDggVJohkBiR+BcgEBAQQSAQkRAzwNEAIBCBEEAQELBhcBBgEgJQkIAQEECwgIEweHYJdoAZ4Siy5jBIg3lz+HTQ
X-IronPort-AV: E=Sophos;i="4.71,466,1320624000"; d="scan'208,217";a="2875836"
Received: from vla196-nat.cisco.com (HELO bgl-core-2.cisco.com) ([72.163.197.24]) by bgl-iport-2.cisco.com with ESMTP; 06 Jan 2012 05:03:16 +0000
Received: from xbh-bgl-412.cisco.com (xbh-bgl-412.cisco.com [72.163.129.202]) by bgl-core-2.cisco.com (8.14.3/8.14.3) with ESMTP id q0653GWH030290; Fri, 6 Jan 2012 05:03:16 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-412.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Fri, 6 Jan 2012 10:33:16 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: multipart/alternative; boundary="----_=_NextPart_001_01CCCC30.7FCDA88F"
Date: Fri, 6 Jan 2012 10:33:13 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B25D23@XMB-BGL-416.cisco.com>
In-Reply-To: <27D5DF12-DA16-42C4-A33B-84EBFFFC3A45@gmail.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczMJHkIMDhYiFQ0TFiF0gJ5cDLwJwABfVnw
References: <AF48CEB4-18A6-45CE-891B-ACFE599C8FB4@kumari.net> <27D5DF12-DA16-42C4-A33B-84EBFFFC3A45@gmail.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Aldrin Isaac" <aldrin.isaac@gmail.com>
X-OriginalArrivalTime: 06 Jan 2012 05:03:16.0554 (UTC) FILETIME=[800A0AA0:01CCCC30]
Cc: Pedro Marques <pedro.r.marques@gmail.com>, david.black@emc.com, dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 06 Jan 2012 05:03:22 -0000

This is a multi-part message in MIME format.

------_=_NextPart_001_01CCCC30.7FCDA88F
Content-Type: text/plain;
	charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable

Aldrin,


I like the way you say - "the only thing the network needs to know" -
:-) The same could be said of the hypervisor as well :-)


The problem is that once you start pushing the intelligence into the
server stack, you will have to keep pushing more and more  - e.g.
firewalls, multicast, broadcast, packet inspection, flow mgmt, etc. You
will find that you are re-inventing on the overlay everything that
exists on the underlay. The challenges are stacked up against the
overlay, not the underlay.


You will also find that you need to communicate between the overlay and
the underlay to get the desired bandwidth, QoS, flow mgmt, multipathing,
tree optimization, and that is never going to be easy. You will also
find that hardware accelerated networks perform better, deliver the
requisite high-availability, and consume less energy, as compared to
doing the same thing in the hypervisor.


Having said that, I recognize there are two models of virtualization
that we know of. The "overlay" model is like a network hypervisor in
which individual customers are like VMs. The "multiplexed" model is like
a multi-user OS. Both multi-user OS and the multi-VM hypervisor are
isolated equally well. But they have different use-cases.


The main use-case for the VM hypervisor model is to multiplex multiple
different OS/tools/application "environments" onto the same HW, giving
each VM the ability to shut itself down, reboot, or do whatever it wants.
In the case of networks, that "environment" is a set of protocols. So, if
you have one tenant running OSPF, another one running IS-IS, and yet
another running BGP, and they want to keep playing with their network
configuration, it makes sense to run these in the overlay or the VM mode.

=20

If everyone has the same environment (i.e. runs the same protocols, and
expects common controls), it makes more sense to run them in the
multi-user (multiplexed) mode rather than multi-VM (overlay) mode. The
multi-VM model delegates administration to the VM owner. The multi-user
model owns the administration while letting a "tenant" use the network.
This is a conscious choice - is cloud going to open up the configuration
of network like they open up the administration of VM? From what I know,
the answer is "no".

=20

These architectural models exist outside the networking domain, and I
would reach out into another domain to borrow the intuitions, use-cases
and challenges. Architecturally, you will find the same challenges in
managing the overlay model as exist in the multi-VM model. If there are
no benefits to be gained from that, you are better off in the
multiplexed mode.

=20

But then again - beauty lies in the eyes of the beholder.

=20

Thanks, Ashish

=20

=20

=20

From: Aldrin Isaac [mailto:aldrin.isaac@gmail.com]
Sent: Friday, January 06, 2012 9:07 AM
Cc: Ashish Dalela (adalela); Pedro Marques; david.black@emc.com; dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect





The only thing that the network needs to know is the routes to the
hypervisors / physical machines -- this is a solved problem.
The VM addresses and routes are only visible to the gateways, the
hypervisors with VMs in that overlay, other VMs in the same overlay,
and the mapping server.

For a really old overview:
http://tools.ietf.org/html/draft-wkumari-dcops-l3-vmmobility-00
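[Editor's note: the division of labor described above - the underlay routes only to hypervisors, while a mapping service resolves which hypervisor currently hosts a given VM address - can be sketched as a toy model. All class and method names below are invented for illustration; they are not taken from the draft or any implementation.]

```python
# Toy sketch of the overlay mapping model: the underlay only needs routes
# to hypervisor addresses; a per-overlay directory maps VM address to the
# hypervisor that currently hosts it. Names are invented for illustration.

class MappingServer:
    """Directory of VM address -> hosting hypervisor address, per overlay."""

    def __init__(self):
        # {overlay_id: {vm_ip: hypervisor_ip}}
        self._maps = {}

    def register(self, overlay_id, vm_ip, hypervisor_ip):
        """Called when a VM is placed on (or moves to) a hypervisor."""
        self._maps.setdefault(overlay_id, {})[vm_ip] = hypervisor_ip

    def resolve(self, overlay_id, vm_ip):
        # Only members of the overlay (gateways, hypervisors, VMs) query this.
        return self._maps.get(overlay_id, {}).get(vm_ip)


def encapsulate(ms, overlay_id, vm_ip, payload):
    """Return (outer_dst, inner_dst, payload) for tunneling, or None."""
    hv = ms.resolve(overlay_id, vm_ip)
    if hv is None:
        return None  # unknown VM: drop, or punt to a gateway
    return (hv, vm_ip, payload)


ms = MappingServer()
ms.register("tenant-a", "10.0.0.5", "192.0.2.11")
print(encapsulate(ms, "tenant-a", "10.0.0.5", b"hello"))
```

Note that when a VM moves, only its mapping entry changes; the underlay routes to the hypervisors stay put, which is the "solved problem" part.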

Is the issue that needs to be resolved for overlays only the number of
ARP entries that must be managed by the gateway?  Could this issue not
be resolved operationally by, say, having more gateways?

What other issues exist and need to be resolved for overlays, besides
deciding on a tunneling encapsulation?  Is the IETF expected to
standardize control protocols for overlays?

There is clearly a need for server-based virtual networks as well as a
need for scalable network-based Ethernet virtual networks for the DC.
Shouldn't these be separate conversations?


------_=_NextPart_001_01CCCC30.7FCDA88F--

From rbonica@juniper.net  Fri Jan  6 07:11:36 2012
Return-Path: <rbonica@juniper.net>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 242F321F87BA for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 07:11:36 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.52
X-Spam-Level: 
X-Spam-Status: No, score=-106.52 tagged_above=-999 required=5 tests=[AWL=0.079, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id wzGG8kjZ8ARP for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 07:11:35 -0800 (PST)
Received: from exprod7og110.obsmtp.com (exprod7og110.obsmtp.com [64.18.2.173]) by ietfa.amsl.com (Postfix) with ESMTP id 9A9BE21F87C9 for <dc@ietf.org>; Fri,  6 Jan 2012 07:11:16 -0800 (PST)
Received: from P-EMHUB03-HQ.jnpr.net ([66.129.224.36]) (using TLSv1) by exprod7ob110.postini.com ([64.18.6.12]) with SMTP ID DSNKTwcPE0QgA4RuBwoQmS0ic+JqfX/6drMp@postini.com; Fri, 06 Jan 2012 07:11:35 PST
Received: from p-emfe01-wf.jnpr.net (172.28.145.24) by P-EMHUB03-HQ.jnpr.net (172.24.192.37) with Microsoft SMTP Server (TLS) id 8.3.213.0; Fri, 6 Jan 2012 07:06:25 -0800
Received: from EMBX01-WF.jnpr.net ([fe80::1914:3299:33d9:e43b]) by p-emfe01-wf.jnpr.net ([fe80::d0d1:653d:5b91:a123%11]) with mapi; Fri, 6 Jan 2012 10:06:24 -0500
From: Ronald Bonica <rbonica@juniper.net>
To: "Eggert, Lars" <lars@netapp.com>
Date: Fri, 6 Jan 2012 10:06:22 -0500
Thread-Topic: [dc] interim dates decided?
Thread-Index: AQHMxHKmKTq3rKgeQE2pr9b6CAG3SZXyrjIAgADxFwCACswGgIABFTQw
Message-ID: <13205C286662DE4387D9AF3AC30EF456D74EEEEA67@EMBX01-WF.jnpr.net>
References: <4EF98391.6010500@piuha.net> <DD803EEF-7ED8-455E-B8BD-0F05279814A1@kumari.net> <13205C286662DE4387D9AF3AC30EF456D74EB56784@EMBX01-WF.jnpr.net> <3F24AB27-AB9D-48BE-AD66-25133FD4D588@netapp.com>
In-Reply-To: <3F24AB27-AB9D-48BE-AD66-25133FD4D588@netapp.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Cc: "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] interim dates decided?
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 06 Jan 2012 15:11:36 -0000

Please stand by. We are working out the details.

> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> Eggert, Lars
> Sent: Thursday, January 05, 2012 9:34 AM
> To: Ronald Bonica
> Cc: dc@ietf.org
> Subject: Re: [dc] interim dates decided?
>
> Hi,
>
> On Dec 29, 2011, at 18:41, Ronald Bonica wrote:
> > The Doodle Poll indicates that February 22-23 is the most popular
> > date, followed by February 20-21.
> >
> > We are currently evaluating an offer to host the meeting in suburban
> > Boston on February 21-22. If you indicated that you would be available
> > on February 20-21 or February 22-23, but cannot be available on
> > February 21-22, please send me unicast email.
> >
> > Please *do not make your travel reservations yet*. We are still
> > evaluating whether the proposed venue will work for us. If it doesn't
> > work, we may meet somewhere else.
>
> Is there a date and place now? It would be nice to be able to book travel
> sooner rather than later.
>
> Thanks,
> Lars

From cdl@asgaard.org  Fri Jan  6 09:20:39 2012
Return-Path: <cdl@asgaard.org>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 6838021F899F for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 09:20:39 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -5.684
X-Spam-Level: 
X-Spam-Status: No, score=-5.684 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, J_CHICKENPOX_13=0.6, RCVD_IN_DNSWL_MED=-4, SARE_MILLIONSOF=0.315]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id cqEIdpHlKszw for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 09:20:38 -0800 (PST)
Received: from asgaard.org (odin.asgaard.org [204.29.151.68]) by ietfa.amsl.com (Postfix) with ESMTP id 8805D21F881B for <dc@ietf.org>; Fri,  6 Jan 2012 09:20:38 -0800 (PST)
Received: from localhost (localhost [127.0.0.1]) by asgaard.org (Postfix) with ESMTP id 78FC0A87178; Fri,  6 Jan 2012 17:20:37 +0000 (UTC)
X-Virus-Scanned: amavisd-new at asgaard.org
Received: from asgaard.org ([127.0.0.1]) by localhost (odin.asgaard.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id JLmTxOXaOYT0; Fri,  6 Jan 2012 17:20:35 +0000 (UTC)
Received: from [10.10.0.163] (unknown [207.228.237.150]) by asgaard.org (Postfix) with ESMTPSA id 4867CA8716D; Fri,  6 Jan 2012 17:20:25 +0000 (UTC)
Mime-Version: 1.0 (Apple Message framework v1251.1)
Content-Type: text/plain; charset=utf-8
From: Christopher LILJENSTOLPE <cdl@asgaard.org>
In-Reply-To: <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE764022@szxeml525-mbs.china.huawei.com>
Date: Fri, 6 Jan 2012 09:20:12 -0800
Content-Transfer-Encoding: quoted-printable
Message-Id: <82D68180-B0A1-48B4-B1A5-5115D55AE2BD@asgaard.org>
References: <13205C286662DE4387D9AF3AC30EF456D74E7BD952@EMBX01-WF.jnpr.net> <6665BC1FEA04AB47B1F75FA641C43BC0926D106C@FHDP1LUMXC7V41.us.one.verizon.com> <201112211830.pBLIUiZA017188@cichlid.raleigh.ibm.com> <13205C286662DE4387D9AF3AC30EF456D74E91DBFD@EMBX01-WF.jnpr.net> <618BE8B40039924EB9AED233D4A09C5102A4881E@XMB-BGL-416.cisco.com> <4EF7B019.3030202@riw.us> <201112281700.pBSH0kB2011575@cichlid.raleigh.ibm.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE76387F@szxeml525-mbs.china.huawei.com> <229A8E99-6EB2-49E5-B530-BA0F6C7C40AC@asgaard.org> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE7639B3@szxeml525-mbs.china.huawei.com> <40F3FB9C-CBCB-41ED-A1E7-FB99DB3A928D@asgaard.org> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE764022@szxeml525-mbs.china.huawei.com>
To: Xuxiaohu <xuxiaohu@huawei.com>
X-Mailer: Apple Mail (2.1251.1)
Cc: Thomas Narten <narten@us.ibm.com>, Russ White <russw@riw.us>, "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] Elevator Pitch
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 06 Jan 2012 17:20:39 -0000

Greetings - in-line,

On 03Jan2012, at 19.11, Xuxiaohu wrote:

>
>> -----Original Message-----
>> From: Christopher LILJENSTOLPE [mailto:cdl@asgaard.org]
>> Sent: 4 January 2012 1:13
>> To: Xuxiaohu
>> Cc: Thomas Narten; Russ White; dc@ietf.org
>> Subject: Re: [dc] Elevator Pitch
>>
>> Greetings,
>>
>>
>> On 29Dec2011, at 20.01, Xuxiaohu wrote:
>>
>>>
>>>> -----Original Message-----
>>>> From: Christopher LILJENSTOLPE [mailto:cdl@asgaard.org]
>>>> Sent: 30 December 2011 1:20
>>>> To: Xuxiaohu
>>>> Cc: Thomas Narten; Russ White; dc@ietf.org
>>>> Subject: Re: [dc] Elevator Pitch
>>>>
>>>> Greetings Xuxiaohu,
>>>>
>>>> On 29Dec2011, at 00.55, Xuxiaohu wrote:
>>>>
>>>>> Hi Thomas,
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Thomas Narten
>>>>>> Sent: 29 December 2011 1:01
>>>>>> To: Russ White
>>>>>> Cc: dc@ietf.org
>>>>>> Subject: Re: [dc] Elevator Pitch
>>>>>>
>> <snip>
>>
>>> Hi Chris,
>>>
>>> Would you please give a concrete example where the communication
>>> between different tenants is very common in the multi-tenant cloud
>>> data center?
>>
>> Here's the question - if you are a tenant in a data center, and you are
>> writing a mash-up application against some other content provider, how
>> do you know if they are in the same data center, or not?  My guess is
>> that there is quite a bit of traffic between tenants of EC2, btw.  I
>> know that that was the intent at my last gig - we wanted SaaS-like
>> providers to live in our DCs and develop an eco-system around our core
>> services.  Other major cross-DC traffic - how about all of my services
>> like spam filtering, backup, etc.?  In a DC, I may call them "core
>> services" but they are, in fact, another tenant.  How about large-scale
>> content providers that mash up between their own offerings?  Many of
>> those properties are viewed as "separate customers" by the
>> infrastructure teams (can't name names here).  Any inter-offering
>> mash-ups would definitely be cross-DC.
>
> Hi Chris,
>
> I have no doubt about the possibility you mentioned above. However,
> once you start considering how to optimize the forwarding path of
> inter-tenant traffic within the scope of L2VPN or L3VPN solutions,
> while taking address-space overlap, firewall policy issues, etc. into
> account, you will find it is a much more complex job. Whether or not
> that optimization is worthwhile depends heavily on whether the volume
> of inter-tenant traffic is significant.

This is one that, if we miss, will come back and bite us.  If you have
no way of getting traffic between "tenants" in a data centre, I have NO
idea how you will deal with shared services (such as storage, backup,
transit, etc.) as well as the classical inter-tenant problem.  If the
proposal is to just send it up the tree (as is done today), you will
continue to have the same issues as we do now (which are driving all
this work).  If you doubt this, ask yourself why there is a drive for
greater cross-sectional bandwidth in the data center.  If you believe
that the classical tree solution will work, then we are all chasing our
tails here.

	Chris

>
> Best regards,
> Xiaohu
>
>>> Best regards,
>>> Xiaohu
>>>
>>>>>
>>>>>
>>>>>> Or will they want an alternative approach?
>>>>>>
>>>>>>> 2. Why does this mobility need to be at layer 2 specifically? Are we
>>>>>>> assuming DDNS and other sorts of solutions in this space will simply
>>>>>>> never be fast enough/scale far enough/etc?
>>>>>>
>>>>>> Like it or not, the key requirement for VM mobility is that the VM's
>>>>>> IP address does not change. That means the VM can't really move from
>>>>>> one IP subnet to another. That means either moving to bigger and
>>>>>> bigger L2s (all under one IP subnet) as the DC expands or the need to
>>>>>> inject /32 host routes.
>>>>>
>>>>> In the DCI scenario, where the PE routers are usually implemented at
>>>>> the aggregation switches or even the core switches, the PE routers
>>>>> would need a much larger forwarding table. If a routing table holding
>>>>> millions of entries, which is available on most of today's high-end
>>>>> routers, were still not large enough, on-demand FIB installation or
>>>>> on-demand route announcement mechanisms could be used to scale the
>>>>> solution further. Note that the trigger for the FIB installation or
>>>>> route announcement is ARP request packets rather than data packets.
>>>>> Hence it will not cause the so-called initial packet loss or latency
>>>>> issue.
>>>>>
>>>>>> Neither of those approaches seems particularly scalable/desirable if
>>>>>> you look 10 years down the road and think of 1M+ physical machines in
>>>>>> a DC.
>>>>>
>>>>> Maybe we should also take the development speed of routing/switching
>>>>> chip and CPU technologies into account :)
>>>>
>>>> It's more a question of cost/performance of off-chip memory/TCAMs.
>>>> That is a slightly different curve :)
>>>>
>>>> 	Chris
>>>>
>>>>>
>>>>> Best regards,
>>>>> Xiaohu
>>>>>
>>>>>> Thomas
>>>>>>
>>>>>> _______________________________________________
>>>>>> dc mailing list
>>>>>> dc@ietf.org
>>>>>> https://www.ietf.org/mailman/listinfo/dc
>>>>
>>>> --
>>>> 李柯睿
>>>> Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
>>>> Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf
>>>
>>
>> --
>> 李柯睿
>> Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
>> Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf
>

--
李柯睿
Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf
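
[Editor's note: the on-demand FIB mechanism Xiaohu describes in the quoted exchange - installing a /32 host route when an ARP request for the VM is observed, rather than on the first data packet - can be sketched as a toy model. The class and method names below are invented for illustration, not taken from any vendor implementation.]

```python
# Toy model of ARP-triggered on-demand FIB installation: the control plane
# installs a /32 host route when it sees an ARP request for a VM, so the
# first data packet already finds a route (no initial-packet loss).
# All names are invented for illustration.

class OnDemandFib:
    def __init__(self, capacity):
        self.capacity = capacity
        self.routes = {}          # vm_ip -> next_hop (/32 host routes)

    def on_arp_request(self, vm_ip, next_hop):
        """Control-plane trigger: ARP request seen, install the host route."""
        if vm_ip not in self.routes and len(self.routes) >= self.capacity:
            # Naive eviction of the oldest entry when the table is full;
            # a real device would use a smarter policy.
            self.routes.pop(next(iter(self.routes)))
        self.routes[vm_ip] = next_hop

    def forward(self, vm_ip):
        """Data-plane lookup: hits only if ARP pre-installed the route."""
        return self.routes.get(vm_ip)


fib = OnDemandFib(capacity=2)
fib.on_arp_request("10.1.1.1", "pe-router-1")
# By the time the first data packet arrives, the route is already there.
print(fib.forward("10.1.1.1"))
```

The key property this sketches is the one Xiaohu notes: because the ARP request, not the data packet, is the installation trigger, a FIB miss on a data packet never happens for a host that was ARPed first.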


From cdl@asgaard.org  Fri Jan  6 09:24:26 2012
Return-Path: <cdl@asgaard.org>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 04DFC21F854A for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 09:24:26 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -6.142
X-Spam-Level: 
X-Spam-Status: No, score=-6.142 tagged_above=-999 required=5 tests=[AWL=0.458,  BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id QuX9Hdm5WwJz for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 09:24:25 -0800 (PST)
Received: from asgaard.org (odin.asgaard.org [204.29.151.68]) by ietfa.amsl.com (Postfix) with ESMTP id 35B1221F8545 for <dc@ietf.org>; Fri,  6 Jan 2012 09:24:25 -0800 (PST)
Received: from localhost (localhost [127.0.0.1]) by asgaard.org (Postfix) with ESMTP id 12CE0A87216; Fri,  6 Jan 2012 17:24:25 +0000 (UTC)
X-Virus-Scanned: amavisd-new at asgaard.org
Received: from asgaard.org ([127.0.0.1]) by localhost (odin.asgaard.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id YKm9DqVZJG2e; Fri,  6 Jan 2012 17:24:24 +0000 (UTC)
Received: from [10.10.0.163] (unknown [207.228.237.150]) by asgaard.org (Postfix) with ESMTPSA id E68C3A87202; Fri,  6 Jan 2012 17:24:22 +0000 (UTC)
Mime-Version: 1.0 (Apple Message framework v1251.1)
Content-Type: text/plain; charset=utf-8
From: Christopher LILJENSTOLPE <cdl@asgaard.org>
In-Reply-To: <4F046F2A.4060707@riw.us>
Date: Fri, 6 Jan 2012 09:24:10 -0800
Content-Transfer-Encoding: quoted-printable
Message-Id: <0065D266-FE3B-424C-8A0D-3E7BF272262C@asgaard.org>
References: <13205C286662DE4387D9AF3AC30EF456D74E7BD952@EMBX01-WF.jnpr.net><6665BC1FEA04AB47B1F75FA641C43BC0926D106C@FHDP1LUMXC7V41.us.one.verizon.com><201112211830.pBLIUiZA017188@cichlid.raleigh.ibm.com> <13205C286662DE4387D9AF3AC30EF456D74E91DBFD@EMBX01-WF.jnpr.net> <618BE8B40039924EB9AED233D4A09C5102A4881E@XMB-BGL-416.cisco.com> <4EF7B019.3030202@riw.us> <201112281700.pBSH0kB2011575@cichlid.raleigh.ibm.com> <4EFC826C.80708@riw.us> <682C5C0D-10FD-49D7-BF48-28EB6EFBA72B@asgaard.org> <4EFF0DCA.5090707@riw.us> <D7F34AF6-E93C-44F5-8C60-B3E9E8C2E38C@asgaard.org> <4F046F2A.4060707@riw.us>
To: Russ White <russw@riw.us>
X-Mailer: Apple Mail (2.1251.1)
Cc: dc@ietf.org
Subject: Re: [dc] Elevator Pitch
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 06 Jan 2012 17:24:26 -0000

Greetings,

On 04Jan2012, at 07.24, Russ White wrote:

>
>
>> If that caching modality is not monitored, I agree.  However, I
>> don't believe that this needs to be an unbounded asynchronous cache.
>> However, if we can adjust the cache system based on ratios, etc. - and
>> monitor it, it's no longer hidden, nor necessarily a cliff.  I'm not
>> saying caching is the only way, but it may be one approach.
>
> So long as the cache isn't "hidden from the hosts" in the network, but
> rather is something the host knows about and can interact with
> (understands the state of), caching can be useful. When application
> state becomes unbundled from cache state, things tend to slow down
> beyond "useful," I think, to provide the "working room" between the
> cache and the application.

Hence one of the drivers of SDNs.
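The bounded, monitored cache being discussed above can be sketched roughly as follows. This is an illustrative sketch with invented names, not code from the thread: an LRU cache with a fixed capacity that exposes its hit ratio, so the operator can observe cache pressure (and react to it) instead of discovering a hidden cliff.

```python
from collections import OrderedDict

class MonitoredCache:
    """Bounded LRU cache that exposes its hit ratio, so the control
    plane can observe and react to cache pressure rather than hit a
    hidden cliff. Illustrative only; names are invented."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()
        self.hits = 0
        self.misses = 0

    def lookup(self, key, resolve):
        # Hit: refresh recency and count it.
        if key in self.entries:
            self.hits += 1
            self.entries.move_to_end(key)
            return self.entries[key]
        # Miss: resolve (e.g. an ARP/ND or directory query), then
        # insert and evict the least-recently-used entry if over
        # capacity -- the cache stays bounded by construction.
        self.misses += 1
        value = resolve(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)
        return value

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 1.0
```

A monitoring loop could then watch `hit_ratio()` and grow the capacity (or alert) when the ratio degrades, which is the "adjust based on ratios" idea above.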

>
>>> I'm not certain I understand this... I think you mean like a DFZ, a
>>> control plane that knows every possible destination. But you have
>>> to separate knowing every possible destination from knowing every
>>> possible route to that destination. Even the DFZ in the 'net is
>>> really an aggregated suboptimal subset. I don't know of any
>>> network on this scale that has an optimal route to every
>>> destination, and I don't think it's really possible to build one
>>> unless you want to make processing power and control plane
>>> bandwidth usage unbounded.
>>
>> For an unbounded network, I agree.  However, if it is within a
>> bounded subset (i.e. a dc or collection of dc's) I believe it is
>> possible.
>
> Except that we are already talking about relatively unbounded sizes, I
> think --100 million routes (well, not really unbounded, but bounded at a
> point far beyond what we have today).

I'm interested in the 100 M routes.  I'm not aware, at this time, of any
data center with 100 M nodes in it.  If we do end up with that size,
then yes, it does start getting interesting.

>
>> However, if that set of optimal paths is computed for every
>> source/dest pair (or at least for every unique best path) once,
>> based on a global topological/demand view, a "global" set of best
>> paths may be accomplished within that constrained universe.
>
> Assuming you're doing the computation on a very high end set of hardware
> --which implies "off line," I think. But this is a different model from
> what we deal with today, and I'm not certain how willing the group would
> be to go in that direction.
>
> Fast reroute (see the other thread with Yakov) has made it possible to
> precompute fast backup paths locally, potentially solving one of the
> various problems with off line computation. So long as you have the
> second stage react fast enough not to cause double failure failures...

That would be one approach.
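The precompute-backup-paths idea mentioned above can be sketched roughly as follows. This is an illustrative sketch with invented names, not the thread's actual proposal: compute a primary shortest path offline from the global topology, then recompute with the primary's first link pruned to obtain a backup that survives that link's failure (a crude stand-in for real fast-reroute computation).

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances and predecessors from src.
    graph: {node: {neighbor: cost}}."""
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def extract_path(prev, src, dst):
    # Walk predecessors back from dst to src.
    p = [dst]
    while p[-1] != src:
        p.append(prev[p[-1]])
    return p[::-1]

def primary_and_backup(graph, src, dst):
    """Precompute a primary path plus a backup that avoids the
    primary's first link. Illustrative only."""
    _, prev = dijkstra(graph, src)
    primary = extract_path(prev, src, dst)
    # Prune the primary's first link and recompute for the backup.
    pruned = {u: dict(nbrs) for u, nbrs in graph.items()}
    pruned[primary[0]].pop(primary[1], None)
    dist2, prev2 = dijkstra(pruned, src)
    backup = extract_path(prev2, src, dst) if dst in dist2 else None
    return primary, backup
```

Run once per source/destination pair (or per unique best path) against the global view, this gives a precomputed backup to install alongside the primary; the second-stage reaction speed is then an installation problem, not a computation problem.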

>
>> Agreed, if there are destinations hidden by aggregation - however, a
>> "global" view may not have that problem.
>
> The key to a "global view," done off line (again, assuming this is what
> you're driving at), is that you can actually optimize aggregation
> --there are heuristics in existence that can do so, and I'm certain
> research would yield more. The tradeoff is in the details of getting
> forwarding state fast enough, building a separate control plane network,
> and things like that.

Yup - it's all fun.
>
> :-)
>=20
> Russ
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc

-- 
李柯睿
Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf


From rbonica@juniper.net  Fri Jan  6 10:52:16 2012
Return-Path: <rbonica@juniper.net>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id EFE8E21F89E1 for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 10:52:16 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.522
X-Spam-Level: 
X-Spam-Status: No, score=-106.522 tagged_above=-999 required=5 tests=[AWL=0.077, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id RVaUUdsc2h8G for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 10:52:16 -0800 (PST)
Received: from exprod7og109.obsmtp.com (exprod7og109.obsmtp.com [64.18.2.171]) by ietfa.amsl.com (Postfix) with ESMTP id 1CC4221F89D0 for <dc@ietf.org>; Fri,  6 Jan 2012 10:52:16 -0800 (PST)
Received: from P-EMHUB03-HQ.jnpr.net ([66.129.224.36]) (using TLSv1) by exprod7ob109.postini.com ([64.18.6.12]) with SMTP ID DSNKTwdC3gcQ+GD7MU5I7XtYGrsetQD+p4p8@postini.com; Fri, 06 Jan 2012 10:52:16 PST
Received: from p-emfe02-wf.jnpr.net (172.28.145.25) by P-EMHUB03-HQ.jnpr.net (172.24.192.37) with Microsoft SMTP Server (TLS) id 8.3.213.0; Fri, 6 Jan 2012 10:50:44 -0800
Received: from EMBX01-WF.jnpr.net ([fe80::1914:3299:33d9:e43b]) by p-emfe02-wf.jnpr.net ([fe80::c126:c633:d2dc:8090%11]) with mapi; Fri, 6 Jan 2012 13:50:43 -0500
From: Ronald Bonica <rbonica@juniper.net>
To: "dc@ietf.org" <dc@ietf.org>
Date: Fri, 6 Jan 2012 13:50:39 -0500
Thread-Topic: DC Work Plan
Thread-Index: AczMpBVlzok2urS5RMGeXcLx/H8AYQ==
Message-ID: <13205C286662DE4387D9AF3AC30EF456D74EEEEF99@EMBX01-WF.jnpr.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-cr-hashedpuzzle: AaNC ChqM CiAy EA90 EYT+ E2aw FMTm F6GA HJ+1 HTK4 ILnJ Iyc1 I8wK JBVt JTzO KZc0; 1; ZABjAEAAaQBlAHQAZgAuAG8AcgBnAA==; Sosha1_v1; 7; {7400BC53-E72D-4D40-96B2-290F3C60D150}; cgBiAG8AbgBpAGMAYQBAAGoAdQBuAGkAcABlAHIALgBuAGUAdAA=; Fri, 06 Jan 2012 18:50:39 GMT;RABDACAAVwBvAHIAawAgAFAAbABhAG4A
x-cr-puzzleid: {7400BC53-E72D-4D40-96B2-290F3C60D150}
acceptlanguage: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Subject: [dc] DC Work Plan
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 06 Jan 2012 18:52:17 -0000

Folks,

The goal of this mailing list and the interim meeting which it supports is
to articulate data center requirements. While there has been a lively
exchange on the mailing list, we are not significantly closer to our goal
than we were a month ago. Therefore, I would like to recommend the
following work plan:

1. adopt draft-dalela-dc-requirements as a very rough baseline

2. Rework Section 4
2.1 Rename Section 4 to "Data Center Characteristics"
2.2 Initiate a mailing list thread (Subject: DCREQ Section 4 Outline) to
determine whether Sections 4.1, 4.2 and 4.3 represent the salient
characteristics of the data center. Should section headers be added,
changed or deleted?
2.3 Form a design team (or design teams) that will present proposed
content for each subsection

3. Rework Section 5
3.1 Initiate a mailing list thread (Subject: DCREQ Section 5 Outline) to
determine whether Sections 5.1 - 5.10 represent the salient components of
the data center problem space. Should section headers be added, changed
or deleted?
3.2 Form a design team (or design teams) that will present proposed
content for each subsection

4. Discuss the presentations / form consensus

5. Generate new text for Sections 4 and 5

6. Submit to IESG for publication as INFORMATIONAL

Please comment on this work plan ASAP.


--------------------------
Ron Bonica
vcard:       www.bonica.org/ron/ronbonica.vcf



From adalela@cisco.com  Fri Jan  6 11:04:31 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id BE19921F8802 for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 11:04:31 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.432
X-Spam-Level: 
X-Spam-Status: No, score=-2.432 tagged_above=-999 required=5 tests=[AWL=0.167,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id f3fGQeZEhh4s for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 11:04:31 -0800 (PST)
Received: from bgl-iport-2.cisco.com (bgl-iport-2.cisco.com [72.163.197.26]) by ietfa.amsl.com (Postfix) with ESMTP id 89E1021F8801 for <dc@ietf.org>; Fri,  6 Jan 2012 11:04:30 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=2194; q=dns/txt; s=iport; t=1325876670; x=1327086270; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to; bh=odCJrqfNlhx/i2+YnJP5K6uOla2yMbVVf3TKGEYL1wQ=; b=AgdeIGJ3FHxecmm9sObqze0Dcc4vzbhptq+L1Cc2bris5BxOVcl81+E6 HVoWzXp/keKh4qR6oVW15oX+2SOfSUbN6zgeTkaLQoTmtuGBKssrEF5dH lDBx4g4epffSwBGjBwVZBw7D6miCcuTCUEHVaEdHp6N1GEB38ox9jVNIN c=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: Ap8EANJEB09Io8UY/2dsb2JhbABDrUaBcgEBAQQBAQEPAR0KDyUXBAIBCA4DBAEBCwYXAQYBJh8JCAEBBAEKCAgBGYdgmBIBnh6LLmMEiDefDQ
X-IronPort-AV: E=Sophos;i="4.71,469,1320624000";  d="scan'208";a="2934882"
Received: from vla196-nat.cisco.com (HELO bgl-core-1.cisco.com) ([72.163.197.24]) by bgl-iport-2.cisco.com with ESMTP; 06 Jan 2012 19:04:28 +0000
Received: from xbh-bgl-412.cisco.com (xbh-bgl-412.cisco.com [72.163.129.202]) by bgl-core-1.cisco.com (8.14.3/8.14.3) with ESMTP id q06J4S0d025765; Fri, 6 Jan 2012 19:04:28 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-412.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Sat, 7 Jan 2012 00:34:28 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Sat, 7 Jan 2012 00:34:26 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B25EF8@XMB-BGL-416.cisco.com>
In-Reply-To: <13205C286662DE4387D9AF3AC30EF456D74EEEEF99@EMBX01-WF.jnpr.net>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] DC Work Plan
Thread-Index: AczMpBVlzok2urS5RMGeXcLx/H8AYQAAUqlQ
References: <13205C286662DE4387D9AF3AC30EF456D74EEEEF99@EMBX01-WF.jnpr.net>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Ronald Bonica" <rbonica@juniper.net>, <dc@ietf.org>
X-OriginalArrivalTime: 06 Jan 2012 19:04:28.0753 (UTC) FILETIME=[03D06C10:01CCCCA6]
Subject: Re: [dc] DC Work Plan
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 06 Jan 2012 19:04:31 -0000

Ron,

I was working on modifying the current draft to incorporate the several
comments received over email. But that can wait if you want to start a
more structured approach right away. On the other hand, I could
incorporate the various comments / clarifications already received,
submit version 01, and then take it forward through the steps already
mentioned below. I am fine with either approach.

Thanks, Ashish


-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
Ronald Bonica
Sent: Saturday, January 07, 2012 12:21 AM
To: dc@ietf.org
Subject: [dc] DC Work Plan

Folks,

The goal of this mailing list and the interim meeting which it supports
is to articulate data center requirements. While there has been a lively
exchange on the mailing list, we are not significantly closer to our
goal than we were a month ago. Therefore, I would like to recommend the
following work plan:

1. adopt draft-dalela-dc-requirements as a very rough baseline

2. Rework Section 4
2.1 Rename Section 4 to "Data Center Characteristics"
2.2 Initiate a mailing list thread (Subject: DCREQ Section 4 Outline) to
determine whether Sections 4.1, 4.2 and 4.3 represent the salient
characteristics of the data center. Should section headers be added,
changed or deleted?
2.3 Form a design team (or design teams) that will present proposed
content for each subsection

3. Rework Section 5
3.1 Initiate a mailing list thread (Subject: DCREQ Section 5 Outline) to
determine whether Sections 5.1 - 5.10 represent the salient components
of the data center problem space. Should section headers be added,
changed or deleted?
3.2 Form a design team (or design teams) that will present proposed
content for each subsection

4. Discuss the presentations / form consensus

5. Generate new text for Sections 4 and 5

6. Submit to IESG for publication as INFORMATIONAL

Please comment on this work plan ASAP.


--------------------------
Ron Bonica
vcard:       www.bonica.org/ron/ronbonica.vcf


_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From robert@raszuk.net  Fri Jan  6 11:16:33 2012
Return-Path: <robert@raszuk.net>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 11E9021F8820 for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 11:16:33 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.599
X-Spam-Level: 
X-Spam-Status: No, score=-2.599 tagged_above=-999 required=5 tests=[BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id N0awohsgVDeO for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 11:16:32 -0800 (PST)
Received: from mail1310.opentransfer.com (mail1310.opentransfer.com [76.162.254.103]) by ietfa.amsl.com (Postfix) with ESMTP id 1B6E721F8789 for <dc@ietf.org>; Fri,  6 Jan 2012 11:16:32 -0800 (PST)
Received: (qmail 8881 invoked by uid 399); 6 Jan 2012 19:16:30 -0000
Received: from unknown (HELO ?192.168.1.57?) (83.28.249.72) by mail1310.opentransfer.com with ESMTP; 6 Jan 2012 19:16:30 -0000
X-Originating-IP: 83.28.249.72
Message-ID: <4F07488E.2070103@raszuk.net>
Date: Fri, 06 Jan 2012 20:16:30 +0100
From: Robert Raszuk <robert@raszuk.net>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:9.0) Gecko/20111222 Thunderbird/9.0.1
MIME-Version: 1.0
To: dc@ietf.org
References: <13205C286662DE4387D9AF3AC30EF456D74EEEEF99@EMBX01-WF.jnpr.net>
In-Reply-To: <13205C286662DE4387D9AF3AC30EF456D74EEEEF99@EMBX01-WF.jnpr.net>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Subject: Re: [dc] DC Work Plan
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
Reply-To: robert@raszuk.net
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 06 Jan 2012 19:16:33 -0000

Ron,

I don't see what there is to adopt if you recommend reworking Sections 4
and 5, which are pretty much 90% of this document.

IMHO, adoption should be considered after Sections 4 and 5 are
reworked.

Thx,
R.

> Folks,
>
> The goal of this mailing list and the interim meeting which it supports is to articulate data center requirements. While there has been a lively exchange on the mailing list, we are not significantly closer to our goal than we were a month ago. Therefore, I would like to recommend the following work plan:
>
> 1. adopt draft-dalela-dc-requirements as a very rough baseline
>
> 2. Rework Section 4
> 2.1 Rename Section 4 to "Data Center Characteristics"
> 2.2 Initiate a mailing list thread (Subject: DCREQ Section 4 Outline) to determine whether Sections 4.1, 4.2 and 4.3 represent the salient characteristics of the data center. Should section headers be added, changed or deleted?
> 2.3 Form a design team (or design teams) that will present proposed content for each subsection
>
> 3. Rework Section 5
> 3.1 Initiate a mailing list thread (Subject: DCREQ Section 5 Outline) to determine whether Sections 5.1 - 5.10 represent the salient components of the data center problem space. Should section headers be added, changed or deleted?
> 3.2 Form a design team (or design teams) that will present proposed content for each subsection
>
> 4. Discuss the presentations / form consensus
>
> 5. Generate new text for Sections 4 and 5
>
> 6. Submit to IESG for publication as INFORMATIONAL
>
> Please comment on this work plan ASAP.
>
>
> --------------------------
> Ron Bonica
> vcard:       www.bonica.org/ron/ronbonica.vcf
>
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc
>
>


From vumip1@gmail.com  Fri Jan  6 11:19:06 2012
Return-Path: <vumip1@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id E083221F87F4 for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 11:19:06 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.156
X-Spam-Level: 
X-Spam-Status: No, score=-3.156 tagged_above=-999 required=5 tests=[AWL=-0.158, BAYES_00=-2.599, HTML_MESSAGE=0.001, J_CHICKENPOX_32=0.6, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id AMtGG3esCTlN for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 11:19:06 -0800 (PST)
Received: from mail-iy0-f172.google.com (mail-iy0-f172.google.com [209.85.210.172]) by ietfa.amsl.com (Postfix) with ESMTP id CE93321F8789 for <dc@ietf.org>; Fri,  6 Jan 2012 11:19:05 -0800 (PST)
Received: by iabz21 with SMTP id z21so3347611iab.31 for <dc@ietf.org>; Fri, 06 Jan 2012 11:19:05 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=aSRIac8/YC2BHQD59OWbJHKBxP3qxPMAc0E2xE5auh8=; b=FY9rCjA5o7PhKc0E1D3fRBUWRg+wQjOqDfMLJRyiwCvraMZqmleJLy8phcmDyMenDY elsqtBvVWy77wRQ0GobXo5HQWCFFAdZ4bovyRFDBWF+1mjdP9eUfK2swmvEz8joPoRaB X/8Yt/E1/qJpLqzamdJIXXh2rZVQpahbpoel0=
MIME-Version: 1.0
Received: by 10.42.168.135 with SMTP id w7mr6986167icy.9.1325877545448; Fri, 06 Jan 2012 11:19:05 -0800 (PST)
Received: by 10.50.77.197 with HTTP; Fri, 6 Jan 2012 11:19:05 -0800 (PST)
In-Reply-To: <201201032055.q03KtgnA016017@cichlid.raleigh.ibm.com>
References: <13205C286662DE4387D9AF3AC30EF456D74E7BD952@EMBX01-WF.jnpr.net> <6665BC1FEA04AB47B1F75FA641C43BC0926D106C@FHDP1LUMXC7V41.us.one.verizon.com> <201112211830.pBLIUiZA017188@cichlid.raleigh.ibm.com> <13205C286662DE4387D9AF3AC30EF456D74E91DBFD@EMBX01-WF.jnpr.net> <CANtnpwj3hCD4UbidDzG=4xChJOaQ1T8mLqQkDUWxoRZV1hjuYA@mail.gmail.com> <201112281650.pBSGo7Mn011365@cichlid.raleigh.ibm.com> <CANtnpwgKKh_6emFK2Gx_WfqU929UK3rzQmh1cuWxoJFGH6eHUw@mail.gmail.com> <2E742C02-F621-497D-AE06-6A91EEEBA498@cdl.asgaard.org> <201201032055.q03KtgnA016017@cichlid.raleigh.ibm.com>
Date: Fri, 6 Jan 2012 14:19:05 -0500
Message-ID: <CANtnpwiXAQjpCQqfodfqGVDuc+X4TcjVCdH-7grkdhWKZ7nvyQ@mail.gmail.com>
From: Bhumip Khasnabish <vumip1@gmail.com>
To: Thomas Narten <narten@us.ibm.com>
Content-Type: multipart/alternative; boundary=90e6ba6138461cb66504b5e0ecd4
Cc: Ronald Bonica <rbonica@juniper.net>, dc@ietf.org, Christopher LILJENSTOLPE <ietf@cdl.asgaard.org>, "So, Ning" <ning.so@verizon.com>
Subject: Re: [dc] Elevator Pitch (was: Scoping the Interim meeting)
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 06 Jan 2012 19:19:07 -0000

--90e6ba6138461cb66504b5e0ecd4
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: quoted-printable

Hello Thomas,

Thanks.

(1) Pls note that we have follow-up drafts
(VDCS, VPN4DC, VEPC, VPN-O-CS, VPN-O-DCS, VRM, security framework for VDCS,
etc.)
from different contributors
for many of the problems that are mentioned in the work items draft.
I am sure that you are familiar with at least a few of these drafts.

(2) Do you have any Agenda in mind for the interim mtg.?
I was just suggesting that we can look at the DataCenter
design/development/operations problems based on high-priority
use cases, and can explore which ones can possibly be solved using
the solutions that IETF can develop promptly...

(3) We can do the "teasing out problem areas for which the IETF .."
off-line and
can use the F2F mtg to finalize the work items and can discuss possible
approaches.

Once again, many thanks for your contributions.

Best.

Bhumip



On Tue, Jan 3, 2012 at 3:55 PM, Thomas Narten <narten@us.ibm.com> wrote:

> +1 to Chris' comments.
>
> Christopher LILJENSTOLPE <ietf@cdl.asgaard.org> writes:
>
> > Secondly, the question we should be asking is "What does the IETF
> >  NEED to do" not "What can the IETF do" We can do a great many
> >  things, the bulk of which will not be helpful or a good use of
> >  resources.
>
> Agree completely. If those advocating that the IETF "do work" cannot
> answer the above succinctly and in a way that fellow IETFers can
> understand, the only conclusion that can be drawn is that the IETF
> cannot (and should not) take any further action at this time.
>
> > > On 28Dec2011, at 17.40, Bhumip Khasnabish wrote:
> > > We need comments and suggestions from you and others to update this
> doc.
> > >
> > > We also have another draft covering potential work items
> > > (
> > >
> http://tools.ietf.org/html/draft-khasnabish-cloud-industry-workitems-survey-01
>
> I had a look at this document, and most if not all of the
> "requirements" or "work items" are very very high level, and it is not
> at all clear to me how they relate to IETF work.
>
> > > We can discuss these further in the interim mtg.
>
> I strongly disagree. See above. There needs to be much more prep work
> done on describing a specific technical or operational problem that
> the IETF would work on before there would be much use in having a f2f
> discussion. IMO.
>
> > I would like to propose a different approach, if I may.  If we took
> >  a focused set of problem statements and ran them through the
> >  following set of filters:
>
> > 1) Is this a current/real or near-to-mid term probable issue and is
> >  it substantial?
>
> > 2) If yes, is it being adequately covered by another SDO and is it
> >  in that SDO's domain?
>
> > 3) if no, is it in the domain of IETF competency?
>
> > 4) if yes, do we want to work on it?
>
> Agree completely. Again, this list should be devoted exclusively to
> teasing out problem areas for which the IETF would seem to be the
> right place to do specific work.
>
> Thomas
>
>


-- 
Best.

Bhumip Khasnabish
vumip1@gmail.com
bhumip.khasnabish@zteusa.com
+1-781-752-8003 (mobile)
http://tinyurl.com/bhumip

                   __o
             _ `\ <, _
.......... ( • ) / ( • ) ......................

--90e6ba6138461cb66504b5e0ecd4--

From bhedlund@force10networks.com  Fri Jan  6 11:58:31 2012
Return-Path: <bhedlund@force10networks.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id D227721F8735 for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 11:58:31 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.909
X-Spam-Level: 
X-Spam-Status: No, score=-2.909 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, FH_HOST_EQ_D_D_D_D=0.765, FH_HOST_EQ_D_D_D_DB=0.888, HELO_MISMATCH_COM=0.553, HOST_EQ_STATIC=1.172, HOST_MISMATCH_NET=0.311, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id CYgrej5Rdva2 for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 11:58:30 -0800 (PST)
Received: from mx.force10networks.com (64-186-164-204.static-ip.telepacific.net [64.186.164.204]) by ietfa.amsl.com (Postfix) with ESMTP id D1A5721F84AE for <dc@ietf.org>; Fri,  6 Jan 2012 11:58:30 -0800 (PST)
Received: from EX07-SJC-MBX1.force10networks.com ([fe80:0000:0000:0000:e980:0ee4:72.36.142.28]) by exch7-sjc-fe.force10networks.com ([10.11.0.87]) with mapi; Fri, 6 Jan 2012 11:58:30 -0800
From: Brad Hedlund <bhedlund@force10networks.com>
To: "Ashish Dalela (adalela)" <adalela@cisco.com>, Aldrin Isaac <aldrin.isaac@gmail.com>
Date: Fri, 6 Jan 2012 11:58:30 -0800
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczMJHkIMDhYiFQ0TFiF0gJ5cDLwJwABfVnwACB2rn4=
Message-ID: <F9947EE081154C47BA2D281D8F3FAE8E0371A21D4F@EX07-SJC-MBX1.force10networks.com>
References: <AF48CEB4-18A6-45CE-891B-ACFE599C8FB4@kumari.net> <27D5DF12-DA16-42C4-A33B-84EBFFFC3A45@gmail.com>, <618BE8B40039924EB9AED233D4A09C5102B25D23@XMB-BGL-416.cisco.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B25D23@XMB-BGL-416.cisco.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: multipart/alternative; boundary="_000_F9947EE081154C47BA2D281D8F3FAE8E0371A21D4FEX07SJCMBX1fo_"
MIME-Version: 1.0
Cc: Pedro Marques <pedro.r.marques@gmail.com>, "david.black@emc.com" <david.black@emc.com>, "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 06 Jan 2012 19:58:31 -0000

--_000_F9947EE081154C47BA2D281D8F3FAE8E0371A21D4FEX07SJCMBX1fo_
Content-Type: text/plain; charset="Windows-1252"
Content-Transfer-Encoding: quoted-printable

"So, if you have one tenant running OSPF, another running IS-IS, and yet
another running BGP, and they want to keep playing with their network
configuration, it makes sense to run these in the overlay or the VM
mode."

This didn't make much sense to me. Why would a customer of an overlay
model be managing or configuring a routing protocol?  The overlay
simplifies the customer's topology view to that of a single logical
segment.

Cheers,
Brad

________________________________
From: dc-bounces@ietf.org [dc-bounces@ietf.org] On Behalf Of Ashish Dalela (adalela) [adalela@cisco.com]
Sent: Thursday, January 05, 2012 11:03 PM
To: Aldrin Isaac
Cc: Pedro Marques; david.black@emc.com; dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect

Aldrin,

I like the way you say – “the only thing the network needs to know” –
:-) The same could be said of the hypervisor as well :-)

The problem is that once you start pushing the intelligence into the
server stack, you will have to keep pushing more and more – e.g.
firewalls, multicast, broadcast, packet inspection, flow mgmt, etc. You
will find that you are re-inventing on the overlay everything that
exists on the underlay. The challenges are stacked up against the
overlay, not the underlay.

You will also find that you need to communicate between the overlay and
the underlay to get the desired bandwidth, QoS, flow mgmt, multipathing,
and tree optimization, and that is never going to be easy. You will also
find that hardware-accelerated networks perform better, deliver the
requisite high availability, and consume less energy than doing the same
thing in the hypervisor.

Having said that, I recognize there are two models of virtualization
that we know of. The “overlay” model is like a network hypervisor in
which individual customers are like VMs. The “multiplexed” model is like
a multi-user OS. Both the multi-user OS and the multi-VM hypervisor
isolate equally well. But they have different use-cases.

The main use-case for the VM hypervisor model is to multiplex multiple
different OS/tools/application “environments” onto the same HW, giving
each VM the ability to shut itself down, reboot, or do whatever it wants
to do. In the case of a network, that “environment” is a set of
protocols. So, if you have one tenant running OSPF, another running
IS-IS, and yet another running BGP, and they want to keep playing with
their network configuration, it makes sense to run these in the overlay
or the VM mode.

If everyone has the same environment (i.e. runs the same protocols and
expects common controls), it makes more sense to run them in the
multi-user (multiplexed) mode rather than the multi-VM (overlay) mode.
The multi-VM model delegates administration to the VM owner. The
multi-user model owns the administration while letting a “tenant” use
the network. This is a conscious choice – is the cloud going to open up
the configuration of the network the way it opens up the administration
of a VM? From what I know, the answer is “no”.
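
To sketch the contrast in code (purely illustrative; all the names here
are made up for this example, not from any draft): the multi-VM/overlay
model gives each tenant its own routing instance and protocol choice,
while the multiplexed model keeps a single operator-owned table in which
the tenant is just a key.

```python
# Purely illustrative sketch of the two models above; all names are
# made up for this example.

# Multi-VM ("overlay") model: each tenant owns an independent routing
# instance and picks its own protocol (OSPF, IS-IS, BGP, ...).
class TenantRouter:
    def __init__(self, protocol):
        self.protocol = protocol   # chosen and administered by the tenant
        self.routes = {}           # tenant-private RIB

    def install(self, prefix, next_hop):
        self.routes[prefix] = next_hop

overlay_model = {
    "tenant-a": TenantRouter("ospf"),
    "tenant-b": TenantRouter("isis"),
}
overlay_model["tenant-a"].install("10.0.0.0/24", "vm-gw-1")

# Multi-user ("multiplexed") model: one operator-administered table;
# a tenant is just a key, like a user on a multi-user OS.
multiplexed_model = {}             # (tenant, prefix) -> next hop

def operator_install(tenant, prefix, next_hop):
    multiplexed_model[(tenant, prefix)] = next_hop

operator_install("tenant-a", "10.0.0.0/24", "leaf-1")
operator_install("tenant-b", "10.0.0.0/24", "leaf-2")  # same prefix, isolated by key
```

The first shape is what you delegate to the tenant; the second is what
the operator keeps for itself.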

These architectural models exist outside the networking domain, and I
would reach out into another domain to borrow the intuitions, use-cases,
and challenges. Architecturally, you will find the same challenges in
managing the overlay model as exist in the multi-VM model. If there are
no benefits to be gained from that, you are better off in the
multiplexed mode.

But then again – beauty lies in the eyes of the beholder.

Thanks, Ashish



From: Aldrin Isaac [mailto:aldrin.isaac@gmail.com]
Sent: Friday, January 06, 2012 9:07 AM
Cc: Ashish Dalela (adalela); Pedro Marques; david.black@emc.com; dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect



The only thing that the network needs to know is the routes to the
hypervisors / physical machines -- this is a solved problem.
The VM addresses and routes are only visible to the [gateways,
hypervisors with VMs in that overlay, other VMs in the same overlay,
mapping server].

For a really old overview:
http://tools.ietf.org/html/draft-wkumari-dcops-l3-vmmobility-00
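
As a rough sketch of that split (illustrative only; the mapping-table
shape and names here are made up, not from the draft): the underlay
routes only on hypervisor addresses, and a mapping service, visible to
the gateways and hypervisors in the overlay, resolves VM addresses per
overlay.

```python
# Illustrative sketch of the overlay forwarding split described above:
# the underlay knows only hypervisor routes; a mapping service resolves
# (overlay, VM address) -> hypervisor. All names are hypothetical.

# Mapping service state: (overlay_id, vm_ip) -> hypervisor_ip
MAPPING = {
    ("overlay-7", "10.1.0.5"): "192.0.2.11",
    ("overlay-7", "10.1.0.9"): "192.0.2.12",
}

def encapsulate(overlay_id, inner_packet, dst_vm_ip):
    """Wrap a tenant packet in an outer header routed by the underlay."""
    hypervisor = MAPPING.get((overlay_id, dst_vm_ip))
    if hypervisor is None:
        raise LookupError("destination VM not in this overlay's mapping")
    return {
        "outer_dst": hypervisor,   # the only address the underlay sees
        "overlay_id": overlay_id,  # demultiplexed at the far hypervisor
        "payload": inner_packet,
    }

frame = encapsulate("overlay-7", b"tenant traffic", "10.1.0.9")
# The underlay forwards on frame["outer_dst"] alone; VM addresses stay
# invisible outside the overlay.
```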

Is the only issue that needs to be resolved for overlays the number of
ARPs that need to be managed by the gateway?  Could this issue not be
resolved operationally by, say, having more gateways?

What other issues exist and need to be resolved for overlays, besides
deciding on a tunneling encapsulation?  Is the IETF expected to
standardize control protocols for overlays?

There is clearly a need for server-based virtual networks as well as a
need for scalable network-based Ethernet virtual networks for DC.
Shouldn't these be separate conversations?


--_000_F9947EE081154C47BA2D281D8F3FAE8E0371A21D4FEX07SJCMBX1fo_--

From rbonica@juniper.net  Fri Jan  6 12:06:57 2012
Return-Path: <rbonica@juniper.net>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 67B9D21F862B for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 12:06:57 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.525
X-Spam-Level: 
X-Spam-Status: No, score=-106.525 tagged_above=-999 required=5 tests=[AWL=0.074, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id y52DPtotAucp for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 12:06:56 -0800 (PST)
Received: from exprod7og111.obsmtp.com (exprod7og111.obsmtp.com [64.18.2.175]) by ietfa.amsl.com (Postfix) with ESMTP id 4869821F8643 for <dc@ietf.org>; Fri,  6 Jan 2012 12:06:55 -0800 (PST)
Received: from P-EMHUB03-HQ.jnpr.net ([66.129.224.36]) (using TLSv1) by exprod7ob111.postini.com ([64.18.6.12]) with SMTP ID DSNKTwdUXhywksPDKdNOuXe/qwiTJKdnwUCj@postini.com; Fri, 06 Jan 2012 12:06:56 PST
Received: from p-emfe02-wf.jnpr.net (172.28.145.25) by P-EMHUB03-HQ.jnpr.net (172.24.192.37) with Microsoft SMTP Server (TLS) id 8.3.213.0; Fri, 6 Jan 2012 12:06:28 -0800
Received: from EMBX01-WF.jnpr.net ([fe80::1914:3299:33d9:e43b]) by p-emfe02-wf.jnpr.net ([fe80::c126:c633:d2dc:8090%11]) with mapi; Fri, 6 Jan 2012 15:06:28 -0500
From: Ronald Bonica <rbonica@juniper.net>
To: "robert@raszuk.net" <robert@raszuk.net>, "dc@ietf.org" <dc@ietf.org>
Date: Fri, 6 Jan 2012 15:06:26 -0500
Thread-Topic: [dc] DC Work Plan
Thread-Index: AczMp7ZidhzlSe1PTmOAfrLSiMnFGwABceDA
Message-ID: <13205C286662DE4387D9AF3AC30EF456D74EEEF153@EMBX01-WF.jnpr.net>
References: <13205C286662DE4387D9AF3AC30EF456D74EEEEF99@EMBX01-WF.jnpr.net> <4F07488E.2070103@raszuk.net>
In-Reply-To: <4F07488E.2070103@raszuk.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Subject: Re: [dc] DC Work Plan
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 06 Jan 2012 20:06:57 -0000

Robert,

draft-dalela-dc-requirements provides scaffolding from which we can
work. Specifically, we can adopt its approach of dedicating one section
to a description of data center characteristics and another section to
problem space components. All other discussion is out of bounds.

Considering how unstructured our discussion has been to date, a little
scaffolding might be helpful.

                                                     Ron


> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> Robert Raszuk
> Sent: Friday, January 06, 2012 2:17 PM
> To: dc@ietf.org
> Subject: Re: [dc] DC Work Plan
>
> Ron,
>
> I don't see what there is to adopt if you recommend reworking Section 4
> and Section 5, which are pretty much 90% of this document.
>
> IMHO adoption should be considered after Sections 4 and 5 are
> reworked.
>
> Thx,
> R.
>
> > Folks,
> >
> > The goal of this mailing list and the interim meeting which it
> supports is to articulate data center requirements. While there has
> been a lively exchange on the mailing list, we are not significantly
> closer to our goal than we were a month ago. Therefore, I would like to
> recommend the following work plan:
> >
> > 1. adopt draft-dalela-dc-requirements as a very rough baseline
> >
> > 2. Rework Section 4
> > 2.1 Rename Section 4 to "Data Center Characteristics"
> > 2.2 Initiate a mailing list thread (Subject: DCREQ Section 4 Outline)
> to determine whether Sections 4.1, 4.2 and 4.3 represent the salient
> characteristics of the data center. Should section headers be added,
> changed or deleted?
> > 2.3 Form a design team (or design teams) that will present proposed
> content for each subsection
> >
> > 3. Rework Section 5
> > 3.1 Initiate a mailing list thread (Subject: DCREQ Section 5 Outline)
> to determine whether Sections 5.1 - 5.10 represent the salient
> components of the data center problem space. Should section headers be
> added, changed or deleted?
> > 3.2 Form a design team (or design teams) that will present proposed
> content for each subsection
> >
> > 4. Discuss the presentations / form consensus
> >
> > 5. Generate new text for Sections 4 and 5
> >
> > 6. Submit to IESG for publication as INFORMATIONAL
> >
> > Please comment on this work plan ASAP.
> >
> >
> > --------------------------
> > Ron Bonica
> > vcard:       www.bonica.org/ron/ronbonica.vcf
> >
> >
> > _______________________________________________
> > dc mailing list
> > dc@ietf.org
> > https://www.ietf.org/mailman/listinfo/dc
> >
> >
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc

From mphmmr@gmail.com  Fri Jan  6 12:48:26 2012
Return-Path: <mphmmr@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 4F82821F877C for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 12:48:26 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: 0.106
X-Spam-Level: 
X-Spam-Status: No, score=0.106 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, CN_BODY_35=0.339, HTML_MESSAGE=0.001, J_CHICKENPOX_13=0.6, MIME_CHARSET_FARAWAY=2.45, RCVD_IN_DNSWL_LOW=-1, SARE_MILLIONSOF=0.315]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id hy7UHphzl6nb for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 12:48:25 -0800 (PST)
Received: from mail-wi0-f172.google.com (mail-wi0-f172.google.com [209.85.212.172]) by ietfa.amsl.com (Postfix) with ESMTP id AFC9021F8716 for <dc@ietf.org>; Fri,  6 Jan 2012 12:48:24 -0800 (PST)
Received: by wibhj6 with SMTP id hj6so1740862wib.31 for <dc@ietf.org>; Fri, 06 Jan 2012 12:48:24 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=iZKiHg5Z4W0rtoTIZe9lXCV/lus3LqrT0SzRWmfjDHg=; b=tJvTuYhdnx07R5xz8GbROWCD5pOv0hmoq0ng02RYmjlFQGXeX/AgcF2YCdqX4ix6R/ PXpu2ugdcyxN9iOaJmrt0hXqQVgG4m+j/cVIk8CbyOIarLmXeg6wLWJJK2r8pSyS+reW PENxPITKfOdErsSqi7dRmbZc2Z8jHSliE5dD4=
MIME-Version: 1.0
Received: by 10.180.76.233 with SMTP id n9mr9760288wiw.14.1325882903806; Fri, 06 Jan 2012 12:48:23 -0800 (PST)
Received: by 10.216.132.90 with HTTP; Fri, 6 Jan 2012 12:48:23 -0800 (PST)
In-Reply-To: <82D68180-B0A1-48B4-B1A5-5115D55AE2BD@asgaard.org>
References: <13205C286662DE4387D9AF3AC30EF456D74E7BD952@EMBX01-WF.jnpr.net> <6665BC1FEA04AB47B1F75FA641C43BC0926D106C@FHDP1LUMXC7V41.us.one.verizon.com> <201112211830.pBLIUiZA017188@cichlid.raleigh.ibm.com> <13205C286662DE4387D9AF3AC30EF456D74E91DBFD@EMBX01-WF.jnpr.net> <618BE8B40039924EB9AED233D4A09C5102A4881E@XMB-BGL-416.cisco.com> <4EF7B019.3030202@riw.us> <201112281700.pBSH0kB2011575@cichlid.raleigh.ibm.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE76387F@szxeml525-mbs.china.huawei.com> <229A8E99-6EB2-49E5-B530-BA0F6C7C40AC@asgaard.org> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE7639B3@szxeml525-mbs.china.huawei.com> <40F3FB9C-CBCB-41ED-A1E7-FB99DB3A928D@asgaard.org> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE764022@szxeml525-mbs.china.huawei.com> <82D68180-B0A1-48B4-B1A5-5115D55AE2BD@asgaard.org>
Date: Fri, 6 Jan 2012 15:48:23 -0500
Message-ID: <CAA3wLqUm2exyn2h08t-L_rWA8-436gb3i46r=KDAYMnQfOKsVg@mail.gmail.com>
From: Michael Hammer <mphmmr@gmail.com>
To: Christopher LILJENSTOLPE <cdl@asgaard.org>
Content-Type: multipart/alternative; boundary=f46d043c7cf67ec18d04b5e22b70
Cc: Thomas Narten <narten@us.ibm.com>, Russ White <russw@riw.us>, Xuxiaohu <xuxiaohu@huawei.com>, "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] Elevator Pitch
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 06 Jan 2012 20:48:26 -0000

--f46d043c7cf67ec18d04b5e22b70
Content-Type: text/plain; charset=GB2312
Content-Transfer-Encoding: quoted-printable

Christopher,

Do you assume that the only FW applications are at the top of the tree
and serve all customers?
Or could there be customer-centric FW apps that demarcate the boundary
between two tenants, that would also be on VMs within the DC and
globally addressable?
Would such traffic need to take other than an optimal route between VMs
inside the DC?

Trying to picture the anchor points for routing.

Mike


On Fri, Jan 6, 2012 at 12:20 PM, Christopher LILJENSTOLPE
<cdl@asgaard.org> wrote:

> Greetings - in-line,
>
> On 03Jan2012, at 19.11, Xuxiaohu wrote:
>
> >
> >> -----Original Message-----
> >> From: Christopher LILJENSTOLPE [mailto:cdl@asgaard.org]
> >> Sent: January 4, 2012, 1:13
> >> To: Xuxiaohu
> >> Cc: Thomas Narten; Russ White; dc@ietf.org
> >> Subject: Re: [dc] Elevator Pitch
> >>
> >> Greetings,
> >>
> >>
> >> On 29Dec2011, at 20.01, Xuxiaohu wrote:
> >>
> >>>
> >>>> -----Original Message-----
> >>>> From: Christopher LILJENSTOLPE [mailto:cdl@asgaard.org]
> >>>> Sent: December 30, 2011, 1:20
> >>>> To: Xuxiaohu
> >>>> Cc: Thomas Narten; Russ White; dc@ietf.org
> >>>> Subject: Re: [dc] Elevator Pitch
> >>>>
> >>>> Greetings Xuxiaohu,
> >>>>
> >>>> On 29Dec2011, at 00.55, Xuxiaohu wrote:
> >>>>
> >>>>> Hi Thomas,
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf
> >>>>>> Of Thomas Narten
> >>>>>> Sent: December 29, 2011, 1:01
> >>>>>> To: Russ White
> >>>>>> Cc: dc@ietf.org
> >>>>>> Subject: Re: [dc] Elevator Pitch
> >>>>>>
> >> <snip>
> >>
> >>> Hi Chris,
> >>>
> >>> Would you please give a concrete example where the communication
> >>> between different tenants is very common in the multi-tenant cloud
> >>> data center?
> >>
> >> Here's the question - if you are a tenant in a data center, and you
> >> are writing a mash-up application against some other content
> >> provider, how do you know if they are in the same data center, or
> >> not?  My guess is that there is quite a bit of traffic between
> >> tenants of EC2, btw.  I know that that was the intent at my last gig
> >> - we wanted SaaS-like providers to live in our DCs and develop an
> >> eco-system around our core services.  Other major cross-DC traffic -
> >> how about all of my services like spam filtering, backup, etc.?  In
> >> a DC, I may call them "core services" but they are, in fact, another
> >> tenant.  How about large-scale content providers that mash up
> >> between their own offerings?  Many of those properties are viewed as
> >> "separate customers" by the infrastructure teams (can't name names
> >> here).  Any inter-offering mash-ups would definitely be cross-DC.
> >
> > Hi Chris,
> >
> > I have no doubt about the possibility you mentioned above. However,
> > once you consider optimizing the forwarding path of inter-tenant
> > traffic within the scope of L2VPN or L3VPN solutions, while taking
> > address-space overlap, firewall policy issues, etc. into account, you
> > will find it is a much more complex job. Whether or not it is
> > worthwhile to do that optimization depends heavily on whether the
> > volume of inter-tenant traffic is significant.
>
> This is one that, if we miss, will come back and bite us.  If you have
> no way of getting traffic between "tenants" in a data centre, I have NO
> idea how you will deal with shared services (such as storage, backup,
> transit, etc.) as well as the classical inter-tenant problem.  If the
> proposal is to just send it up the tree (as is done today), you will
> continue to have the same issues as we do now (which are driving all
> this work).  If you doubt this, ask yourself why there is a drive for
> greater cross-sectional bandwidth in the data center.  If you believe
> that the classical tree solution will work, then we are all chasing our
> tails here.
>
>        Chris
>
> >
> > Best regards,
> > Xiaohu
> >
> >>> Best regards,
> >>> Xiaohu
> >>>
> >>>>>
> >>>>>
> >>>>>> Or will they want an alternative approach?
> >>>>>>
> >>>>>>> 2. Why does this mobility need to be at layer 2 specifically?
> >>>>>>> Are we assuming DDNS and other sorts of solutions in this space
> >>>>>>> will simply never be fast enough/scale far enough/etc?
> >>>>>>
> >>>>>> Like it or not, the key requirement for VM mobility is that the
> >>>>>> VM's IP address does not change. That means the VM can't really
> >>>>>> move from one IP subnet to another. That means either moving to
> >>>>>> bigger and bigger L2s (all under one IP subnet) as the DC
> >>>>>> expands or the need to inject /32 host routes.
> >>>>>
> >>>>> In the DCI scenario, where the PE function is usually performed
> >>>>> at the aggregation switches or even core switches, the PE routers
> >>>>> would need a much larger forwarding table. If a routing table
> >>>>> containing millions of entries, which is available on most of
> >>>>> today's high-end routers, were still not large enough, on-demand
> >>>>> FIB installation or on-demand route announcement mechanisms can
> >>>>> be used to scale the solution further. Note that the trigger for
> >>>>> the FIB installation or route announcement is ARP request packets
> >>>>> rather than data packets. Hence it will not cause the so-called
> >>>>> initial packet loss or latency issue.
> >>>>>
> >>>>>> Neither of those approaches seems particularly
> >>>>>> scalable/desirable if you look 10 years down the road and think
> >>>>>> of 1M+ physical machines in a DC.
> >>>>>
> >>>>> Maybe we should also take the development speed of
> >>>>> routing/switching chip and CPU technologies into account :)
> >>>>
> >>>> It's more a question of cost/performance on off-chip memory/TCAMs.
> >>>> That is a slightly different curve :)
> >>>>
> >>>>    Chris
> >>>>
> >>>>>
> >>>>> Best regards,
> >>>>> Xiaohu
> >>>>>
> >>>>>> Thomas
> >>>>>>
> >>>>>> _______________________________________________
> >>>>>> dc mailing list
> >>>>>> dc@ietf.org
> >>>>>> https://www.ietf.org/mailman/listinfo/dc
> >>>>> _______________________________________________
> >>>>> dc mailing list
> >>>>> dc@ietf.org
> >>>>> https://www.ietf.org/mailman/listinfo/dc
> >>>>
> >>>> --
> >>>> =C0=EE=BF=C2=EE=A3
> >>>> Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
> >>>> Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf
>
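The trade-off discussed in the trail above, keeping a VM's IP address stable by either growing one L2/subnet or injecting /32 host routes, with ARP requests (not data packets) triggering on-demand FIB installation, can be sketched as follows. This is a toy Python model; the `Fib` class, the `on_arp_request` hook, and the next-hop names are illustrative, not any router's actual API:

```python
import ipaddress

class Fib:
    """Toy FIB: longest-prefix match over installed routes."""
    def __init__(self):
        self.routes = {}  # ip_network -> next hop

    def install(self, prefix, nexthop):
        self.routes[ipaddress.ip_network(prefix)] = nexthop

    def lookup(self, addr):
        ip = ipaddress.ip_address(addr)
        best = None
        for prefix, nh in self.routes.items():
            if ip in prefix and (best is None or prefix.prefixlen > best[0].prefixlen):
                best = (prefix, nh)
        return best[1] if best else None

def on_arp_request(fib, target_ip, learned_nexthop):
    # On-demand installation: an ARP request for a moved VM (not a data
    # packet) triggers a /32 host route at the PE, so there is no
    # initial-packet loss window for data traffic.
    fib.install(target_ip + "/32", learned_nexthop)

fib = Fib()
fib.install("10.1.0.0/16", "agg-switch-A")   # the VM's home subnet

# VM 10.1.2.3 moves behind agg-switch-B without renumbering; until a
# host route exists, traffic still follows the covering subnet route:
assert fib.lookup("10.1.2.3") == "agg-switch-A"

on_arp_request(fib, "10.1.2.3", "agg-switch-B")
assert fib.lookup("10.1.2.3") == "agg-switch-B"  # /32 wins by longest match
```

The scaling concern in the thread is exactly the size of `fib.routes`: one /32 per mobile VM, versus one entry per subnet when mobility is handled inside a single large L2.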

--f46d043c7cf67ec18d04b5e22b70
Content-Type: text/plain; charset=GB2312
Content-Transfer-Encoding: quoted-printable

Christopher,

Do you assume that the only FW applications are at the top of the tree and
serve all customers? Or, could there be customer-centric FW apps that
demarcate the boundary between two tenants, that would also be on VMs within
the DC and globally addressable? Would such traffic need to take other than
an optimal route between VMs inside the DC?

Trying to picture the anchor points for routing.

Mike

On Fri, Jan 6, 2012 at 12:20 PM, Christopher LILJENSTOLPE <cdl@asgaard.org> wrote:

> Greetings - in-line,
>
> On 03Jan2012, at 19.11, Xuxiaohu wrote:
>
>>
>>> -----Original Message-----
>>> From: Christopher LILJENSTOLPE [mailto:cdl@asgaard.org]
>>> Sent: 4 January 2012 1:13
>>> To: Xuxiaohu
>>> Cc: Thomas Narten; Russ White; dc@ietf.org
>>> Subject: Re: [dc] Elevator Pitch
>>>
>>> Greetings,
>>>
>>> On 29Dec2011, at 20.01, Xuxiaohu wrote:
>>>
>>>>
>>>>> -----Original Message-----
>>>>> From: Christopher LILJENSTOLPE [mailto:cdl@asgaard.org]
>>>>> Sent: 30 December 2011 1:20
>>>>> To: Xuxiaohu
>>>>> Cc: Thomas Narten; Russ White; dc@ietf.org
>>>>> Subject: Re: [dc] Elevator Pitch
>>>>>
>>>>> Greetings Xuxiaohu,
>>>>>
>>>>> On 29Dec2011, at 00.55, Xuxiaohu wrote:
>>>>>
>>>>>> Hi Thomas,
>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Thomas Narten
>>>>>>> Sent: 29 December 2011 1:01
>>>>>>> To: Russ White
>>>>>>> Cc: dc@ietf.org
>>>>>>> Subject: Re: [dc] Elevator Pitch
>>>>>>>
>>> <snip>
>>>
>>>> Hi Chris,
>>>>
>>>> Would you please give a concrete example where communication between
>>>> different tenants is very common in the multi-tenant cloud data center?
>>>
>>> Here's the question - if you are a tenant in a data center, and you are
>>> writing a mash-up application against some other content provider, how do
>>> you know if they are in the same data center, or not?  My guess is that
>>> there is quite a bit of traffic between tenants of EC2, btw.  I know that
>>> that was the intent at my last gig - we wanted SaaS-like providers to live
>>> in our DCs and develop an eco-system around our core services.  Other
>>> major cross-DC traffic - how about all of my services like spam filtering,
>>> backup, etc.?  In a DC, I may call them "core services" but they are, in
>>> fact, another tenant.  How about large-scale content providers that mash
>>> up between their own offerings?  Many of those properties are viewed as
>>> "separate customers" by the infrastructure teams (can't name names here).
>>> Any inter-offering mash-ups would definitely be cross-DC.
>>
>> Hi Chris,
>>
>> I have no doubt about the possibility you mentioned above. However, once
>> you try to optimize the forwarding path of inter-tenant traffic within the
>> scope of L2VPN or L3VPN solutions, while taking address-space overlap,
>> firewall policy issues, etc. into account, you will find it is a much more
>> complex job. Whether that optimization is worthwhile depends heavily on
>> whether the volume of inter-tenant traffic is significant.
>
> This is one that, if we miss, will come back and bite us.  If you have no
> way of getting traffic between "tenants" in a data centre, I have NO idea
> how you will deal with shared services (such as storage, backup, transit,
> etc.) as well as the classical inter-tenant problem.  If the proposal is to
> just send it up the tree (as is done today), you will continue to have the
> same issues as we do now (which is what is driving all this work).  If you
> doubt this, ask yourself why there is a drive for greater cross-sectional
> bandwidth in the data center.  If you believe that the classical tree
> solution will work, then we are all chasing our tails here.
>
>        Chris
>
>>
>> Best regards,
>> Xiaohu
>
> --
> =C0=EE=BF=C2=EE=A3
> Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
> Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc

--f46d043c7cf67ec18d04b5e22b70--
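Chris's "send it up the tree" objection in the message above comes down to path stretch. A toy hop-count sketch (the 3-tier topology, the tier names, and the `hops` helper are assumptions for illustration, not from the thread):

```python
# Toy hop-count model of "sending it up the tree": if the only firewall
# (or other shared service) sits at the top of a 3-tier tree, even two
# tenants in the same rack pay the full tree depth in both directions.

TREE = ["server", "ToR", "agg", "core"]   # tiers, bottom to top

def hops(src_tier, dst_tier, meet_tier):
    """Links traversed going up from src to the meeting tier, then down to dst."""
    up = TREE.index(meet_tier) - TREE.index(src_tier)
    down = TREE.index(meet_tier) - TREE.index(dst_tier)
    return up + down

same_rack_direct = hops("server", "server", "ToR")     # tenant-local anchor
same_rack_via_core = hops("server", "server", "core")  # firewall at top of tree

print(same_rack_direct, same_rack_via_core)  # 2 6
```

Three times the links per flow for same-rack inter-tenant traffic is one way to read the drive for greater cross-sectional bandwidth: hairpinning through the core turns east-west traffic into north-south load.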

From linda.dunbar@huawei.com  Fri Jan  6 13:49:44 2012
Return-Path: <linda.dunbar@huawei.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 5606921F85AE for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 13:49:44 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.567
X-Spam-Level: 
X-Spam-Status: No, score=-2.567 tagged_above=-999 required=5 tests=[AWL=0.031,  BAYES_00=-2.599, HTML_MESSAGE=0.001]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id YryXv6nJ1c4l for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 13:49:41 -0800 (PST)
Received: from dfwrgout.huawei.com (dfwrgout.huawei.com [206.16.17.72]) by ietfa.amsl.com (Postfix) with ESMTP id C2D1621F858D for <dc@ietf.org>; Fri,  6 Jan 2012 13:49:41 -0800 (PST)
Received: from 172.18.9.243 (EHLO dfweml202-edg.china.huawei.com) ([172.18.9.243]) by dfwrg02-dlp.huawei.com (MOS 4.2.3-GA FastPath) with ESMTP id ACD70371; Fri, 06 Jan 2012 16:49:41 -0500 (EST)
Received: from DFWEML403-HUB.china.huawei.com (10.193.5.151) by dfweml202-edg.china.huawei.com (172.18.9.108) with Microsoft SMTP Server (TLS) id 14.1.323.3; Fri, 6 Jan 2012 13:48:31 -0800
Received: from DFWEML505-MBX.china.huawei.com ([10.124.31.100]) by dfweml403-hub.china.huawei.com ([10.193.5.151]) with mapi id 14.01.0323.003; Fri, 6 Jan 2012 13:48:23 -0800
From: Linda Dunbar <linda.dunbar@huawei.com>
To: Ronald Bonica <rbonica@juniper.net>, "dc@ietf.org" <dc@ietf.org>
Thread-Topic: DC Work Plan
Thread-Index: AczMpBVlzok2urS5RMGeXcLx/H8AYQAEo8Gw
Date: Fri, 6 Jan 2012 21:48:21 +0000
Message-ID: <4A95BA014132FF49AE685FAB4B9F17F62A4E65BD@dfweml505-mbx>
References: <13205C286662DE4387D9AF3AC30EF456D74EEEEF99@EMBX01-WF.jnpr.net>
In-Reply-To: <13205C286662DE4387D9AF3AC30EF456D74EEEEF99@EMBX01-WF.jnpr.net>
Accept-Language: en-US, zh-CN
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.192.11.97]
Content-Type: multipart/alternative; boundary="_000_4A95BA014132FF49AE685FAB4B9F17F62A4E65BDdfweml505mbx_"
MIME-Version: 1.0
X-CFilter-Loop: Reflected
Subject: Re: [dc] DC Work Plan
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 06 Jan 2012 21:49:44 -0000

--_000_4A95BA014132FF49AE685FAB4B9F17F62A4E65BDdfweml505mbx_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Ron,

It is a good idea to have a baseline to start with.


Section 4: "Data Center Characteristics" should at least add the following sub-sections:
*       A section on how the location of load-balancing, firewall, and other middleware boxes determines where VMs/hosts in different segments can be aggregated/exchanged.
*       A section on how applications are instantiated in a data center. Application servers (e.g. Oracle's middleware WebLogic Server) instantiate multiple instances of one application, assign them IP addresses, and VM/server managers place them onto a server rack.
*       A section describing how backend data/storage is separated from the front-end service networks.

Section 5 of draft-dalela-dc-requirements needs a lot of changes.

*       Many statements in Section 5.1 "Basic Forwarding Problem" are not accurate. E.g. "the need to massively scale virtualized hosts" may break the FDB (or FIB) on switches/routers, but it doesn't really break the L2/L3 technologies traditionally used in the data center. The claim that L3 networks can't support host mobility is not true either: host-based routing can support mobility.
*       Section 5.1 is really about "basic forwarding for segmented networks".
*       Section 5.2 (DC Inter-connectivity Problem) is overloaded and should be broken into multiple (sub)sections:
o       Problems associated with the various networks used to interconnect data centers (VPN, straight IP, or straight optical)
o       Multi-data-center service interconnection and Layer 4-7 load-balancing issues: i.e. a description of how services need to be closer to clients, how services map to the locations of data centers, etc.
o       Middleware function association in the multi-data-center scenario
*       Section 5.3 "Multi-tenancy Problem" should at least include the issues associated with non-VLAN partitioning (e.g. IEEE 802.1ah's I-SID)
*       A new sub-section in Section 5 on traffic patterns dictated by policy and by the location of middleware boxes.
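The non-VLAN-partitioning point is largely about identifier-space size: 802.1Q VLAN IDs are 12 bits, while 802.1ah's service instance identifier (I-SID) is 24 bits. A minimal arithmetic sketch (reserved-value details simplified; the tenant count is an assumed example):

```python
# 12-bit VLAN IDs vs. the 24-bit I-SID of IEEE 802.1ah (PBB).
VLAN_ID_BITS = 12
ISID_BITS = 24

usable_vlans = 2**VLAN_ID_BITS - 2   # VLAN IDs 0 and 4095 are reserved
usable_isids = 2**ISID_BITS          # ~16M service instances (ignoring reserved values)

# A multi-tenant DC with one segment per tenant exhausts VLANs quickly:
tenants = 10_000
print(tenants <= usable_vlans)   # False: VLAN partitioning runs out at 4094
print(tenants <= usable_isids)   # True: I-SID-style partitioning does not
```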

Linda Dunbar


> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> Ronald Bonica
> Sent: Friday, January 06, 2012 12:51 PM
> To: dc@ietf.org
> Subject: [dc] DC Work Plan
>
> Folks,
>
> The goal of this mailing list and the interim meeting which it supports
> is to articulate data center requirements. While there has been a
> lively exchange on the mailing list, we are not significantly closer to
> our goal than we were a month ago. Therefore, I would like to recommend
> the following work plan:
>
> 1. adopt draft-dalela-dc-requirements as a very rough baseline
>
> 2. Rework Section 4
> 2.1 Rename Section 4 to "Data Center Characteristics"
> 2.2 Initiate a mailing list thread (Subject: DCREQ Section 4 Outline)
> to determine whether Sections 4.1, 4.2 and 4.3 represent the salient
> characteristics of the data center. Should section headers be added,
> changed or deleted?
> 2.3 Form a design team (or design teams) that will present proposed
> content for each subsection
>
> 3. Rework Section 5
> 3.1 Initiate a mailing list thread (Subject: DCREQ Section 5 Outline)
> to determine whether Sections 5.1 - 5.10 represent the salient
> components of the data center problem space. Should section headers be
> added, changed or deleted?
> 3.2 Form a design team (or design teams) that will present proposed
> content for each subsection
>
> 4. Discuss the presentations / form consensus
>
> 5. Generate new text for Sections 4 and 5
>
> 6. Submit to IESG for publication as INFORMATIONAL
>
> Please comment on this work plan ASAP.
>
>
> --------------------------
> Ron Bonica
> vcard:       www.bonica.org/ron/ronbonica.vcf
>
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc


--_000_4A95BA014132FF49AE685FAB4B9F17F62A4E65BDdfweml505mbx_--

From narten@us.ibm.com  Fri Jan  6 14:02:09 2012
Return-Path: <narten@us.ibm.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id F030521F8540 for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 14:02:09 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.587
X-Spam-Level: 
X-Spam-Status: No, score=-106.587 tagged_above=-999 required=5 tests=[AWL=0.012, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id UmsoQC7Gww5K for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 14:02:09 -0800 (PST)
Received: from e36.co.us.ibm.com (e36.co.us.ibm.com [32.97.110.154]) by ietfa.amsl.com (Postfix) with ESMTP id 2FC5021F853B for <dc@ietf.org>; Fri,  6 Jan 2012 14:02:09 -0800 (PST)
Received: from /spool/local by e36.co.us.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for <dc@ietf.org> from <narten@us.ibm.com>; Fri, 6 Jan 2012 15:02:07 -0700
Received: from d03relay02.boulder.ibm.com (9.17.195.227) by e36.co.us.ibm.com (192.168.1.136) with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted;  Fri, 6 Jan 2012 15:01:38 -0700
Received: from d03av02.boulder.ibm.com (d03av02.boulder.ibm.com [9.17.195.168]) by d03relay02.boulder.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id q06M1Ptf166998 for <dc@ietf.org>; Fri, 6 Jan 2012 15:01:33 -0700
Received: from d03av02.boulder.ibm.com (loopback [127.0.0.1]) by d03av02.boulder.ibm.com (8.14.4/8.13.1/NCO v10.0 AVout) with ESMTP id q06M1OGB007912 for <dc@ietf.org>; Fri, 6 Jan 2012 15:01:24 -0700
Received: from cichlid.raleigh.ibm.com (sig-9-65-211-102.mts.ibm.com [9.65.211.102]) by d03av02.boulder.ibm.com (8.14.4/8.13.1/NCO v10.0 AVin) with ESMTP id q06M129X004098 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Fri, 6 Jan 2012 15:01:03 -0700
Received: from cichlid.raleigh.ibm.com (localhost [127.0.0.1]) by cichlid.raleigh.ibm.com (8.14.5/8.12.5) with ESMTP id q06M0xkO008099; Fri, 6 Jan 2012 17:01:00 -0500
Message-Id: <201201062201.q06M0xkO008099@cichlid.raleigh.ibm.com>
To: Ronald Bonica <rbonica@juniper.net>
In-reply-to: <13205C286662DE4387D9AF3AC30EF456D74EEEF153@EMBX01-WF.jnpr.net>
References: <13205C286662DE4387D9AF3AC30EF456D74EEEEF99@EMBX01-WF.jnpr.net> <4F07488E.2070103@raszuk.net> <13205C286662DE4387D9AF3AC30EF456D74EEEF153@EMBX01-WF.jnpr.net>
Comments: In-reply-to Ronald Bonica <rbonica@juniper.net> message dated "Fri, 06 Jan 2012 15:06:26 -0500."
Date: Fri, 06 Jan 2012 17:00:59 -0500
From: Thomas Narten <narten@us.ibm.com>
x-cbid: 12010622-3352-0000-0000-000001C46ED1
Cc: "dc@ietf.org" <dc@ietf.org>, "robert@raszuk.net" <robert@raszuk.net>
Subject: Re: [dc] DC Work Plan
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 06 Jan 2012 22:02:10 -0000

Hi Ron.

> Considering how unstructured our discussion has been to date, a
>  little scaffolding might be helpful.

I agree.

But let me posit the following. Normally in the IETF, we have meetings
to discuss concrete work. I.e., you have to have existing documents to
justify the face-to-face meeting. And there has to be agreement that
there is enough "meat" in those documents, and that having a
face-to-face meeting will move the discussion forward. Think about the
requirements before a BOF is approved, for example.

Right now, we just don't have such a document (or documents). We have
a lot of documents, but no real agreement, even on general
areas. (Note: I'm specifically excluding NVO3 and SDN when I say this,
because I think both efforts can easily continue on their own as
separate efforts to create WGs -- but outside of those efforts, I'm
really struggling to understand what additional work the IETF is being
asked to do in the "DC area".)

There is real work to be done iterating on the problem areas, and that work
just hasn't happened yet.

Can such work happen before the interim? Sure. Maybe. But there are no
guarantees, and based on the discussions in the last month, there is
room for doubt.

The proposed dates previously mentioned were Feb 22-23. That is a mere
4 1/2 weeks from the Paris IETF. Assuming we had a successful interim
meeting, that wouldn't allow much time to revise documents and prep
for the Paris meeting.

So, given the work plan, the doubts about whether we can deliver on it
prior to an interim meeting, and the timing of the proposed meeting
relative to the next IETF, do we really need to have an interim
meeting at all?

Wouldn't it be good enough to attempt to execute on the plan, with an
expectation that there will be some sort of session in Paris (if
justified)?

What do others think?

Thomas


From linda.dunbar@huawei.com  Fri Jan  6 14:34:35 2012
Return-Path: <linda.dunbar@huawei.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 8B3B221F851D for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 14:34:35 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.573
X-Spam-Level: 
X-Spam-Status: No, score=-2.573 tagged_above=-999 required=5 tests=[AWL=0.026,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id Par40Eczzwwa for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 14:34:35 -0800 (PST)
Received: from dfwrgout.huawei.com (dfwrgout.huawei.com [206.16.17.72]) by ietfa.amsl.com (Postfix) with ESMTP id EDD4621F8518 for <dc@ietf.org>; Fri,  6 Jan 2012 14:34:34 -0800 (PST)
Received: from 172.18.9.243 (EHLO dfweml202-edg.china.huawei.com) ([172.18.9.243]) by dfwrg01-dlp.huawei.com (MOS 4.2.3-GA FastPath) with ESMTP id ACK76927; Fri, 06 Jan 2012 17:34:34 -0500 (EST)
Received: from DFWEML404-HUB.china.huawei.com (10.193.5.203) by dfweml202-edg.china.huawei.com (172.18.9.108) with Microsoft SMTP Server (TLS) id 14.1.323.3; Fri, 6 Jan 2012 14:34:02 -0800
Received: from DFWEML505-MBX.china.huawei.com ([10.124.31.100]) by dfweml404-hub.china.huawei.com ([10.193.5.203]) with mapi id 14.01.0323.003; Fri, 6 Jan 2012 14:33:38 -0800
From: Linda Dunbar <linda.dunbar@huawei.com>
To: "dc@ietf.org" <dc@ietf.org>
Thread-Topic: DCREQ Section 4 Outline
Thread-Index: AczMwzqSmKDf1zT6SlWzZwPqD8IVPQ==
Date: Fri, 6 Jan 2012 22:33:36 +0000
Message-ID: <4A95BA014132FF49AE685FAB4B9F17F62A4E6640@dfweml505-mbx>
Accept-Language: en-US, zh-CN
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.192.11.97]
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-CFilter-Loop: Reflected
Cc: Ronald Bonica <rbonica@juniper.net>
Subject: [dc] DCREQ Section 4 Outline
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 06 Jan 2012 22:34:35 -0000

Section 4, "Data Center Characteristics", should at least add the
following sub-sections:
	- Types of data centers: homogeneous content-provider data centers,
multi-tenant data centers (small/medium-sized data centers and mega-sized
data centers), network service providers' data centers, etc.
	Each type has different characteristics. E.g., a homogeneous
content-provider data center might not need to address fragmentation
issues, but may have other issues.

	- A section on how the location of load balancers, firewalls, and
other middleware boxes determines where VMs/hosts in different segments
can be aggregated/exchanged.

	- A section on how applications are instantiated in a data center.
Application servers (e.g. Oracle's WebLogic Server middleware)
instantiate multiple instances of one application and assign them IP
addresses, and VM/server managers place the instances onto a server rack.

	- A section describing how backend data/storage networks are separated
from the front-end service networks.

Linda

From linda.dunbar@huawei.com  Fri Jan  6 14:39:15 2012
Return-Path: <linda.dunbar@huawei.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id AEC4621F8627 for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 14:39:15 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.576
X-Spam-Level: 
X-Spam-Status: No, score=-2.576 tagged_above=-999 required=5 tests=[AWL=0.023,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id cIuTIxch+18u for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 14:39:15 -0800 (PST)
Received: from dfwrgout.huawei.com (dfwrgout.huawei.com [206.16.17.72]) by ietfa.amsl.com (Postfix) with ESMTP id D061121F861F for <dc@ietf.org>; Fri,  6 Jan 2012 14:38:58 -0800 (PST)
Received: from 172.18.9.243 (EHLO dfweml202-edg.china.huawei.com) ([172.18.9.243]) by dfwrg02-dlp.huawei.com (MOS 4.2.3-GA FastPath) with ESMTP id ACD72544; Fri, 06 Jan 2012 17:38:58 -0500 (EST)
Received: from DFWEML404-HUB.china.huawei.com (10.193.5.203) by dfweml202-edg.china.huawei.com (172.18.9.108) with Microsoft SMTP Server (TLS) id 14.1.323.3; Fri, 6 Jan 2012 14:37:58 -0800
Received: from DFWEML505-MBX.china.huawei.com ([10.124.31.100]) by dfweml404-hub.china.huawei.com ([10.193.5.203]) with mapi id 14.01.0323.003; Fri, 6 Jan 2012 14:37:47 -0800
From: Linda Dunbar <linda.dunbar@huawei.com>
To: "dc@ietf.org" <dc@ietf.org>
Thread-Topic: DCREQ Section 5 Outline
Thread-Index: AczMw88oqJdSU2a5TiWfYpubu8yM3g==
Date: Fri, 6 Jan 2012 22:37:46 +0000
Message-ID: <4A95BA014132FF49AE685FAB4B9F17F62A4E6655@dfweml505-mbx>
Accept-Language: en-US, zh-CN
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.192.11.97]
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-CFilter-Loop: Reflected
Cc: Ronald Bonica <rbonica@juniper.net>
Subject: [dc] DCREQ Section 5 Outline
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 06 Jan 2012 22:39:15 -0000

Section 5 of draft-dalela-dc-requirements needs a lot of changes.

	- Many statements in Section 5.1 "Basic Forwarding Problem" are not
accurate. E.g. "the need to massively scale virtualized hosts" may break
the FDB (or FIB) on switches/routers, but it doesn't really break the
L2/L3 technologies traditionally used in data centers. The claim that L3
networks can't support host mobility is not true either: host-based
routing can support mobility.

	- Section 5.1 is really about "basic forwarding for segmented
networks".

	- Section 5.2 (DC inter-connectivity problem) is overloaded and should
be broken into multiple (sub)sections:

		- Problems associated with the various networks used to interconnect
data centers (VPN, straight IP, or straight optical)

		- Multi-data-center service interconnection and Layer 4-7
load-balancing issues: i.e., how a service needs to be closer to its
clients, mapping services to the locations of data centers, etc.

		- Middleware function association in the multi-data-center scenario

	- Section 5.3 "Multi-tenancy Problem" should at least include issues
associated with double VLAN tagging, non-VLAN partitioning (e.g. IEEE
802.1ah's I-SID), etc.

	- A new sub-section in Section 5 on traffic patterns dictated by
policy and by the functions and locations of middleware boxes.

Linda Dunbar


From stbryant@cisco.com  Fri Jan  6 14:45:19 2012
Return-Path: <stbryant@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 4FA7621F869E for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 14:45:19 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -110.658
X-Spam-Level: 
X-Spam-Status: No, score=-110.658 tagged_above=-999 required=5 tests=[AWL=-0.059, BAYES_00=-2.599, RCVD_IN_DNSWL_HI=-8, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 9ljYjDuw5QRQ for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 14:45:18 -0800 (PST)
Received: from ams-iport-1.cisco.com (ams-iport-1.cisco.com [144.254.224.140]) by ietfa.amsl.com (Postfix) with ESMTP id 43A3B21F8694 for <dc@ietf.org>; Fri,  6 Jan 2012 14:45:18 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=stbryant@cisco.com; l=729; q=dns/txt; s=iport; t=1325889918; x=1327099518; h=message-id:date:from:reply-to:mime-version:to:cc:subject: references:in-reply-to:content-transfer-encoding; bh=BgaKkipN94dY0RXX2X++wR1IZbThv+pyantwua9zCDM=; b=gMmXWc5aHyHCL3+ZwFrh+jB+iq6vYc5Tw2ozbhfvo+rCT+drF2n+rG1B t5hPKloVM/ljZ0mPNsxu9bTtyzzDQf/lUFo3G7WEtuDEIRLYv1PJTA2XJ NY8XFgkOpGJJRjpGS8wmDwlaXpmsVTuTq9ZgL88LgQjZ692EDTDYBgxQ+ M=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: Av8EAE55B0+Q/khN/2dsb2JhbABDrEKBBYFyAQEBBBIBAgEiQAEQCyEWDwkDAgECAUUGDQEHAQEVAgefdAGDMA8BmkuMEQSVB5I1
X-IronPort-AV: E=Sophos;i="4.71,470,1320624000"; d="scan'208";a="125683430"
Received: from ams-core-4.cisco.com ([144.254.72.77]) by ams-iport-1.cisco.com with ESMTP; 06 Jan 2012 22:45:17 +0000
Received: from cisco.com (mrwint.cisco.com [64.103.70.36]) by ams-core-4.cisco.com (8.14.3/8.14.3) with ESMTP id q06MjHP1005709; Fri, 6 Jan 2012 22:45:17 GMT
Received: from stbryant-mac2.lan (localhost [127.0.0.1]) by cisco.com (8.14.4+Sun/8.8.8) with ESMTP id q06MjEDL024153; Fri, 6 Jan 2012 22:45:16 GMT
Message-ID: <4F07797A.3090907@cisco.com>
Date: Fri, 06 Jan 2012 22:45:14 +0000
From: Stewart Bryant <stbryant@cisco.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:8.0) Gecko/20111105 Thunderbird/8.0
MIME-Version: 1.0
To: Thomas Narten <narten@us.ibm.com>
References: <13205C286662DE4387D9AF3AC30EF456D74EEEEF99@EMBX01-WF.jnpr.net> <4F07488E.2070103@raszuk.net> <13205C286662DE4387D9AF3AC30EF456D74EEEF153@EMBX01-WF.jnpr.net> <201201062201.q06M0xkO008099@cichlid.raleigh.ibm.com>
In-Reply-To: <201201062201.q06M0xkO008099@cichlid.raleigh.ibm.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: Ronald Bonica <rbonica@juniper.net>, "robert@raszuk.net" <robert@raszuk.net>, "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] DC Work Plan
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
Reply-To: stbryant@cisco.com
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 06 Jan 2012 22:45:19 -0000

Thomas

 > (Note: I'm specifically excluding NVO3 and SDN when I say this,
 > because I think both efforts can easily continue on their own as
 > separate efforts to create WGs -- but outside of those efforts, I'm
 > really struggling to understand what additional work the IETF is
 > being asked to do in the "DC area".)

I think that SDN has wider applicability than DC and thus has a
life (or death) of its own.

NVO3 is a candidate solution to the DC problem, but from the
discussion on the list I am yet to be convinced that we have the
right problem statement to endorse it as the approach for L2,
let alone to determine whether we need an L2 solution, an L3
solution, or a mixed solution.

- Stewart


From narten@us.ibm.com  Fri Jan  6 15:05:54 2012
Return-Path: <narten@us.ibm.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 02BAA1F0C57 for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 15:05:54 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.588
X-Spam-Level: 
X-Spam-Status: No, score=-106.588 tagged_above=-999 required=5 tests=[AWL=0.011, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id NkWVLzkLk7Hq for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 15:05:53 -0800 (PST)
Received: from e4.ny.us.ibm.com (e4.ny.us.ibm.com [32.97.182.144]) by ietfa.amsl.com (Postfix) with ESMTP id 4C0511F0C3F for <dc@ietf.org>; Fri,  6 Jan 2012 15:05:53 -0800 (PST)
Received: from /spool/local by e4.ny.us.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for <dc@ietf.org> from <narten@us.ibm.com>; Fri, 6 Jan 2012 18:05:51 -0500
Received: from d01relay01.pok.ibm.com (9.56.227.233) by e4.ny.us.ibm.com (192.168.1.104) with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted;  Fri, 6 Jan 2012 18:05:47 -0500
Received: from d01av04.pok.ibm.com (d01av04.pok.ibm.com [9.56.224.64]) by d01relay01.pok.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id q06N5lNb281122 for <dc@ietf.org>; Fri, 6 Jan 2012 18:05:47 -0500
Received: from d01av04.pok.ibm.com (loopback [127.0.0.1]) by d01av04.pok.ibm.com (8.14.4/8.13.1/NCO v10.0 AVout) with ESMTP id q06N5kJn017961 for <dc@ietf.org>; Fri, 6 Jan 2012 18:05:47 -0500
Received: from cichlid.raleigh.ibm.com (sig-9-65-211-102.mts.ibm.com [9.65.211.102]) by d01av04.pok.ibm.com (8.14.4/8.13.1/NCO v10.0 AVin) with ESMTP id q06N5jiF017510 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Fri, 6 Jan 2012 18:05:46 -0500
Received: from cichlid.raleigh.ibm.com (localhost [127.0.0.1]) by cichlid.raleigh.ibm.com (8.14.5/8.12.5) with ESMTP id q06N5i8o008460; Fri, 6 Jan 2012 18:05:44 -0500
Message-Id: <201201062305.q06N5i8o008460@cichlid.raleigh.ibm.com>
To: stbryant@cisco.com
In-reply-to: <4F07797A.3090907@cisco.com>
References: <13205C286662DE4387D9AF3AC30EF456D74EEEEF99@EMBX01-WF.jnpr.net> <4F07488E.2070103@raszuk.net> <13205C286662DE4387D9AF3AC30EF456D74EEEF153@EMBX01-WF.jnpr.net> <201201062201.q06M0xkO008099@cichlid.raleigh.ibm.com> <4F07797A.3090907@cisco.com>
Comments: In-reply-to Stewart Bryant <stbryant@cisco.com> message dated "Fri, 06 Jan 2012 22:45:14 +0000."
Date: Fri, 06 Jan 2012 18:05:43 -0500
From: Thomas Narten <narten@us.ibm.com>
x-cbid: 12010623-3534-0000-0000-0000044C9C76
Cc: Ronald Bonica <rbonica@juniper.net>, "robert@raszuk.net" <robert@raszuk.net>, "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] DC Work Plan
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 06 Jan 2012 23:05:54 -0000

Hi Stewart.

> I think that SDN has wider applicability than DC and thus has a
> life (or death) of its own.

It's not just that it has wider applicability, it's that that effort
seems somewhat self-contained and I can see how one might carve out an
effort in this space.

> NVO3 is a candidate solution to the DC problem,

This is probably just quick typing, but I think it is worth saying
that we are not helping ourselves by thinking there is *one* DC
problem.

There are a number of possible problems. Some relate to each other,
some more so, some less so. But I think it is not helpful to try and
view this area as having *one* problem that needs sorting out.

I think it's much more helpful (and IETF tradition!) to try and
identify individual problems and where possible, solve problems
individually (i.e., divide and conquer). Of course, there can be
interdependencies between problem and solution spaces, but we are
likely going to flail if we try to view this as one big problem
needing one overall solution approach. Or that we can't work on sub
problems unless we understand the entire problem space.

> but from the discussion on the list I am yet to be convinced that we
> have either the right problem statement to endorse it as the
> approach for L2, let alone to determine whether we need a L2
> solution, a L3 solution or a mixed solution.

NVO3 is aimed at a subset of the DC "problem area". The bullet point
summary is that it's a different way of providing multi-tenancy in the
DC. It is not the *only* way. But I think the Taipei session showed
there was significant support for this approach.

Thomas


From robert@raszuk.net  Fri Jan  6 15:49:12 2012
Return-Path: <robert@raszuk.net>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 214D821F8818 for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 15:49:12 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.239
X-Spam-Level: 
X-Spam-Status: No, score=-2.239 tagged_above=-999 required=5 tests=[AWL=-0.240, BAYES_00=-2.599, J_CHICKENPOX_14=0.6]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id UISAmumEXkzL for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 15:49:11 -0800 (PST)
Received: from mail1310.opentransfer.com (mail1310.opentransfer.com [76.162.254.103]) by ietfa.amsl.com (Postfix) with ESMTP id 6047821F8816 for <dc@ietf.org>; Fri,  6 Jan 2012 15:49:11 -0800 (PST)
Received: (qmail 5661 invoked by uid 399); 6 Jan 2012 23:49:10 -0000
Received: from unknown (HELO ?192.168.1.57?) (83.9.129.196) by mail1310.opentransfer.com with ESMTP; 6 Jan 2012 23:49:10 -0000
X-Originating-IP: 83.9.129.196
Message-ID: <4F078876.3030804@raszuk.net>
Date: Sat, 07 Jan 2012 00:49:10 +0100
From: Robert Raszuk <robert@raszuk.net>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:9.0) Gecko/20111222 Thunderbird/9.0.1
MIME-Version: 1.0
To: Thomas Narten <narten@us.ibm.com>
References: <13205C286662DE4387D9AF3AC30EF456D74EEEEF99@EMBX01-WF.jnpr.net> <4F07488E.2070103@raszuk.net> <13205C286662DE4387D9AF3AC30EF456D74EEEF153@EMBX01-WF.jnpr.net> <201201062201.q06M0xkO008099@cichlid.raleigh.ibm.com> <4F07797A.3090907@cisco.com> <201201062305.q06N5i8o008460@cichlid.raleigh.ibm.com>
In-Reply-To: <201201062305.q06N5i8o008460@cichlid.raleigh.ibm.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: Ronald Bonica <rbonica@juniper.net>, "dc@ietf.org" <dc@ietf.org>, stbryant@cisco.com
Subject: Re: [dc] DC Work Plan
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
Reply-To: robert@raszuk.net
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 06 Jan 2012 23:49:12 -0000

Thomas,

 > I think it's much more helpful (and IETF tradition!) to try and
 > identify individual problems and where possible, solve problems
 > individually (i.e., divide and conquer).

My personal observation leads me to believe that in the IETF DC effort 
we are facing three main issues:

#1 - Today a DC consists of hosts/VMs, an orchestration part, and a 
network part. The IETF, however, has traditionally not been about 
defining how to run operating systems or how to manage them - it has 
been about engineering network protocols. But let's notice that a 
number of the solutions already proposed do enter the space of managing 
hypervisors or host kernel space. It is highly unclear to me that, no 
matter what the IETF comes up with, it will be accepted by the 
industry - and that is leaving aside the architecture for running 
appliances (firewalls/load balancers/etc.) altogether.

#2 - A second thought is that perhaps some problems could be (or 
already are) solved in existing working groups. Consider NVO3, L2VPN, 
L3VPN, and LISP as examples of areas, especially for inter-DC 
communication, where this work could be done. Even L2 over L3 already 
seems to be happening in the mentioned WGs.

#3 - There is also another spin to this. The solutions proposed should 
be compatible and in symbiosis with overall industry trends. Those 
trends are happening outside of the IETF, and so far I see not much 
interaction, but rather a bit of "not invented here" syndrome.

Best regards,
R.


> Hi Stewart.
>
>> I think that SDN has wider applicability than DC and thus has a
>> life (or death) of its own.
>
> It's not just that it has wider applicability, it's that that effort
> seems somewhat self-contained and I can see how one might carve out an
> effort in this space.
>
>> NVO3 is a candidate solution to the DC problem,
>
> This is probably just quick typing, but I think it is worth saying
> that we are not helping ourselves by thinking there is *one* DC
> problem.
>
> There are a number of possible problems. Some relate to each other,
> some more so, some less so. But I think it is not helpful to try and
> view this area as having *one* problem that needs sorting out.
>
> I think it's much more helpful (and IETF tradition!) to try and
> identify individual problems and where possible, solve problems
> individually (i.e., divide and conquer). Of course, there can be
> interdependencies between problem and solution spaces, but we are
> likely going to flail if we try to view this as one big problem
> needing one overall solution approach. Or that we can't work on sub
> problems unless we understand the entire problem space.
>
>> but from the discussion on the list I am yet to be convinced that we
>> have either the right problem statement to endorse it as the
>> approach for L2, let alone to determine whether we need a L2
>> solution, a L3 solution or a mixed solution.
>
> NVO3 is aimed at a subset of the DC "problem area". The bullet point
> summary is that it's a different way of providing multi-tenancy in the
> DC. It is not the *only* way. But I think the Taipei session showed
> there was significant support for this approach.
>
> Thomas
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc
>
>


From aldrin.isaac@gmail.com  Fri Jan  6 17:23:53 2012
Return-Path: <aldrin.isaac@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id DC6A321F855F for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 17:23:53 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.233
X-Spam-Level: 
X-Spam-Status: No, score=-3.233 tagged_above=-999 required=5 tests=[AWL=0.365,  BAYES_00=-2.599, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id Y8J413pKkuhH for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 17:23:52 -0800 (PST)
Received: from mail-qw0-f44.google.com (mail-qw0-f44.google.com [209.85.216.44]) by ietfa.amsl.com (Postfix) with ESMTP id 7A29B21F8556 for <dc@ietf.org>; Fri,  6 Jan 2012 17:23:52 -0800 (PST)
Received: by qadb15 with SMTP id b15so7829qad.10 for <dc@ietf.org>; Fri, 06 Jan 2012 17:23:52 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=subject:mime-version:content-type:from:in-reply-to:date:cc :message-id:references:to:x-mailer; bh=U4qiqnQNAddDChmBP530hfkQPUi9pK7hjDRwrCCE1Z0=; b=Ydz7BqEqcoQfadrUB20ExCTzdDvgwhhUpTSMOX8T1j5blfLu7yTuS3Td0NReDensA7 mhypL8S9NaSSOYoJROI7OgsLkogCvsSixA76AJ9su3sqUGbb8ZjqbOB01pYgNM21MFCo EeYQ+GoDDSH7mMwiQpNBPIYerJIvIwYlYX0Pg=
Received: by 10.224.196.66 with SMTP id ef2mr10313989qab.94.1325899430924; Fri, 06 Jan 2012 17:23:50 -0800 (PST)
Received: from mymac.home (ool-44c1c730.dyn.optonline.net. [68.193.199.48]) by mx.google.com with ESMTPS id z13sm12648885qap.13.2012.01.06.17.23.48 (version=TLSv1/SSLv3 cipher=OTHER); Fri, 06 Jan 2012 17:23:49 -0800 (PST)
Mime-Version: 1.0 (Apple Message framework v1251.1)
Content-Type: multipart/alternative; boundary="Apple-Mail=_EF76F379-C2A3-460F-AA6F-3993A8C01CD7"
From: Aldrin Isaac <aldrin.isaac@gmail.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B25D23@XMB-BGL-416.cisco.com>
Date: Fri, 6 Jan 2012 20:23:48 -0500
Message-Id: <1BB0814E-F569-43E7-AC6C-315508E29C60@gmail.com>
References: <AF48CEB4-18A6-45CE-891B-ACFE599C8FB4@kumari.net> <27D5DF12-DA16-42C4-A33B-84EBFFFC3A45@gmail.com> <618BE8B40039924EB9AED233D4A09C5102B25D23@XMB-BGL-416.cisco.com>
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
X-Mailer: Apple Mail (2.1251.1)
Cc: Pedro Marques <pedro.r.marques@gmail.com>, david.black@emc.com, dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 07 Jan 2012 01:23:54 -0000

--Apple-Mail=_EF76F379-C2A3-460F-AA6F-3993A8C01CD7
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=windows-1252

Hi Ashish,

The first three sentences are by Warren Kumari -- I seem to have lost
the leading ">" quote character in my response.

My questions to the folks on this list are:

Is the issue that needs to be resolved for server-based overlays only
regarding the number of ARPs that need to be managed by the gateway?
Could this issue not be resolved operationally by say having more
gateways?

What other issues exist and need to be resolved for server-based
overlays, besides deciding on a tunneling encapsulation?  Is the IETF
expected to standardize control protocols for server-based overlays?

There is clearly a need for server-oriented virtual networks as well as
a need for scalable network-oriented Ethernet virtual networks for DC.
Shouldn't these be separate conversations?

Cheers -- aldrin


On Jan 6, 2012, at 12:03 AM, Ashish Dalela (adalela) wrote:

> Aldrin,
>
> I like the way you say - "the only thing the network needs to know" -
> :-) The same could be said of the hypervisor as well :-)
>
> The problem is that once you start pushing the intelligence into the
> server stack, you will have to keep pushing more and more - e.g.
> firewalls, multicast, broadcast, packet inspection, flow mgmt, etc.
> You will find that you are re-inventing on the overlay everything
> that exists on the underlay. The challenges are stacked up against
> the overlay, not the underlay.
>
> You will also find that you need to communicate between the overlay
> and the underlay to get the desired bandwidth, QoS, flow mgmt,
> multipathing, tree optimization, and that is never going to be easy.
> You will also find that hardware-accelerated networks perform better,
> deliver the requisite high availability, and consume less energy, as
> compared to doing the same thing in the hypervisor.
>
> Having said that, I recognize there are two models of virtualization
> that we know of. The "overlay" model is like a network hypervisor in
> which individual customers are like VMs. The "multiplexed" model is
> like a multi-user OS. Both the multi-user OS and the multi-VM
> hypervisor are isolated equally well. But they have different
> use-cases.
>
> The main use-case for the VM hypervisor model is to multiplex
> multiple different OS/tools/application "environments" into the same
> HW, giving each VM the ability to shut itself down, reboot, or do
> whatever it wants to do. In the case of a network, that "environment"
> is a set of protocols. So, if you have one tenant running OSPF,
> another one running IS-IS, and yet another running BGP, and they want
> to keep playing with their network configuration, it makes sense to
> run these in the overlay or VM mode.
>
> If everyone has the same environment (i.e. runs the same protocols
> and expects common controls), it makes more sense to run them in the
> multi-user (multiplexed) mode rather than the multi-VM (overlay)
> mode. The multi-VM model delegates administration to the VM owner.
> The multi-user model owns the administration while letting a "tenant"
> use the network. This is a conscious choice - is the cloud going to
> open up the configuration of the network like it opens up the
> administration of VMs? From what I know, the answer is "no".
>
> These architectural models exist outside the networking domain, and I
> would reach out into another domain to borrow the intuitions,
> use-cases and challenges. Architecturally, you will find the same
> challenges in managing the overlay model as exist in the multi-VM
> model. If there are no benefits to be gained from that, you are
> better off in the multiplexed mode.
>
> But then again - beauty lies in the eyes of the beholder.
>
> Thanks, Ashish
>
>
>
> From: Aldrin Isaac [mailto:aldrin.isaac@gmail.com]
> Sent: Friday, January 06, 2012 9:07 AM
> Cc: Ashish Dalela (adalela); Pedro Marques; david.black@emc.com;
> dc@ietf.org
> Subject: Re: [dc] [armd] IP over IP solution for data center
> interconnect
>
>
>
> The only thing that the network needs to know is the routes to the
> hypervisors / physical machines -- this is a solved problem.
> The VM addresses and routes are only visible to the [gateways,
> hypervisors with VMs in that overlay, other VMs in the same overlay,
> mapping server].
>
> For a really old overview:
> http://tools.ietf.org/html/draft-wkumari-dcops-l3-vmmobility-00
>
> Is the issue that needs to be resolved for overlays only regarding
> the number of ARPs that need to be managed by the gateway?  Could
> this issue not be resolved operationally by say having more gateways?
>
> What other issues exist and need to be resolved for overlays, besides
> deciding on a tunneling encapsulation?  Is the IETF expected to
> standardize control protocols for overlays?
>
> There is clearly a need for server-based virtual networks as well as
> a need for scalable network-based Ethernet virtual networks for DC.
> Shouldn't these be separate conversations?


Roman', serif; "><span style=3D"font-size: 11pt; font-family: Calibri, =
sans-serif; color: rgb(31, 73, 125); =
"><o:p>&nbsp;</o:p></span></div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Calibri, sans-serif; color: rgb(31, 73, 125); =
">Having said that, I recognize there are two models of virtualization =
that we know of. The =93overlay=94 model is like a network hypervisor in =
which individual customers are like VMs. The =93multiplexed=94 model is =
like a multi-user OS. Both multi-user OS and the multi-VM hypervisor are =
isolated equally well. But they have different =
use-cases.<o:p></o:p></span></div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Calibri, sans-serif; color: rgb(31, 73, 125); =
"><o:p>&nbsp;</o:p></span></div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Calibri, sans-serif; color: rgb(31, 73, 125); ">The =
main use-case for the VM hypervisor model is to multiplex multiple =
different OS/tools/application =93environments=94 into the same HW, =
letting each VM the ability to shut itself down, reboot or whatever it =
wants to do. In case of network, that =93environment=94 is a set of =
protocols. So, if you have one tenant running OSPF another one running =
IS-IS and yet another running BGP, and they want to keep playing with =
their network configuration, it makes sense to run these in the overlay =
or the VM mode.<o:p></o:p></span></div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Calibri, sans-serif; color: rgb(31, 73, 125); =
"><o:p>&nbsp;</o:p></span></div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Calibri, sans-serif; color: rgb(31, 73, 125); ">If =
everyone has the same environment (i.e. runs the same protocols, and =
expects common controls), it makes more sense to run them in the =
multi-user (multiplexed) mode rather than multi-VM (overlay) mode. The =
multi-VM model delegates administration to the VM owner. The multi-user =
model owns the administration while letting a =93tenant=94 use the =
network. This is a conscious choice =96 is cloud going to open up the =
configuration of network like they open up the administration of VM? =
=46rom what I know, the answer is =93no=94.<o:p></o:p></span></div><div =
style=3D"margin-top: 0in; margin-right: 0in; margin-left: 0in; =
margin-bottom: 0.0001pt; font-size: 12pt; font-family: 'Times New =
Roman', serif; "><span style=3D"font-size: 11pt; font-family: Calibri, =
sans-serif; color: rgb(31, 73, 125); =
"><o:p>&nbsp;</o:p></span></div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Calibri, sans-serif; color: rgb(31, 73, 125); ">These =
architectural models exist outside the networking domain, and I would =
reach out into another domain to borrow the intuitions, use-cases and =
challenges. Architecturally, you will find the same challenges in =
managing the overlay model as exist in the multi-VM model. If there are =
no benefits to be gained from that, you are better off in the =
multiplexed mode.<o:p></o:p></span></div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Calibri, sans-serif; color: rgb(31, 73, 125); =
"><o:p>&nbsp;</o:p></span></div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Calibri, sans-serif; color: rgb(31, 73, 125); ">But =
then again =96 beauty lies in the eyes of the =
beholder.<o:p></o:p></span></div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Calibri, sans-serif; color: rgb(31, 73, 125); =
"><o:p>&nbsp;</o:p></span></div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Calibri, sans-serif; color: rgb(31, 73, 125); =
">Thanks, Ashish<o:p></o:p></span></div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Calibri, sans-serif; color: rgb(31, 73, 125); =
"><o:p>&nbsp;</o:p></span></div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Calibri, sans-serif; color: rgb(31, 73, 125); =
"><o:p>&nbsp;</o:p></span></div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; "><span style=3D"font-size: =
11pt; font-family: Calibri, sans-serif; color: rgb(31, 73, 125); =
"><o:p>&nbsp;</o:p></span></div><div><div style=3D"border-right-style: =
none; border-bottom-style: none; border-left-style: none; border-width: =
initial; border-color: initial; border-top-style: solid; =
border-top-color: rgb(181, 196, 223); border-top-width: 1pt; =
padding-top: 3pt; padding-right: 0in; padding-bottom: 0in; padding-left: =
0in; "><div style=3D"margin-top: 0in; margin-right: 0in; margin-left: =
0in; margin-bottom: 0.0001pt; font-size: 12pt; font-family: 'Times New =
Roman', serif; "><b><span style=3D"font-size: 10pt; font-family: Tahoma, =
sans-serif; ">From:</span></b><span style=3D"font-size: 10pt; =
font-family: Tahoma, sans-serif; "><span =
class=3D"Apple-converted-space">&nbsp;</span>Aldrin Isaac =
[mailto:aldrin.isaac@gmail.com]<span =
class=3D"Apple-converted-space">&nbsp;</span><br><b>Sent:</b><span =
class=3D"Apple-converted-space">&nbsp;</span>Friday, January 06, 2012 =
9:07 AM<br><b>Cc:</b><span =
class=3D"Apple-converted-space">&nbsp;</span>Ashish Dalela (adalela); =
Pedro Marques; <a =
href=3D"mailto:david.black@emc.com">david.black@emc.com</a>; <a =
href=3D"mailto:dc@ietf.org">dc@ietf.org</a><br><b>Subject:</b><span =
class=3D"Apple-converted-space">&nbsp;</span>Re: [dc] [armd] IP over IP =
solution for data center =
interconnect<o:p></o:p></span></div></div></div><div style=3D"margin-top: =
0in; margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; =
font-size: 12pt; font-family: 'Times New Roman', serif; =
"><o:p>&nbsp;</o:p></div><div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; =
"><br><br><o:p></o:p></div><div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; ">The only thing that the =
network needs to know is the routes to the hypervisors / physical =
machines -- this is a solved problem.<br>The VM addresses and routes are =
only visible to the [gateways, hypervisors with VMs in that overlay, =
other VMs in the same overlay, mapping server].<br><br>For a really old =
overview:<span class=3D"Apple-converted-space">&nbsp;</span><a =
href=3D"http://tools.ietf.org/html/draft-wkumari-dcops-l3-vmmobility-00" =
style=3D"color: blue; text-decoration: underline; =
">http://tools.ietf.org/html/draft-wkumari-dcops-l3-vmmobility-00</a><o:p>=
</o:p></div></div></div><div style=3D"margin-top: 0in; margin-right: =
0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: 12pt; =
font-family: 'Times New Roman', serif; =
"><o:p>&nbsp;</o:p></div><div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; ">Is the issue that needs =
to be resolved for overlays only regarding the number of ARPs that need =
to be managed by the gateway? &nbsp;Could this issue not be resolved =
operationally by say having more gateways? =
&nbsp;<o:p></o:p></div></div><div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; =
"><o:p>&nbsp;</o:p></div></div><div><div style=3D"margin-top: 0in; =
margin-right: 0in; margin-left: 0in; margin-bottom: 0.0001pt; font-size: =
12pt; font-family: 'Times New Roman', serif; ">What other issues exist =
and need to be resolved for overlays, besides deciding on a tunneling =
encapsulation? &nbsp;Is the IETF expected to standardize control =
protocols for overlays?<o:p></o:p></div></div><div><div =
style=3D"margin-top: 0in; margin-right: 0in; margin-left: 0in; =
margin-bottom: 0.0001pt; font-size: 12pt; font-family: 'Times New =
Roman', serif; "><o:p>&nbsp;</o:p></div></div><div><div =
style=3D"margin-top: 0in; margin-right: 0in; margin-left: 0in; =
margin-bottom: 0.0001pt; font-size: 12pt; font-family: 'Times New =
Roman', serif; ">There is clearly a need for server-based virtual =
networks as well as a need for scalable network-based Ethernet virtual =
networks for DC. &nbsp;Shouldn't these be separate =
conversations?</div></div></div></div></blockquote></div><br></div></body>=
</html>=

--Apple-Mail=_EF76F379-C2A3-460F-AA6F-3993A8C01CD7--

From xuxiaohu@huawei.com  Fri Jan  6 17:29:39 2012
Return-Path: <xuxiaohu@huawei.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 18E6121F855F for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 17:29:39 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.426
X-Spam-Level: 
X-Spam-Status: No, score=-2.426 tagged_above=-999 required=5 tests=[AWL=-0.369, BAYES_00=-2.599, CN_BODY_35=0.339, MIME_BASE64_TEXT=1.753, MIME_CHARSET_FARAWAY=2.45, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id y7G10WM4AWeg for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 17:29:38 -0800 (PST)
Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [119.145.14.64]) by ietfa.amsl.com (Postfix) with ESMTP id 4814921F8558 for <dc@ietf.org>; Fri,  6 Jan 2012 17:29:38 -0800 (PST)
Received: from huawei.com (szxga05-in [172.24.2.49]) by szxga05-in.huawei.com (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LXE003WHMT5KT@szxga05-in.huawei.com> for dc@ietf.org; Sat, 07 Jan 2012 09:29:29 +0800 (CST)
Received: from szxrg01-dlp.huawei.com ([172.24.2.119]) by szxga05-in.huawei.com (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LXE006U2MT5WH@szxga05-in.huawei.com> for dc@ietf.org; Sat, 07 Jan 2012 09:29:29 +0800 (CST)
Received: from szxeml208-edg.china.huawei.com ([172.24.2.119]) by szxrg01-dlp.huawei.com (MOS 4.1.9-GA)	with ESMTP id AGG75742; Sat, 07 Jan 2012 09:29:28 +0800
Received: from SZXEML419-HUB.china.huawei.com (10.82.67.158) by szxeml208-edg.china.huawei.com (172.24.2.60) with Microsoft SMTP Server (TLS) id 14.1.323.3; Sat, 07 Jan 2012 09:29:24 +0800
Received: from SZXEML525-MBS.china.huawei.com ([169.254.8.55]) by szxeml419-hub.china.huawei.com ([10.82.67.158]) with mapi id 14.01.0323.003; Sat, 07 Jan 2012 09:29:18 +0800
Date: Sat, 07 Jan 2012 01:29:18 +0000
From: Xuxiaohu <xuxiaohu@huawei.com>
In-reply-to: <201201062201.q06M0xkO008099@cichlid.raleigh.ibm.com>
X-Originating-IP: [10.108.4.80]
To: Thomas Narten <narten@us.ibm.com>, Ronald Bonica <rbonica@juniper.net>
Message-id: <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE765D04@szxeml525-mbs.china.huawei.com>
MIME-version: 1.0
Content-type: text/plain; charset=gb2312
Content-language: zh-CN
Content-transfer-encoding: base64
Accept-Language: zh-CN, en-US
Thread-topic: [dc] DC Work Plan
Thread-index: AczMpBVlzok2urS5RMGeXcLx/H8AYf//gRwAgAAN8wCAACACgP//QT2w
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
X-CFilter-Loop: Reflected
References: <13205C286662DE4387D9AF3AC30EF456D74EEEEF99@EMBX01-WF.jnpr.net> <4F07488E.2070103@raszuk.net> <13205C286662DE4387D9AF3AC30EF456D74EEEF153@EMBX01-WF.jnpr.net> <201201062201.q06M0xkO008099@cichlid.raleigh.ibm.com>
Cc: "robert@raszuk.net" <robert@raszuk.net>, "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] DC Work Plan
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 07 Jan 2012 01:29:39 -0000


> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Thomas
> Narten
> Sent: January 7, 2012 6:01
> To: Ronald Bonica
> Cc: dc@ietf.org; robert@raszuk.net
> Subject: Re: [dc] DC Work Plan
> 
> Hi Ron.
> 
> > Considering how unstructured our discussion has been to date, a
> >  little scaffolding might be helpful.
> 
> I agree.
> 
> But let me posit the following. Normally in the IETF, we have meetings
> to discuss concrete work. I.e., you have to have existing documents to
> justify the face-to-face meeting. And there has to be agreement that
> there is enough "meat" in those documents, and that having a
> face-to-face meeting will move the discussion forward. Think about the
> requirements before a BOF is approved, for example.
> 
> Right now, we just don't have such a document (or documents). We have
> a lot of documents, but no real agreement, even on general
> areas. (Note: I'm specifically excluding NVO3 and SDN when I say this,
> because I think both efforts can easily continue on their own as
> separate efforts to create WGs -- but outside of those efforts, I'm
> really struggling to understand what additional work the IETF is being
> asked to do in the "DC area".)
> 
> There is real work to iterating on the problem areas that just hasn't
> been done yet.
> 
> Can such work happen before the interim? Sure. Maybe. But there are no
> guarantees, and based on the discussions in the last month, there is
> room for doubt.
> 
> The proposed dates previously mentioned were Feb 22-23. That is a mere
> 4 1/2 weeks from the Paris IETF. Assuming we had a successful interim
> meeting, that wouldn't allow much time to revise documents and prep
> for the Paris meeting.
> 
> So, given the work plan, and doubts on whether we can deliver on it
> prior to an interim meeting, the timing of the proposed meeting
> relative to the next IETF, do we really need to have an interim
> meeting at all?
> 
> Wouldn't it be good enough to attempt to execute on the plan, with an
> expectation that there will be some sort of session in Paris (if
> justified)?
> 
> What do others think?

I agree with Thomas's points as above. Let's continue the discussion
about the problems, and further evaluate whether the already existing
VPN solutions could address them fully or partially, on the mailing
list according to the "Elevator Pitch" principle, if we could get
enough time slots for face-to-face talk at the forthcoming Paris IETF
meeting.

Best regards,
Xiaohu

> Thomas
> 
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc

From aldrin.isaac@gmail.com  Fri Jan  6 17:35:38 2012
Return-Path: <aldrin.isaac@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id B176021F85BD for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 17:35:38 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.967
X-Spam-Level: 
X-Spam-Status: No, score=-2.967 tagged_above=-999 required=5 tests=[AWL=0.032,  BAYES_00=-2.599, J_CHICKENPOX_13=0.6, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id KFJnf9L8whvO for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 17:35:38 -0800 (PST)
Received: from mail-qy0-f172.google.com (mail-qy0-f172.google.com [209.85.216.172]) by ietfa.amsl.com (Postfix) with ESMTP id 2327021F85B7 for <dc@ietf.org>; Fri,  6 Jan 2012 17:35:38 -0800 (PST)
Received: by qcsf15 with SMTP id f15so1372356qcs.31 for <dc@ietf.org>; Fri, 06 Jan 2012 17:35:36 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=subject:mime-version:content-type:from:in-reply-to:date:cc :content-transfer-encoding:message-id:references:to:x-mailer; bh=G8kJafoO/5pT1HTAAi/1+9fQse4Tg2r6iHJTwvjresw=; b=YrgJMumWTmMnARVh7CHkVFjtz4KdFv3Acfjes8SaPnUbG5KVXNIpf5H4PGgJ+N8GLN C9acMfZa8U5UQ+Ya2gkRAQwhc27DvqpaSbMEL82BQ2ekTfI0HnXasyzhIMZXXxzFHRv8 Jhh+yXpN5svzeL1ytD3v1YeGIxsgnfljBIIEc=
Received: by 10.224.192.10 with SMTP id do10mr10521023qab.50.1325900136597; Fri, 06 Jan 2012 17:35:36 -0800 (PST)
Received: from mymac.home (ool-44c1c730.dyn.optonline.net. [68.193.199.48]) by mx.google.com with ESMTPS id el3sm62034151qab.8.2012.01.06.17.35.35 (version=TLSv1/SSLv3 cipher=OTHER); Fri, 06 Jan 2012 17:35:35 -0800 (PST)
Mime-Version: 1.0 (Apple Message framework v1251.1)
Content-Type: text/plain; charset=us-ascii
From: Aldrin Isaac <aldrin.isaac@gmail.com>
In-Reply-To: <82D68180-B0A1-48B4-B1A5-5115D55AE2BD@asgaard.org>
Date: Fri, 6 Jan 2012 20:35:34 -0500
Content-Transfer-Encoding: quoted-printable
Message-Id: <8994E773-DD02-4935-9EA2-ED63314CB481@gmail.com>
References: <13205C286662DE4387D9AF3AC30EF456D74E7BD952@EMBX01-WF.jnpr.net> <6665BC1FEA04AB47B1F75FA641C43BC0926D106C@FHDP1LUMXC7V41.us.one.verizon.com> <201112211830.pBLIUiZA017188@cichlid.raleigh.ibm.com> <13205C286662DE4387D9AF3AC30EF456D74E91DBFD@EMBX01-WF.jnpr.net> <618BE8B40039924EB9AED233D4A09C5102A4881E@XMB-BGL-416.cisco.com> <4EF7B019.3030202@riw.us> <201112281700.pBSH0kB2011575@cichlid.raleigh.ibm.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE76387F@szxeml525-mbs.china.huawei.com> <229A8E99-6EB2-49E5-B530-BA0F6C7C40AC@asgaard.org> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE7639B3@szxeml525-mbs.china.huawei.com> <40F3FB9C-CBCB-41ED-A1E7-FB99DB3A928D@asgaard.org> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE764022@szxeml525-mbs.china.huawei.com> <82D68180-B0A1-48B4-B1A5-5115D55AE2BD@asgaard.org>
To: Christopher LILJENSTOLPE <cdl@asgaard.org>
X-Mailer: Apple Mail (2.1251.1)
Cc: Thomas Narten <narten@us.ibm.com>, Russ White <russw@riw.us>, Xuxiaohu <xuxiaohu@huawei.com>, "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] Elevator Pitch
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 07 Jan 2012 01:35:38 -0000

On Jan 6, 2012, at 12:20 PM, Christopher LILJENSTOLPE wrote:

> Greetings - in-line,
> 
> On 03Jan2012, at 19.11, Xuxiaohu wrote:
> 
>> 
>> Hi Chris,
>> 
>> I have no doubt about the possibility you mentioned above. However,
>> once you consider optimizing the forwarding path of inter-tenant
>> traffic within the scope of L2VPN or L3VPN solutions, while taking
>> address space overlap, firewall policy issues, etc. into account,
>> you will find it is a much more complex job. Whether or not that
>> optimization is worthwhile depends heavily on whether the volume of
>> inter-tenant traffic is significant.
> 
> This is one that, if we miss, will come back and bite us.  If you
> have no way of getting traffic between "tenants" in a data centre, I
> have NO idea how you will deal with shared services (such as storage,
> backup, transit, etc.) as well as the classical inter-tenant problem.
> If the proposal is to just send it up the tree (as is done today),
> you will continue to have the same issues as we do now (that is
> driving all this work).  If you doubt this, ask yourself why there is
> a drive for greater cross-sectional bandwidth in the data center.  If
> you believe that the classical tree solution will work, then we are
> all chasing our tail here.
> 
> 	Chris


100% agree.  Solution MUST have an efficient way to reach shared
services, extranets, portals, inter-tenant transit, etc.  Solution is
a non-starter without it.

From xuxiaohu@huawei.com  Fri Jan  6 18:22:17 2012
Return-Path: <xuxiaohu@huawei.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id E7C2D21F85D2 for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 18:22:17 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -4.222
X-Spam-Level: 
X-Spam-Status: No, score=-4.222 tagged_above=-999 required=5 tests=[AWL=1.462,  BAYES_00=-2.599, J_CHICKENPOX_13=0.6, RCVD_IN_DNSWL_MED=-4, SARE_MILLIONSOF=0.315]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id J6mF7B+tVFQi for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 18:22:16 -0800 (PST)
Received: from szxga03-in.huawei.com (szxga03-in.huawei.com [119.145.14.66]) by ietfa.amsl.com (Postfix) with ESMTP id 8EFBE21F85CC for <dc@ietf.org>; Fri,  6 Jan 2012 18:22:16 -0800 (PST)
Received: from huawei.com (szxga03-in [172.24.2.9]) by szxga03-in.huawei.com (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LXE00G1AP8S1N@szxga03-in.huawei.com> for dc@ietf.org; Sat, 07 Jan 2012 10:22:04 +0800 (CST)
Received: from szxrg01-dlp.huawei.com ([172.24.2.119]) by szxga03-in.huawei.com (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LXE00IPVP8RXF@szxga03-in.huawei.com> for dc@ietf.org; Sat, 07 Jan 2012 10:22:04 +0800 (CST)
Received: from szxeml201-edg.china.huawei.com ([172.24.2.119]) by szxrg01-dlp.huawei.com (MOS 4.1.9-GA)	with ESMTP id AGG77981; Sat, 07 Jan 2012 10:22:01 +0800
Received: from SZXEML419-HUB.china.huawei.com (10.82.67.158) by szxeml201-edg.china.huawei.com (172.24.2.39) with Microsoft SMTP Server (TLS) id 14.1.323.3; Sat, 07 Jan 2012 10:21:51 +0800
Received: from SZXEML525-MBS.china.huawei.com ([169.254.8.55]) by szxeml419-hub.china.huawei.com ([10.82.67.158]) with mapi id 14.01.0323.003; Sat, 07 Jan 2012 10:21:55 +0800
Date: Sat, 07 Jan 2012 02:21:54 +0000
From: Xuxiaohu <xuxiaohu@huawei.com>
In-reply-to: <82D68180-B0A1-48B4-B1A5-5115D55AE2BD@asgaard.org>
X-Originating-IP: [10.108.4.80]
To: Christopher LILJENSTOLPE <cdl@asgaard.org>
Message-id: <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE765D2D@szxeml525-mbs.china.huawei.com>
MIME-version: 1.0
Content-type: text/plain; charset=utf-8
Content-language: zh-CN
Content-transfer-encoding: base64
Accept-Language: zh-CN, en-US
Thread-topic: [dc] Elevator Pitch
Thread-index: AQHMw1wuBgE5G7BjGEyEvGVWBnVRbZXw96YAgAF/L4CAABiKgIABPS+QgAacgwCAAStPwIADjbAAgAEZOMA=
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
X-CFilter-Loop: Reflected
References: <13205C286662DE4387D9AF3AC30EF456D74E7BD952@EMBX01-WF.jnpr.net> <6665BC1FEA04AB47B1F75FA641C43BC0926D106C@FHDP1LUMXC7V41.us.one.verizon.com> <201112211830.pBLIUiZA017188@cichlid.raleigh.ibm.com> <13205C286662DE4387D9AF3AC30EF456D74E91DBFD@EMBX01-WF.jnpr.net> <618BE8B40039924EB9AED233D4A09C5102A4881E@XMB-BGL-416.cisco.com> <4EF7B019.3030202@riw.us> <201112281700.pBSH0kB2011575@cichlid.raleigh.ibm.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE76387F@szxeml525-mbs.china.huawei.com> <229A8E99-6EB2-49E5-B530-BA0F6C7C40AC@asgaard.org> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE7639B3@szxeml525-mbs.china.huawei.com> <40F3FB9C-CBCB-41ED-A1E7-FB99DB3A928D@asgaard.org> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE764022@szxeml525-mbs.china.huawei.com> <82D68180-B0A1-48B4-B1A5-5115D55AE2BD@asgaard.org>
Cc: Thomas Narten <narten@us.ibm.com>, Russ White <russw@riw.us>, "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] Elevator Pitch
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 07 Jan 2012 02:22:18 -0000


> -----Original Message-----
> From: Christopher LILJENSTOLPE [mailto:cdl@asgaard.org]
> Sent: January 7, 2012 1:20
> To: Xuxiaohu
> Cc: Thomas Narten; Russ White; dc@ietf.org
> Subject: Re: [dc] Elevator Pitch
> 
> Greetings - in-line,
> 
> On 03Jan2012, at 19.11, Xuxiaohu wrote:
> 
> >
> >> -----Original Message-----
> >> From: Christopher LILJENSTOLPE [mailto:cdl@asgaard.org]
> >> Sent: January 4, 2012 1:13
> >> To: Xuxiaohu
> >> Cc: Thomas Narten; Russ White; dc@ietf.org
> >> Subject: Re: [dc] Elevator Pitch
> >>
> >> Greetings,
> >>
> >>
> >> On 29Dec2011, at 20.01, Xuxiaohu wrote:
> >>
> >>>
> >>>> -----Original Message-----
> >>>> From: Christopher LILJENSTOLPE [mailto:cdl@asgaard.org]
> >>>> Sent: December 30, 2011 1:20
> >>>> To: Xuxiaohu
> >>>> Cc: Thomas Narten; Russ White; dc@ietf.org
> >>>> Subject: Re: [dc] Elevator Pitch
> >>>>
> >>>> Greetings Xuxiaohu,
> >>>>
> >>>> On 29Dec2011, at 00.55, Xuxiaohu wrote:
> >>>>
> >>>>> Hi Thomas,
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf
> >>>>>> Of Thomas Narten
> >>>>>> Sent: December 29, 2011 1:01
> >>>>>> To: Russ White
> >>>>>> Cc: dc@ietf.org
> >>>>>> Subject: Re: [dc] Elevator Pitch
> >>>>>>
> >> <snip>
> >>
> >>> Hi Chris,
> >>>
> >>> Would you please give a concrete example where the communication
> >> between different tenants is very common in the multi-tenant cloud
> >> data center?
> >>
> >> Here's the question - if you are a tenant in a data center, and you
> >> are writing a mash-up application against some other content
> >> provider, how do you know if they are in the same data center, or
> >> not?  My guess is that there is quite a bit of traffic between
> >> tenants of EC2, btw.  I know that that was the intent at my last
> >> gig - we wanted SaaS-like providers to live in our DC's and develop
> >> an eco-system around our core services.  Other major cross-dc
> >> traffic - how about all of my services like spam filtering, backup,
> >> etc.  In a DC, I may call them "core services" but they are, in
> >> fact, another tenant.  How about large scale content providers that
> >> mash-up between their own offerings.  Many of those properties are
> >> viewed as "separate customers" by the infrastructure teams (can't
> >> name names here).  Any inter-offering mash-ups would definitely be
> >> cross-dc.
> >
> > Hi Chris,
> >
> > I have no doubt about the possibility you mentioned above. However,
> > once you consider optimizing the forwarding path of inter-tenant
> > traffic within the scope of L2VPN or L3VPN solutions, while taking
> > address space overlap, firewall policy issues, etc. into account,
> > you will find it is a much more complex job. Whether or not that
> > optimization is worthwhile depends heavily on whether the volume of
> > inter-tenant traffic is significant.
> 
> This is one that, if we miss, will come back and bite us.  If you
> have no way of getting traffic between "tenants" in a data centre, I
> have NO idea how you will

Hi Chris,

I'm not saying traffic between "tenants" should not be allowed in a
data center. IMHO, the inter-tenant traffic originating from one
tenant could be forwarded towards its gateway, which will in turn
forward the traffic to the destination within another tenant. If you
want the ToRs (acting as ingress PE routers) to perform the
inter-tenant routing so as to achieve path optimization, there would
be a lot of hard problems to be considered: 1) the forwarding table
scalability issue will worsen; for example, in the IP over IP
solution, the VRF routing table size on the ToR will be increased
since the routes from other tenants (e.g., VRFs) would have to be
imported; in the MAC over IP solution, the
VG9SIHdvdWxkIGhhdmUgdG8gc3VwcG9ydCBJUkIgKGludGVncmF0ZWQgcm91dGluZyBhbmQgYnJp
ZGdpbmcpIGZ1bmN0aW9ucyB3aGljaCByZXF1aXJlZCB0aGUgVG9SIHRvIGhhdmUgYSBNQUMgdGFi
bGUgZm9yIGludHJhLXRlbmFudCB0cmFmZmljIGZvcndhcmRpbmcgYW5kIGEgcm91dGluZyB0YWJs
ZSBmb3IgaW50ZXItdGVuYW50IHRyYWZmaWMgZm9yd2FyZGluZywgdGhlIHJvdXRpbmcgdGFibGUg
c2l6ZSB3aWxsIGJlIG11Y2ggbGFyZ2VyIGR1ZSB0byB0aGUgc2FtZSByZWFzb24gYXMgYWJvdmUg
OyAyKSB0aGUgQUNMcyBmb3IgaW50ZXItdGVuYW50IGFjY2VzcyBjb250cm9sIHdvdWxkIGhhdmUg
dG8gYmUgY29uZmlndXJlZCBpZGVudGljYWxseSBvbiBhbGwgb2YgdGhlIFBFIHJvdXRlcnMgKGku
ZS4sIFRvUnMpIGJlbG9uZ2luZyB0byB0aGUgc2FtZSB0ZW5hbnQ7IDMpIGlmIHRoZSBhZGRyZXNz
IHNwYWNlcyBvZiBkaWZmZXJlbnQgdGVuYW50cyBhcmUgb3ZlcmxhcHBlZCwgeW91IHdvdWxkIGhh
dmUgdG8gY29uZmlndXJlIE5BVCBzZXJ2aWNlIG9uIGFsbCBvZiB0aGUgUEUgcm91dGVycyAoaS5l
LiwgVG9ScykgYmVsb25naW5nIHRvIHRoZSBzYW1lIHRlbmFudCwgaW4gYWRkaXRpb24sIHRoZXJl
IHdpbGwgYSBsb3Qgb2YgdW5jZXJ0YWluIGlzc3VlcyBuZWVkIHRvIGJlIGNvbnNpZGVyZWQuLi4g
SW4gYSB3b3JkLCBhbGwgb2YgdGhvc2Ugd2lsbCBtYWtlIHRoZSBwcm92aXNpb25pbmcgYW5kIG1h
bmFnZW1lbnQgbXVjaCBtb3JlIGNvbXBsaWNhdGVkLg0KDQpCZXN0IHJlZ2FyZHMsDQpYaWFvaHUg
IA0KDQo+IGRlYWwgd2l0aCBzaGFyZWQgc2VydmljZXMgKHN1Y2ggYXMgc3RvcmFnZSwgYmFja3Vw
LCB0cmFuc2l0LCBldGMpIGFzIHdlbGwgYXMgdGhlDQo+IGNsYXNzaWNhbCBpbnRlci10ZW5uYW50
IHByb2JsZW0uICBJZiB0aGUgcHJvcG9zYWwgaXMgdG8ganVzdCBzZW5kIGl0IHVwIHRoZSB0cmVl
DQo+IChhcyBpcyBkb25lIHRvZGF5KSwgeW91IHdpbGwgY29udGludWUgdG8gaGF2ZSB0aGUgc2Ft
ZSBpc3N1ZXMgYXMgd2UgZG8gbm93DQo+ICh0aGF0IGlzIGRyaXZpbmcgYWxsIHRoaXMgd29yayku
ICBJZiB5b3UgZG91YnQgdGhpcywgYXNrIHlvdXJzZWxmIHdoeSB0aGVyZSBpcyBhDQo+IGRyaXZl
IGZvciBncmVhdGVyIGNyb3NzLXNlY3Rpb25hbCBiYW5kd2lkdGggaW4gdGhlIGRhdGEgY2VudGVy
LiAgSWYgeW91IGJlbGlldmUNCj4gdGhhdCB0aGUgY2xhc3NpY2FsIHRyZWUgc29sdXRpb24gd2ls
bCB3b3JrLCB0aGVuIHdlIGFyZSBhbGwgY2hhc2luZyBvdXIgdGFpbCBoZXJlLg0KPiANCj4gCUNo
cmlzDQo+IA0KPiA+DQo+ID4gQmVzdCByZWdhcmRzLA0KPiA+IFhpYW9odQ0KPiA+DQo+ID4+PiBC
ZXN0IHJlZ2FyZHMsDQo+ID4+PiBYaWFvaHUNCj4gPj4+DQo+ID4+Pj4+DQo+ID4+Pj4+DQo+ID4+
Pj4+PiBPciB3aWxsIHRoZXkgd2FudCBhbiBhbHRlcm5hdGl2ZSBhcHByb2FjaD8NCj4gPj4+Pj4+
DQo+ID4+Pj4+Pj4gMi4gV2h5IGRvZXMgdGhpcyBtb2JpbGl0eSBuZWVkIHRvIGJlIGF0IGxheWVy
IDIgc3BlY2lmaWNhbGx5PyBBcmUgd2UNCj4gPj4+Pj4+PiBhc3N1bWluZyBERE5TIGFuZCBvdGhl
ciBzb3J0cyBvZiBzb2x1dGlvbnMgaW4gdGhpcyBzcGFjZSB3aWxsIHNpbXBseQ0KPiA+Pj4+Pj4+
IG5ldmVyIGJlIGZhc3QgZW5vdWdoL3NjYWxlIGZhciBlbm91Z2gvZXRjPw0KPiA+Pj4+Pj4NCj4g
Pj4+Pj4+IExpa2UgaXQgb3Igbm90LCB0aGUga2V5IHJlcXVpcmVtZW50IGZvciBWTSBtb2JpbGl0
eSBpcyB0aGF0IHRoZSBWTSdzDQo+ID4+Pj4+PiBJUCBhZGRyZXNzIGRvZXMgbm90IGNoYW5nZS4g
VGhhdCBtZWFucyB0aGUgVk0gY2FuJ3QgcmVhbGx5IG1vdmUgZnJvbQ0KPiA+Pj4+Pj4gb25lIElQ
IHN1Ym5ldCB0byBhbm90aGVyLiBUaGF0IG1lYW5zIGVpdGhlciBtb3ZpbmcgdG8gYmlnZ2VyIGFu
ZA0KPiA+Pj4+Pj4gYmlnZ2VyIEwycyAoYWxsIHVuZGVyIG9uZSBJUCBzdWJuZXQpIGFzIHRoZSBE
QyBleHBhbmRzIG9yIHRoZSBuZWVkIHRvDQo+ID4+Pj4+PiBpbmplY3QgLzMyIGhvc3Qgcm91dGVz
Lg0KPiA+Pj4+Pg0KPiA+Pj4+PiBJbiB0aGUgRENJIHNjZW5hcmlvIHdoZXJlIHRoZSBQRSByb3V0
ZXJzIGFyZSB1c3VhbGx5IHBlcmZvcm1lZCBhdCB0aGUNCj4gPj4+PiBhZ2dyZWdhdGlvbiBTV3Mg
b3IgZXZlbiBjb3JlIFNXcywgdGhlIFBFIHJvdXRlcnMgd291bGQgbmVlZCBhIG11Y2gNCj4gbGFy
Z2UNCj4gPj4+PiBmb3J3YXJkaW5nIHRhYmxlLiBQcm92aWRlZCB0aGUgcm91dGluZyB0YWJsZSBj
b250YWluaW5nIG1pbGxpb25zIG9mIGVudHJpZXMsDQo+ID4+Pj4gd2hpY2ggaXMgYXZhaWxhYmxl
IG9uIG1vc3QgdG9kYXkncyBoaWdoLWVuZCByb3V0ZXJzLCB3YXMgc3RpbGwgbm90IGxhcmdlDQo+
ID4+IGVub3VnaCwNCj4gPj4+PiB0aGUgb24tZGVtYW5kIEZJQiBpbnN0YWxsYXRpb24gb3Igb24t
ZGVtYW5kIHJvdXRlIGFubm91bmNlbWVudA0KPiA+Pj4+IG1lY2hhbmlzbXMgY2FuIGJlIHVzZWQg
ZnVydGhlciB0byBzY2FsZSB0aGUgc29sdXRpb24uIE5vdGUgdGhhdCB0aGUNCj4gdHJpZ2dlcg0K
PiA+PiBmb3INCj4gPj4+PiB0aGUgRklCIGluc3RhbGxhdGlvbiBvciByb3V0ZSBhbm5vdW5jZW1l
bnQgaXMgQVJQIHJlcXVlc3QgcGFja2V0cyByYXRoZXINCj4gPj4gdGhhbg0KPiA+Pj4+IGRhdGEg
cGFja2V0cy4gSGVuY2UgaXQgd2lsbCBub3QgY2F1c2UgdGhlIHNvLWNhbGxlZCBpbml0aWFsIHBh
Y2tldCBsb3NzIG9yDQo+ID4+IGxhdGVuY3kNCj4gPj4+PiBpc3N1ZS4NCj4gPj4+Pj4NCj4gPj4+
Pj4+IE5laXRoZXIgb2YgdGhvc2UgYXBwcm9hY2hlcyBzZWVtcyBwYXJ0aWN1bGFybHkgc2NhbGFi
bGUvZGVzaXJhYmxlIGlmDQo+ID4+Pj4+PiB5b3UgbG9vayAxMCB5ZWFycyBkb3duIHRoZSByb2Fk
IGFuZCB0aGluayBvZiAxTSsgcGh5c2ljYWwgbWFjaGluZXMgaW4NCj4gPj4+Pj4+IGEgREMuDQo+
ID4+Pj4+DQo+ID4+Pj4+IE1heWJlIHdlIHNob3VsZCBhbHNvIHRha2UgdGhlIGRldmVsb3BtZW50
IHNwZWVkIG9mIHJvdXRpbmcvc3dpdGNoaW5nDQo+ID4+IGNoaXANCj4gPj4+PiBhbmQgQ1BVIHRl
Y2hub2xvZ2llcyBpbnRvIGFjY291bnQ6KQ0KPiA+Pj4+DQo+ID4+Pj4gSXQncyBtb3JlIGEgcXVl
c3Rpb24gb2YgY29zdC9wZXJmb3JtYW5jZSBvbiBvZmYtY2hpcCBtZW1vcnkvVENBTXMuDQo+ID4+
IFRoYXQgaXMNCj4gPj4+PiBhIHNsaWdodGx5IGRpZmZlcmVudCBjdXJ2ZSA6KQ0KPiA+Pj4+DQo+
ID4+Pj4gCUNocmlzDQo+ID4+Pj4NCj4gPj4+Pj4NCj4gPj4+Pj4gQmVzdCByZWdhcmRzLA0KPiA+
Pj4+PiBYaWFvaHUNCj4gPj4+Pj4NCj4gPj4+Pj4+IFRob21hcw0KPiA+Pj4+Pj4NCj4gPj4+Pj4+
IF9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fDQo+ID4+Pj4+
PiBkYyBtYWlsaW5nIGxpc3QNCj4gPj4+Pj4+IGRjQGlldGYub3JnDQo+ID4+Pj4+PiBodHRwczov
L3d3dy5pZXRmLm9yZy9tYWlsbWFuL2xpc3RpbmZvL2RjDQo+ID4+Pj4+IF9fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fDQo+ID4+Pj4+IGRjIG1haWxpbmcgbGlz
dA0KPiA+Pj4+PiBkY0BpZXRmLm9yZw0KPiA+Pj4+PiBodHRwczovL3d3dy5pZXRmLm9yZy9tYWls
bWFuL2xpc3RpbmZvL2RjDQo+ID4+Pj4NCj4gPj4+PiAtLQ0KPiA+Pj4+IOadjuafr+edvw0KPiA+
Pj4+IENoZWNrIG15IFBHUCBrZXkgaGVyZTogaHR0cHM6Ly93d3cuYXNnYWFyZC5vcmcvfmNkbC9j
ZGwuYXNjDQo+ID4+Pj4gQ3VycmVudCB2Q2FyZCBoZXJlOiBodHRwczovL3d3dy5hc2dhYXJkLm9y
Zy9+Y2RsL2NkbC52Y2YNCj4gPj4+DQo+ID4+PiBfX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fXw0KPiA+Pj4gZGMgbWFpbGluZyBsaXN0DQo+ID4+PiBkY0BpZXRm
Lm9yZw0KPiA+Pj4gaHR0cHM6Ly93d3cuaWV0Zi5vcmcvbWFpbG1hbi9saXN0aW5mby9kYw0KPiA+
Pg0KPiA+PiAtLQ0KPiA+PiDmnY7mn6/nnb8NCj4gPj4gQ2hlY2sgbXkgUEdQIGtleSBoZXJlOiBo
dHRwczovL3d3dy5hc2dhYXJkLm9yZy9+Y2RsL2NkbC5hc2MNCj4gPj4gQ3VycmVudCB2Q2FyZCBo
ZXJlOiBodHRwczovL3d3dy5hc2dhYXJkLm9yZy9+Y2RsL2NkbC52Y2YNCj4gPg0KPiA+IF9fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fDQo+ID4gZGMgbWFpbGlu
ZyBsaXN0DQo+ID4gZGNAaWV0Zi5vcmcNCj4gPiBodHRwczovL3d3dy5pZXRmLm9yZy9tYWlsbWFu
L2xpc3RpbmZvL2RjDQo+IA0KPiAtLQ0KPiDmnY7mn6/nnb8NCj4gQ2hlY2sgbXkgUEdQIGtleSBo
ZXJlOiBodHRwczovL3d3dy5hc2dhYXJkLm9yZy9+Y2RsL2NkbC5hc2MNCj4gQ3VycmVudCB2Q2Fy
ZCBoZXJlOiBodHRwczovL3d3dy5hc2dhYXJkLm9yZy9+Y2RsL2NkbC52Y2YNCg0K
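[Editor's note: Xiaohu's first numbered concern above (VRF table growth when ToRs, acting as ingress PEs, import peer tenants' routes) can be put in rough numbers. The sketch below is a back-of-the-envelope model; the tenant and route counts are illustrative assumptions, not figures from this thread.]

```python
# Back-of-the-envelope model of per-tenant VRF size on a ToR when the
# ToR performs inter-tenant routing (importing peer VRFs' routes),
# versus the gateway model (only the tenant's own routes are held).
# All counts are illustrative assumptions.

def tor_vrf_routes(own_routes, peer_tenants, routes_per_peer, import_peers):
    """Routes one tenant's VRF must hold on a ToR."""
    total = own_routes
    if import_peers:
        # Inter-tenant routing at the ToR: peer VRFs' routes are imported.
        total += peer_tenants * routes_per_peer
    return total

gateway_model = tor_vrf_routes(2000, 50, 2000, import_peers=False)
optimized_model = tor_vrf_routes(2000, 50, 2000, import_peers=True)

print(gateway_model)    # 2000 routes
print(optimized_model)  # 102000 routes
```

With 50 peer VRFs of 2000 routes each, the table grows roughly 50-fold, which is the scalability worry stated above.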

From adalela@cisco.com  Fri Jan  6 20:07:49 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 437C421F8680 for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 20:07:49 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.437
X-Spam-Level: 
X-Spam-Status: No, score=-2.437 tagged_above=-999 required=5 tests=[AWL=0.162,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 7XpRAgxJGsg3 for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 20:07:48 -0800 (PST)
Received: from bgl-iport-2.cisco.com (bgl-iport-2.cisco.com [72.163.197.26]) by ietfa.amsl.com (Postfix) with ESMTP id DB03621F8681 for <dc@ietf.org>; Fri,  6 Jan 2012 20:07:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=2024; q=dns/txt; s=iport; t=1325909268; x=1327118868; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=rhYizjwu2t/7VqQGVS/laO8CQLeRqqLCspG00X9wwgI=; b=iCXhenieeD+rTu3Y2lhDe4sF1a1CCe8DUf7Zsyuo7soAqzuLbktnRVKU Cjmd2TPrfGPmI4fgVUuNtAqOM0X5ktpULubSHtKShibGwjr+RrMbAXe/T OKco2eKdLKe1z1lj7lC0y2sH0wOt4TtmbU7Vyj4EDB4m8kDOq3PgOv1lq Y=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: Ap8EAIzEB09Io8UY/2dsb2JhbABErUOBcgEBAQQBAQEPAR0KNAIJDAQCAQgRBAEBCwYXAQYBJh8JCAEBBAEKCAgah2CXdgGeCQSLLmMEiDefEA
X-IronPort-AV: E=Sophos;i="4.71,472,1320624000";  d="scan'208";a="2944901"
Received: from vla196-nat.cisco.com (HELO bgl-core-1.cisco.com) ([72.163.197.24]) by bgl-iport-2.cisco.com with ESMTP; 07 Jan 2012 04:07:46 +0000
Received: from xbh-bgl-412.cisco.com (xbh-bgl-412.cisco.com [72.163.129.202]) by bgl-core-1.cisco.com (8.14.3/8.14.3) with ESMTP id q0747kVp008758; Sat, 7 Jan 2012 04:07:46 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-412.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Sat, 7 Jan 2012 09:37:45 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Sat, 7 Jan 2012 09:37:42 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B25F2B@XMB-BGL-416.cisco.com>
In-Reply-To: <4A95BA014132FF49AE685FAB4B9F17F62A4E6640@dfweml505-mbx>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] DCREQ Section 4 Outline
Thread-Index: AczMwzqSmKDf1zT6SlWzZwPqD8IVPQALcU3g
References: <4A95BA014132FF49AE685FAB4B9F17F62A4E6640@dfweml505-mbx>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Linda Dunbar" <linda.dunbar@huawei.com>, <dc@ietf.org>
X-OriginalArrivalTime: 07 Jan 2012 04:07:45.0965 (UTC) FILETIME=[E9443DD0:01CCCCF1]
Cc: Ronald Bonica <rbonica@juniper.net>
Subject: Re: [dc] DCREQ Section 4 Outline
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 07 Jan 2012 04:07:49 -0000

Linda, Please see inline. Thanks, Ashish


-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
Linda Dunbar
Sent: Saturday, January 07, 2012 4:04 AM
To: dc@ietf.org
Cc: Ronald Bonica
Subject: [dc] DCREQ Section 4 Outline


Section 4: "Data Center Characteristics" should at least add the
following sub-sections:
	- Types of data centers: homogeneous content provider data
centers, multi-tenant data centers (small/medium-sized data centers and
mega-sized data centers), network service providers' data centers, etc.


[AD] What is the difference between a multi-tenant DC and an NSP's DC?
How about including enterprise datacenters as part of hybrid scenarios?

	Each of them has different characteristics. E.g. a homogeneous
content provider data center might not address fragmentation issues, but
may have other issues.

[AD] Do web/content providers need to have VMs? If not, how does the
scale and mobility problem change for them?

	-  A section on how location of load-balancing, firewall, and
other middleware boxes determine where VMs/hosts in different segments
can be aggregated/exchanged.

[AD] You are talking about a L2-L3 boundary? Or something else? What
happens when the firewall is in the Hypervisor?

	- A section on how applications are instantiated in the data center.
Application servers (e.g. Oracle's middleware WebLogic Server)
instantiate multiple instances of one application, assign them IP
addresses, and VM/server managers place them onto a server rack.

[AD] How does that affect what problems we are discussing? E.g. isn't
this a form of orchestration?

	- A section to describe how backend data/storage are separated
from the front-end service networks.

[AD] Isn't this adequately covered by the "Data is Immobile" section?
What additions do you propose?

Linda
_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From adalela@cisco.com  Fri Jan  6 20:25:38 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 9435421F86EC for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 20:25:38 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.441
X-Spam-Level: 
X-Spam-Status: No, score=-2.441 tagged_above=-999 required=5 tests=[AWL=0.158,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id c1Ox-G-4ybQW for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 20:25:37 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id 6D0DB21F8688 for <dc@ietf.org>; Fri,  6 Jan 2012 20:25:37 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=3051; q=dns/txt; s=iport; t=1325910337; x=1327119937; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=on89DV3TYeaq8O3MU11l9Ii1GkNQowQyStB+8DxW2lw=; b=NSBw6LZX+y2nbJ5+fHeVFVEkKFBh1Npd+94CSws9VOoFnj4DkTSigcWR Xa2ZOZNVR2Aq2WMPmzMU/nPnjRWLxzawzB05JjEj8uWuTqzhk17Bh7dw5 knWjj2ataqldNaNLDCTm4IaQTwvWUjp7tVYSekVQL1fpTo9mV31F8Ddib 0=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: Ap8EAMrIB09Io8UY/2dsb2JhbABErUOBcgEBAQQBAQEPAR0KNAsMBAIBCBEEAQELBhcBBgEmHwkIAQEEAQoICBqHYJd0AZ4GBIsuYwSIN58Q
X-IronPort-AV: E=Sophos;i="4.71,472,1320624000";  d="scan'208";a="2950967"
Received: from vla196-nat.cisco.com (HELO bgl-core-1.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 07 Jan 2012 04:25:35 +0000
Received: from xbh-bgl-412.cisco.com (xbh-bgl-412.cisco.com [72.163.129.202]) by bgl-core-1.cisco.com (8.14.3/8.14.3) with ESMTP id q074PZx8011629; Sat, 7 Jan 2012 04:25:35 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-412.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Sat, 7 Jan 2012 09:55:35 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Sat, 7 Jan 2012 09:55:33 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B25F2C@XMB-BGL-416.cisco.com>
In-Reply-To: <4A95BA014132FF49AE685FAB4B9F17F62A4E6655@dfweml505-mbx>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] DCREQ Section 5 Outline
Thread-Index: AczMw88oqJdSU2a5TiWfYpubu8yM3gALiFJA
References: <4A95BA014132FF49AE685FAB4B9F17F62A4E6655@dfweml505-mbx>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Linda Dunbar" <linda.dunbar@huawei.com>, <dc@ietf.org>
X-OriginalArrivalTime: 07 Jan 2012 04:25:35.0702 (UTC) FILETIME=[66E12760:01CCCCF4]
Cc: Ronald Bonica <rbonica@juniper.net>
Subject: Re: [dc] DCREQ Section 5 Outline
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 07 Jan 2012 04:25:38 -0000

Hi Linda, Please see inline. Thanks, Ashish

-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
Linda Dunbar
Sent: Saturday, January 07, 2012 4:08 AM
To: dc@ietf.org
Cc: Ronald Bonica
Subject: [dc] DCREQ Section 5 Outline

Section 5 of draft-dalela-dc-requirements needs a lot of changes.

	- Many statements in Section 5.1 "Basic Forwarding Problem" are
not accurate. E.g. "the need to massively scale virtualized hosts" may
break the FDB (or FIB) on switches/routers, but it doesn't really break
the L2/L3 technologies traditionally used in the data center.

The claim that L3 networks can't support host mobility is not true
either. Host-based routing can support mobility.

[AD] Ok, there is no debate here. It's a matter of rephrasing.
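[Editor's note: as a minimal sketch of the host-based-routing point above (hypothetical table and names, not any vendor's RIB): the VM keeps its address across a move; only the /32's attachment point changes.]

```python
# Host-route view of VM mobility: the VM's /32 is constant; a move only
# rewrites where the route points. Prefixes and ToR names are made up.
host_routes = {"10.0.0.5/32": "tor-rack1"}

def move_vm(routes, prefix, new_attachment):
    """Re-announce the /32 from the new ToR; the IP does not change."""
    routes[prefix] = new_attachment
    return routes

move_vm(host_routes, "10.0.0.5/32", "tor-rack7")
print(host_routes["10.0.0.5/32"])  # tor-rack7
```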
	- Section 5.1 is really about "basic forwarding for segmented
networks"

[AD] That wasn't obvious to me. The problems will, I think, hold even
if there is no segmentation, e.g. if we had a massive L2 domain or a
large L3 network with host routes (no segmentation).
	- Section 5.2 (DC Inter-connectivity problem) is loaded. Should
be broken into multiple (sub)sections:

		- Problems associated with various networks in
interconnecting data centers (VPN, straight IP, or straight optical)

[AD] How are the cases you described above different from each other?
Isn't optical transport transparent to MAC/IP layers and above?

		- Multi-data center service interconnection, Layer 4-7
load balancing issues: i.e. the description of how a service needs to be
closer to clients, mapping services to the location of data centers, etc.

[AD] Isn't the "service" here a DNS name mapped to an IP address? So,
eventually, to know the location of a service, we need to know the
location of the IP? I'm trying to understand what we are saying beyond
"workload may be moving between datacenters".

		- Middleware function association in Multi-data center
scenario

[AD] Can you elaborate? Isn't this about an application / middleware
being able to do broadcast/multicast discovery or some other such
mechanism? In which case, will the issue be that datacenters must
support multicast and broadcast?

	- Section 5.3 "Multi-tenancy Problem" should at least include
issues associated with double VLAN tagging, non-VLAN partitioning (e.g.
IEEE 802.1ah's I-SID), etc.

[AD] Multiple approaches to address each of these problems are already
mentioned in the "approaches" draft. Would we benefit from duplicating
and/or moving the information from that place to here? The concern might
be that we analyzed double VLAN but ignored GRE. But, I agree that it
has to be somewhere, just not sure if we need to put it here.

	- New sub-section in Section 5 on traffic patterns dictated by
policy and by middleware box functions and locations.

[AD] This wasn't clear to me. Can you elaborate?

Linda Dunbar

_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From adalela@cisco.com  Fri Jan  6 21:38:53 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id CF37421F856F for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 21:38:52 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.445
X-Spam-Level: 
X-Spam-Status: No, score=-2.445 tagged_above=-999 required=5 tests=[AWL=0.154,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id Hgyx2t9njzkL for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 21:38:51 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id 853ED21F8574 for <dc@ietf.org>; Fri,  6 Jan 2012 21:38:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=4145; q=dns/txt; s=iport; t=1325914727; x=1327124327; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=w4QODAabn9Hsemvzc7I5yXbqMHVrFDvW1+fvHhTa8rA=; b=dhgXB/E9xvssfNhmHHdz3KtueSezDgktB5LPguioVldwYLit8WNAr9ew AYHWRjy/qjkkSXsEQEQNBtxPhswOkTQjAV+iEp/+jo35lvjDNuonA+WV7 xcu98eeb8L6BX8kL6dLCes+5qjHqa6PW8RvmhyJxemwqyzbjMgJ0afil0 4=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AqAEACXZB09Io8UY/2dsb2JhbAA6Cq1EgXIBAQEDAQEBAQ8BHQo0CwUHBAIBCBEEAQELBhcBBgEmHwkIAQEEAQoICBECB4dYCJdvAZ4GBIhWglhjBIg3nxA
X-IronPort-AV: E=Sophos;i="4.71,472,1320624000";  d="scan'208";a="2952394"
Received: from vla196-nat.cisco.com (HELO bgl-core-4.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 07 Jan 2012 05:38:40 +0000
Received: from xbh-bgl-412.cisco.com (xbh-bgl-412.cisco.com [72.163.129.202]) by bgl-core-4.cisco.com (8.14.3/8.14.3) with ESMTP id q075ceoW021537; Sat, 7 Jan 2012 05:38:40 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-412.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Sat, 7 Jan 2012 11:08:40 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Sat, 7 Jan 2012 11:08:37 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B25F2F@XMB-BGL-416.cisco.com>
In-Reply-To: <201201062305.q06N5i8o008460@cichlid.raleigh.ibm.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] DC Work Plan
Thread-Index: AczMx8Uehz9tTZwlQBi1/eQTndZ9sgAMhyjQ
References: <13205C286662DE4387D9AF3AC30EF456D74EEEEF99@EMBX01-WF.jnpr.net><4F07488E.2070103@raszuk.net><13205C286662DE4387D9AF3AC30EF456D74EEEF153@EMBX01-WF.jnpr.net><201201062201.q06M0xkO008099@cichlid.raleigh.ibm.com><4F07797A.3090907@cisco.com> <201201062305.q06N5i8o008460@cichlid.raleigh.ibm.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Thomas Narten" <narten@us.ibm.com>, "Stewart Bryant (stbryant)" <stbryant@cisco.com>
X-OriginalArrivalTime: 07 Jan 2012 05:38:40.0374 (UTC) FILETIME=[9C590160:01CCCCFE]
Cc: Ronald Bonica <rbonica@juniper.net>, dc@ietf.org, robert@raszuk.net
Subject: Re: [dc] DC Work Plan
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 07 Jan 2012 05:38:53 -0000

Thomas,

>> I think it's much more helpful (and IETF tradition!) to try and
identify individual problems and where possible, solve problems
individually (i.e., divide and conquer).

It is indeed IETF tradition to solve individual problems individually.
What makes it different in this case is that a customer spans multiple
datacenters. That means VM mobility, segmentation, broadcast, multicast,
unicast, and policy (e.g. bandwidth) span both intra-DC and
inter-DC.

If we follow the IETF tradition here, it will mean that mobility,
segmentation, broadcast, multicast, policy will potentially all be done
in two sets of ways across and between datacenters, because these can
indeed be stated as separate problems. Now, you have a huge issue in
mapping one type of scheme into another, given the many possible ways to
do each of mobility, segmentation, broadcast, multicast, policy,
themselves.

Added to the above is the fact that there are divergent approaches - L2
vs. L3, host-based vs. network-based, overlay vs. flat. Just to solve
address resolution, there is more than one approach - e.g. directory
based or control plane based.

The total number of permutations and combinations arising from these
many alternatives will eventually mean that things don't work together.
If intra-dc is directory based, then it doesn't help inter-dc to be
control plane based. If intra-dc uses VLAN for segmentation then it
doesn't help inter-dc to use GRE. If the unicast is multi-path then it
doesn't help if multicast and broadcast are static trees. If one DC is
L2 encap then it doesn't help if another DC is L3 encap. Note that this
is already discounting the fact that this has to interoperate with
classic L2/L3 networks. That may mean mapping an L3 OSPF routing table
entry into an L2 IS-IS forwarding entry. Proliferation of divergent
approaches just worsens interoperability.
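[Editor's note: the interoperability blow-up described above can be made concrete with a toy count. The mechanisms and the two alternatives listed for each are illustrative assumptions drawn loosely from this message, not an exhaustive taxonomy.]

```python
# Toy count of design combinations when each mechanism can be realized
# in more than one way. With two alternatives for each of four
# mechanisms there are 2**4 single-DC designs, and every intra-DC
# design may have to interwork with every inter-DC design.
from itertools import product

alternatives = {
    "mobility": ["big L2 domain", "/32 host routes"],
    "segmentation": ["VLAN", "GRE"],
    "address resolution": ["directory-based", "control-plane-based"],
    "encapsulation": ["L2 encap", "L3 encap"],
}

designs = list(product(*alternatives.values()))
pairings = len(designs) * len(designs)

print(len(designs))  # 16 single-DC designs
print(pairings)      # 256 intra-DC/inter-DC pairings to interwork
```

Even at this toy scale, the number of cross-scheme mappings grows quadratically in the number of designs, which is the point being made about divergent approaches.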

I understand your concern, and that this is different from IETF
tradition. But we need to choose between tradition and interoperability.

Thanks, Ashish


-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
Thomas Narten
Sent: Saturday, January 07, 2012 4:36 AM
To: Stewart Bryant (stbryant)
Cc: Ronald Bonica; robert@raszuk.net; dc@ietf.org
Subject: Re: [dc] DC Work Plan

Hi Stewart.

> I think that SDN has wider applicability than DC and thus has a
> life (or death) of its own.

It's not just that it has wider applicability, it's that that effort
seems somewhat self-contained and I can see how one might carve out an
effort in this space.

> NVO3 is a candidate solution to the DC problem,

This is probably just quick typing, but I think it is worth saying
that we are not helping ourselves by thinking there is *one* DC
problem.

There are a number of possible problems. Some relate to each other,
some more so, some less so. But I think it is not helpful to try and
view this area as having *one* problem that needs sorting out.

I think it's much more helpful (and IETF tradition!) to try and
identify individual problems and where possible, solve problems
individually (i.e., divide and conquer). Of course, there can be
interdependencies between problem and solution spaces, but we are
likely going to flail if we try to view this as one big problem
needing one overall solution approach. Or that we can't work on
sub-problems unless we understand the entire problem space.

> but from the discussion on the list I am yet to be convinced that we
> have either the right problem statement to endorse it as the
> approach for L2, let alone to determine whether we need a L2
> solution, a L3 solution or a mixed solution.

NVO3 is aimed at a subset of the DC "problem area". The bullet-point
summary is that it's a different way of providing multi-tenancy in the
DC. It is not the *only* way. But I think the Taipei session showed
there was significant support for this approach.

Thomas

_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From xuxiaohu@huawei.com  Fri Jan  6 23:52:22 2012
Return-Path: <xuxiaohu@huawei.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 513F221F8550 for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 23:52:22 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.175
X-Spam-Level: 
X-Spam-Status: No, score=-2.175 tagged_above=-999 required=5 tests=[AWL=-0.718, BAYES_00=-2.599, CN_BODY_35=0.339, J_CHICKENPOX_13=0.6, MIME_BASE64_TEXT=1.753, MIME_CHARSET_FARAWAY=2.45, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 4Xa56PhX4Emb for <dc@ietfa.amsl.com>; Fri,  6 Jan 2012 23:52:21 -0800 (PST)
Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [119.145.14.64]) by ietfa.amsl.com (Postfix) with ESMTP id 1A1B621F854E for <dc@ietf.org>; Fri,  6 Jan 2012 23:52:21 -0800 (PST)
Received: from huawei.com (szxga05-in [172.24.2.49]) by szxga05-in.huawei.com (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LXF00DU94IYXF@szxga05-in.huawei.com> for dc@ietf.org; Sat, 07 Jan 2012 15:52:10 +0800 (CST)
Received: from szxrg02-dlp.huawei.com ([172.24.2.119]) by szxga05-in.huawei.com (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LXF00EOS4IYOJ@szxga05-in.huawei.com> for dc@ietf.org; Sat, 07 Jan 2012 15:52:10 +0800 (CST)
Received: from szxeml207-edg.china.huawei.com ([172.24.2.119]) by szxrg02-dlp.huawei.com (MOS 4.1.9-GA)	with ESMTP id AGE53921; Sat, 07 Jan 2012 15:52:09 +0800
Received: from SZXEML414-HUB.china.huawei.com (10.82.67.153) by szxeml207-edg.china.huawei.com (172.24.2.59) with Microsoft SMTP Server (TLS) id 14.1.323.3; Sat, 07 Jan 2012 15:52:05 +0800
Received: from SZXEML525-MBS.china.huawei.com ([169.254.8.55]) by SZXEML414-HUB.china.huawei.com ([10.82.67.153]) with mapi id 14.01.0323.003; Sat, 07 Jan 2012 15:52:01 +0800
Date: Sat, 07 Jan 2012 07:52:00 +0000
From: Xuxiaohu <xuxiaohu@huawei.com>
In-reply-to: <618BE8B40039924EB9AED233D4A09C5102B25F2F@XMB-BGL-416.cisco.com>
X-Originating-IP: [10.108.4.80]
To: "Ashish Dalela (adalela)" <adalela@cisco.com>, Thomas Narten <narten@us.ibm.com>, "Stewart Bryant (stbryant)" <stbryant@cisco.com>
Message-id: <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE765D9E@szxeml525-mbs.china.huawei.com>
MIME-version: 1.0
Content-type: text/plain; charset=gb2312
Content-language: zh-CN
Content-transfer-encoding: base64
Accept-Language: zh-CN, en-US
Thread-topic: [dc] DC Work Plan
Thread-index: AczMpBVlzok2urS5RMGeXcLx/H8AYf//gRwAgAAN8wCAACACgIAADF0AgAAFuYCAAG3GgP//XTcg
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
X-CFilter-Loop: Reflected
References: <13205C286662DE4387D9AF3AC30EF456D74EEEEF99@EMBX01-WF.jnpr.net> <4F07488E.2070103@raszuk.net> <13205C286662DE4387D9AF3AC30EF456D74EEEF153@EMBX01-WF.jnpr.net> <201201062201.q06M0xkO008099@cichlid.raleigh.ibm.com> <4F07797A.3090907@cisco.com> <201201062305.q06N5i8o008460@cichlid.raleigh.ibm.com> <618BE8B40039924EB9AED233D4A09C5102B25F2F@XMB-BGL-416.cisco.com>
Cc: Ronald Bonica <rbonica@juniper.net>, "robert@raszuk.net" <robert@raszuk.net>, "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] DC Work Plan
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 07 Jan 2012 07:52:22 -0000

> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Ashish
> Dalela (adalela)
> Sent: January 7, 2012 13:39
> To: Thomas Narten; Stewart Bryant (stbryant)
> Cc: Ronald Bonica; dc@ietf.org; robert@raszuk.net
> Subject: Re: [dc] DC Work Plan
> 
> 
> Thomas,
> 
> >> I think it's much more helpful (and IETF tradition!) to try and
> identify individual problems and where possible, solve problems
> individually (i.e., divide and conquer).
> 
> It is indeed IETF tradition to solve individual problems individually.
> What makes it different in this case is that a customer spans multiple
> datacenters. That means VM mobility, segmentation, broadcast, multicast,
> unicast, policy (e.g. bandwidth) are spanning across both intra and
> inter-DC.

Hi Ashish,

I agree that it is not a very efficient way to solve these problems
individually. Taking the spanning-tree protocol as an example, it has
several problems, including slow convergence, sub-optimal bandwidth
utilization due to the lack of ECMP and shortest-path forwarding, the
4k VLAN limit, and so on. It is not reasonable to set up individual WGs
to solve these problems one by one. In fact, L2VPN and L3VPN
technologies could already solve most of the problems mentioned above.
It would be efficient for us to evaluate which problems these existing
VPN technologies already address and which they do not. In this way, we
can quickly identify what the IETF needs to do.

However, I don't agree that the reason for comprehensively considering
all of the problems is the one you described. In contrast, I believe
there are some obvious differences between DCI requirements and DCN
requirements. Taking the L2VPN solution (e.g., VPLS) as an example, its
associated features such as the full mesh of PWs, ingress replication,
and peer configuration are not big concerns when considering VPLS as a
DCI solution, since there would not be too many DC sites to connect
together. However, when you consider deploying VPLS within a data
center, especially performing PE functions at hundreds or even
thousands of ToR switches, the above features become much more serious
problems. In addition, path optimization for both VPN access and
Internet access is a very important requirement that is specific to
DCI rather than to DCN, IMHO. In a word, I suggest it'd be better not
to completely mix the requirements for DCN with those for DCI, although
there exist some overlapping parts between them.

Best regards,
Xiaohu

> If we follow the IETF tradition here, it will mean that mobility,
> segmentation, broadcast, multicast, policy will potentially all be done
> in two sets of ways across and between datacenters, because these can
> indeed be stated as separate problems. Now, you have a huge issue in
> mapping one type of scheme into another, given the many possible ways to
> do each of mobility, segmentation, broadcast, multicast, policy,
> themselves.
> 
> Added to the above is the fact that there are divergent approaches - L2
> vs. L3, host-based vs. network-based, overlay vs. flat. Just to solve
> address resolution, there is more than one approach - e.g. directory
> based or control plane based.
> 
> The total number of permutations and combinations arising from these
> many alternatives will eventually mean that things don't work together.
> 
> If intra-dc is directory based, then it doesn't help inter-dc to be
> control plane based. If intra-dc uses VLAN for segmentation then it
> doesn't help inter-dc to use GRE. If the unicast is multi-path then it
> doesn't help if multicast and broadcast are static trees. If one DC is
> L2 encap then it doesn't help if another DC is L3 encap. Note that this
> is already discounting the fact that this has to interoperate with
> classic L2/L3 networks. That may mean a L3 OSPF routing table entry into
> a L2 IS-IS forwarding entry. Proliferation of divergent approaches just
> worsens interoperability.
> 
> I understand your concern, and that it is different than IETF tradition.
> But we need to choose between tradition and interoperability.
> 
> Thanks, Ashish
> 
> 
> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> Thomas Narten
> Sent: Saturday, January 07, 2012 4:36 AM
> To: Stewart Bryant (stbryant)
> Cc: Ronald Bonica; robert@raszuk.net; dc@ietf.org
> Subject: Re: [dc] DC Work Plan
> 
> Hi Stewart.
> 
> > I think that SDN has wider applicability than DC and thus has a
> > life (or death) of its own.
> 
> It's not just that it has wider applicability, it's that that effort
> seems somewhat self-contained and I can see how one might carve out an
> effort in this space.
> 
> > NVO3 is a candidate solution to the DC problem,
> 
> This is probably just quick typing, but I think it is worth saying
> that we are not helping ourselves by thinking there is *one* DC
> problem.
> 
> There are a number of possible problems. Some relate to each other,
> some more so, some less so. But I think it is not helpful to try and
> view this area as having *one* problem that needs sorting out.
> 
> I think it's much more helpful (and IETF tradition!) to try and
> identify individual problems and where possible, solve problems
> individually (i.e., divide and conquer). Of course, there can be
> interdependencies between problem and solution spaces, but we are
> likely going to flail if we try to view this as one big problem
> needing one overall solution approach. Or that we can't work on sub
> problems unless we understand the entire problem space.
> 
> > but from the discussion on the list I am yet to be convinced that we
> > have either the right problem statement to endorse it as the
> > approach for L2, let alone to determine whether we need a L2
> > solution, a L3 solution or a mixed solution.
> 
> NVO3 is aimed at a subset of the DC "problem area". The bullet point
> summary is that it's a different way of providing multi-tenancy in the
> DC. It is not the *only* way. But I think the Taipei session showed
> there was significant support for this approach.
> 
> Thomas
> 
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc

From narten@us.ibm.com  Sat Jan  7 05:35:11 2012
Return-Path: <narten@us.ibm.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id C70C921F84B4 for <dc@ietfa.amsl.com>; Sat,  7 Jan 2012 05:35:11 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.051
X-Spam-Level: 
X-Spam-Status: No, score=-106.051 tagged_above=-999 required=5 tests=[AWL=-0.052, BAYES_00=-2.599, J_CHICKENPOX_13=0.6, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id gNNnPFCr3cTk for <dc@ietfa.amsl.com>; Sat,  7 Jan 2012 05:35:11 -0800 (PST)
Received: from e3.ny.us.ibm.com (e3.ny.us.ibm.com [32.97.182.143]) by ietfa.amsl.com (Postfix) with ESMTP id C5A2A21F84B2 for <dc@ietf.org>; Sat,  7 Jan 2012 05:35:10 -0800 (PST)
Received: from /spool/local by e3.ny.us.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for <dc@ietf.org> from <narten@us.ibm.com>; Sat, 7 Jan 2012 08:35:08 -0500
Received: from d01relay04.pok.ibm.com (9.56.227.236) by e3.ny.us.ibm.com (192.168.1.103) with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted;  Sat, 7 Jan 2012 08:34:28 -0500
Received: from d01av03.pok.ibm.com (d01av03.pok.ibm.com [9.56.224.217]) by d01relay04.pok.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id q07DYSb7344192 for <dc@ietf.org>; Sat, 7 Jan 2012 08:34:28 -0500
Received: from d01av03.pok.ibm.com (loopback [127.0.0.1]) by d01av03.pok.ibm.com (8.14.4/8.13.1/NCO v10.0 AVout) with ESMTP id q07DYRFj022226 for <dc@ietf.org>; Sat, 7 Jan 2012 11:34:28 -0200
Received: from cichlid.raleigh.ibm.com (sig-9-48-47-78.mts.ibm.com [9.48.47.78]) by d01av03.pok.ibm.com (8.14.4/8.13.1/NCO v10.0 AVin) with ESMTP id q07DYRcQ022205 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Sat, 7 Jan 2012 11:34:27 -0200
Received: from cichlid.raleigh.ibm.com (localhost [127.0.0.1]) by cichlid.raleigh.ibm.com (8.14.5/8.12.5) with ESMTP id q07DYPIb004819; Sat, 7 Jan 2012 08:34:25 -0500
Message-Id: <201201071334.q07DYPIb004819@cichlid.raleigh.ibm.com>
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
In-reply-to: <618BE8B40039924EB9AED233D4A09C5102B25F2F@XMB-BGL-416.cisco.com>
References: <13205C286662DE4387D9AF3AC30EF456D74EEEEF99@EMBX01-WF.jnpr.net><4F07488E.2070103@raszuk.net><13205C286662DE4387D9AF3AC30EF456D74EEEF153@EMBX01-WF.jnpr.net><201201062201.q06M0xkO008099@cichlid.raleigh.ibm.com><4F07797A.3090907@cisco.com> <201201062305.q06N5i8o008460@cichlid.raleigh.ibm.com> <618BE8B40039924EB9AED233D4A09C5102B25F2F@XMB-BGL-416.cisco.com>
Comments: In-reply-to "Ashish Dalela (adalela)" <adalela@cisco.com> message dated "Sat, 07 Jan 2012 11:08:37 +0530."
Date: Sat, 07 Jan 2012 08:34:25 -0500
From: Thomas Narten <narten@us.ibm.com>
x-cbid: 12010713-8974-0000-0000-000004F99AC2
Cc: Ronald Bonica <rbonica@juniper.net>, dc@ietf.org, robert@raszuk.net, "Stewart Bryant \(stbryant\)" <stbryant@cisco.com>
Subject: Re: [dc] DC Work Plan
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 07 Jan 2012 13:35:11 -0000

Hi Ashish.

> >> I think it's much more helpful (and IETF tradition!) to try and
> identify individual problems and where possible, solve problems
> individually (i.e., divide and conquer).

> It is indeed IETF tradition to solve individual problems individually.
> What makes it different in this case is that a customer spans multiple
> datacenters. That means VM mobility, segmentation, broadcast, multicast,
> unicast, policy (e.g. bandwidth) are spanning across both intra and
> inter-DC.

Sorry, but I do not believe that things are different in this
case. The IETF always grapples with the tension between problem areas
whose impacts potentially span everything and scoping a problem down
into one or more manageable units, where it can be worked on in a more
standalone fashion rather than trying to boil the ocean.

Indeed, any large project works that way. You have to have an overall
architecture/vision of how things fit together, and then individual
components that fit into the overall architecture.

I have not seen evidence that the current "DC" problem area is any
different.

Indeed, I believe that the reason why the various "problem statements"
have gotten so little traction is because they come across as too big,
too high-level and aren't scoped into small enough pieces (or
sub-pieces) so that folk can look at a particular part (or sub-part)
and say "yes, I understand what is being proposed, I agree that this
is a problem that would be useful to solve and I think that we can
solve it".

> If we follow the IETF tradition here, it will mean that mobility,
> segmentation, broadcast, multicast, policy will potentially all be done
> in two sets of ways across and between datacenters, because these can
> indeed be stated as separate problems.

They already are being done in different ways, to some extent, and
this is for reasons well outside of the lack of "one unifying
standard". E.g., some DCs are L2 based. Some are more L3 based. That
is not going to change. There are valid (local) reasons for choosing a
particular approach in a given DC. Likewise, there are multiple ways
of gluing DCs together across the WAN (L2VPN, L3VPN, IPsec,
etc.). That is not going to change anytime soon either.

> Now, you have a huge issue in mapping one type of scheme into
> another, given the many possible ways to do each of mobility,
> segmentation, broadcast, multicast, policy, themselves.

The above is a great example of how the "problem space" we are talking
about sounds (to me) like boiling the ocean.

Do you believe that what this group needs to do is come up with one
set of protocols/standards for doing all of the above that can be used
everywhere, so that one standard is used everywhere across the
Internet?

> Added to the above is the fact that there are divergent approaches - L2
> vs. L3, host-based vs. network-based, overlay vs. flat. Just to solve
> address resolution, there is more than one approach - e.g. directory
> based or control plane based.

Are you saying you think this group can and should come up with a
one-size-fits all approach that applies everywhere, including data
centers and across the WAN?

> The total number of permutations and combinations arising from these
> many alternatives will eventually mean that things don't work
> together.

We are dreaming if we think we will not have permutations and
variations. They exist already today for local business and other
reasons and will not magically disappear anytime soon. And having lots
of permutations is not necessarily a problem if the interfaces between
components (e.g., like between the DC and the WAN) are clean and well
defined, so that each "permutation" fits in with the existing model,
so that interfacing between the two systems is straightforward and
routine.

As one example, today many DCs use VLANs for multi-tenancy internally,
but use VPLS when going across the WAN. At the boundary between VLANs
and L2VPN, you have to map VLAN identifiers into the VPLS
equivalent. This is the well-defined interface between the two
systems.

If instead of VLANs, the DC used something like NVO3/VXLAN/NVGRE,
you'd have to do the same: map from the NVO3/VXLAN/NVGRE tenant
identifier into the corresponding L2VPN identifier. Have we added some
new complexity here? Not really. We've just added a new mapping, that
is conceptually the same as the one already being used today with
VLANs. We have not made the "mapping problem" any more complex than it
already was.
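The mapping Thomas describes can be sketched in a few lines. This is a
hypothetical Python illustration, not any spec's API: the identifiers,
instance names, and the `make_boundary_map` helper are all made up. The
point is that the DC/WAN gateway keeps a table keyed by the DC-internal
segment ID, and swapping 12-bit VLAN IDs for 24-bit VXLAN/NVGRE tenant
IDs changes only the key space, not the shape of the mapping.

```python
def make_boundary_map(pairs):
    """Per-gateway table: DC-internal segment ID -> WAN-side VPLS instance.

    The same structure serves a VLAN-based DC (12-bit keys) and an
    NVO3/VXLAN/NVGRE-based DC (24-bit keys); only the key space differs.
    """
    table = dict(pairs)

    def to_vpls(segment_id):
        if segment_id not in table:
            raise KeyError(f"no VPLS instance provisioned for segment {segment_id}")
        return table[segment_id]

    return to_vpls

# VLAN-based DC: internal key is a 12-bit VLAN ID.
vlan_gw = make_boundary_map({100: "vpls-tenant-a", 200: "vpls-tenant-b"})

# VXLAN-based DC: internal key is a 24-bit VNI; the mapping is unchanged.
vni_gw = make_boundary_map({5001: "vpls-tenant-a", 5002: "vpls-tenant-b"})

print(vlan_gw(100))   # the VPLS instance carrying tenant A across the WAN
print(vni_gw(5002))   # same lookup, larger key space
```

Either way the gateway does one table lookup per frame's tenant context,
which is why the text argues the "mapping problem" gets no harder.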

> If intra-dc is directory based, then it doesn't help inter-dc to be
> control plane based. If intra-dc uses VLAN for segmentation then it
> doesn't help inter-dc to use GRE. If the unicast is multi-path then it
> doesn't help if multicast and broadcast are static trees. If one DC is
> L2 encap then it doesn't help if another DC is L3 encap. Note that this
> is already discounting the fact that this has to interoperate with
> classic L2/L3 networks. That may mean a L3 OSPF routing table entry into
> a L2 IS-IS forwarding entry. Proliferation of divergent approaches just
> worsens interoperability.

Sounds to me like you think we can somehow magically wave away
diversity that already exists today, and replace it with one overall
standard. Is that what you think this group needs to do?

Thomas


From adalela@cisco.com  Sat Jan  7 05:42:47 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id C612921F854B for <dc@ietfa.amsl.com>; Sat,  7 Jan 2012 05:42:47 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.149
X-Spam-Level: 
X-Spam-Status: No, score=-2.149 tagged_above=-999 required=5 tests=[AWL=-0.150, BAYES_00=-2.599, J_CHICKENPOX_13=0.6]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id MJtbW48KTpOY for <dc@ietfa.amsl.com>; Sat,  7 Jan 2012 05:42:47 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id 2E3F221F8543 for <dc@ietf.org>; Sat,  7 Jan 2012 05:42:45 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=6060; q=dns/txt; s=iport; t=1325943766; x=1327153366; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=dSRWECPvPHZq+OJrhzHziNz+yYBvzFEZFoDYoOFwvPY=; b=iRI2PWvln+akKy6z4JABXxNZ3In752pn2TqgsNFXXBViZoVrRCTExfg9 cePiTu9VorCzeIWUFcJ/jOWJFzzPWgG2l4HAaxd8719OXueUad7A8JnXv XOz2necoMxcGfv3WdJBi1NQo0eMqqvCm2Hn+gRn7l87J/bx3Nrorn923d Q=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: Ap8EAA1LCE9Io8UY/2dsb2JhbAA5Cq1IgXIBAQEDARIBHQo/BQcEAgEIEQEDAQELBhcBBgFFAwYIAQEECwgIEweHWJdxAZ4CiFaCWGMEiDefEw
X-IronPort-AV: E=Sophos;i="4.71,472,1320624000";  d="scan'208";a="2959338"
Received: from vla196-nat.cisco.com (HELO bgl-core-4.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 07 Jan 2012 13:42:44 +0000
Received: from xbh-bgl-412.cisco.com (xbh-bgl-412.cisco.com [72.163.129.202]) by bgl-core-4.cisco.com (8.14.3/8.14.3) with ESMTP id q07Dgi80021692; Sat, 7 Jan 2012 13:42:44 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-412.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Sat, 7 Jan 2012 19:12:44 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Sat, 7 Jan 2012 19:12:41 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B25F46@XMB-BGL-416.cisco.com>
In-Reply-To: <201201071334.q07DYPIb004819@cichlid.raleigh.ibm.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] DC Work Plan
Thread-Index: AczNQR4VXVCpPl8dQcmPs67Y46mX6wAANbXQ
References: <13205C286662DE4387D9AF3AC30EF456D74EEEEF99@EMBX01-WF.jnpr.net><4F07488E.2070103@raszuk.net><13205C286662DE4387D9AF3AC30EF456D74EEEF153@EMBX01-WF.jnpr.net><201201062201.q06M0xkO008099@cichlid.raleigh.ibm.com><4F07797A.3090907@cisco.com> <201201062305.q06N5i8o008460@cichlid.raleigh.ibm.com> <618BE8B40039924EB9AED233D4A09C5102B25F2F@XMB-BGL-416.cisco.com> <201201071334.q07DYPIb004819@cichlid.raleigh.ibm.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Thomas Narten" <narten@us.ibm.com>
X-OriginalArrivalTime: 07 Jan 2012 13:42:44.0631 (UTC) FILETIME=[3C12EA70:01CCCD42]
Cc: Ronald Bonica <rbonica@juniper.net>, dc@ietf.org, robert@raszuk.net, "Stewart Bryant \(stbryant\)" <stbryant@cisco.com>
Subject: Re: [dc] DC Work Plan
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 07 Jan 2012 13:42:47 -0000

Thomas,

>> You have to have an overall architecture/vision of how things fit
together, and then individual components that fit into the overall
architecture.

Yes, that's what should happen. Can you say where that
architecture/vision is described, vs. the individual components? Once
that architecture has been agreed, then people can go into "smaller"
problems and their solutions.

Thanks, Ashish


-----Original Message-----
From: Thomas Narten [mailto:narten@us.ibm.com]
Sent: Saturday, January 07, 2012 7:04 PM
To: Ashish Dalela (adalela)
Cc: Stewart Bryant (stbryant); Ronald Bonica; robert@raszuk.net;
dc@ietf.org
Subject: Re: [dc] DC Work Plan

Hi Ashish.

> >> I think it's much more helpful (and IETF tradition!) to try and
> identify individual problems and where possible, solve problems
> individually (i.e., divide and conquer).

> It is indeed IETF tradition to solve individual problems individually.
> What makes it different in this case is that a customer spans multiple
> datacenters. That means VM mobility, segmentation, broadcast, multicast,
> unicast, policy (e.g. bandwidth) are spanning across both intra and
> inter-DC.

Sorry, but I do not believe that things are different in this
case. The IETF always grapples with the tension between problem areas
whose impacts potentially span everything and scoping a problem down
into one or more manageable units, where it can be worked on in a more
standalone fashion rather than trying to boil the ocean.

Indeed, any large project works that way. You have to have an overall
architecture/vision of how things fit together, and then individual
components that fit into the overall architecture.

I have not seen evidence that the current "DC" problem area is any
different.

Indeed, I believe that the reason why the various "problem statements"
have gotten so little traction is because they come across as too big,
too high-level and aren't scoped into small enough pieces (or
sub-pieces) so that folk can look at a particular part (or sub-part)
and say "yes, I understand what is being proposed, I agree that this
is a problem that would be useful to solve and I think that we can
solve it".

> If we follow the IETF tradition here, it will mean that mobility,
> segmentation, broadcast, multicast, policy will potentially all be done
> in two sets of ways across and between datacenters, because these can
> indeed be stated as separate problems.

They already are being done in different ways, to some extent, and
this is for reasons well outside of the lack of "one unifying
standard". E.g., some DCs are L2 based. Some are more L3 based. That
is not going to change. There are valid (local) reasons for choosing a
particular approach in a given DC. Likewise, there are multiple ways
of gluing DCs together across the WAN (L2VPN, L3VPN, IPsec,
etc.). That is not going to change anytime soon either.

> Now, you have a huge issue in mapping one type of scheme into
> another, given the many possible ways to do each of mobility,
> segmentation, broadcast, multicast, policy, themselves.

The above is a great example of how the "problem space" we are talking
about sounds (to me) like boiling the ocean.

Do you believe that what this group needs to do is come up with one
set of protocols/standards for doing all of the above that can be used
everywhere, so that one standard is used everywhere across the
Internet?

> Added to the above is the fact that there are divergent approaches - L2
> vs. L3, host-based vs. network-based, overlay vs. flat. Just to solve
> address resolution, there is more than one approach - e.g. directory
> based or control plane based.

Are you saying you think this group can and should come up with a
one-size-fits all approach that applies everywhere, including data
centers and across the WAN?

> The total number of permutations and combinations arising from these
> many alternatives will eventually mean that things don't work
> together.

We are dreaming if we think we will not have permutations and
variations. They exist already today for local business and other
reasons and will not magically disappear anytime soon. And having lots
of permutations is not necessarily a problem if the interfaces between
components (e.g., like between the DC and the WAN) are clean and well
defined, so that each "permutation" fits in with the existing model,
so that interfacing between the two systems is straightforward and
routine.

As one example, today many DCs use VLANs for multi-tenancy internally,
but use VPLS when going across the WAN. At the boundary between VLANs
and L2VPN, you have to map VLAN identifiers into the VPLS
equivalent. This is the well-defined interface between the two
systems.

If instead of VLANs, the DC used something like NVO3/VXLAN/NVGRE,
you'd have to do the same: map from the NVO3/VXLAN/NVGRE tenant
identifier into the corresponding L2VPN identifier. Have we added some
new complexity here? Not really. We've just added a new mapping, that
is conceptually the same as the one already being used today with
VLANs. We have not made the "mapping problem" any more complex than it
already was.

> If intra-dc is directory based, then it doesn't help inter-dc to be
> control plane based. If intra-dc uses VLAN for segmentation then it
> doesn't help inter-dc to use GRE. If the unicast is multi-path then it
> doesn't help if multicast and broadcast are static trees. If one DC is
> L2 encap then it doesn't help if another DC is L3 encap. Note that this
> is already discounting the fact that this has to interoperate with
> classic L2/L3 networks. That may mean a L3 OSPF routing table entry into
> a L2 IS-IS forwarding entry. Proliferation of divergent approaches just
> worsens interoperability.

Sounds to me like you think we can somehow magically wave away
diversity that already exists today, and replace it with one overall
standard. Is that what you think this group needs to do?

Thomas


From adalela@cisco.com  Sat Jan  7 05:44:49 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 31B7E21F84D5 for <dc@ietfa.amsl.com>; Sat,  7 Jan 2012 05:44:49 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -0.751
X-Spam-Level: 
X-Spam-Status: No, score=-0.751 tagged_above=-999 required=5 tests=[AWL=-1.541, BAYES_00=-2.599, CN_BODY_35=0.339, J_CHICKENPOX_13=0.6, MIME_CHARSET_FARAWAY=2.45]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id n8dBIZ29HWdS for <dc@ietfa.amsl.com>; Sat,  7 Jan 2012 05:44:48 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id 6B6F121F84AF for <dc@ietf.org>; Sat,  7 Jan 2012 05:44:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=7122; q=dns/txt; s=iport; t=1325943887; x=1327153487; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=91X19cZKdF4oEbZzl+14aaqiqRBdXrSvtezXcpiNsWw=; b=VYfpe6GUOBVjKp3MuBH4J4IcDETXyIeMmOTj/Zi/94AykJ5icNSeXCqf qYuzOj65oWu/AR3M9c5ydwVzpMu5xCwEaORjRoTzsFmsBVk4C8n4LMtfJ 8v8JY6iUMoE35HSQYnUMkUFwrgaeMyp6OXNCubBpxwiXlupa2UBYypjzH A=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AqAEAA5MCE9Io8UY/2dsb2JhbAA5CoUPqDmBcgEBAQMBAQEBDwEdPgsFBwQCAQYCEQQBAQUGBhcBBAIBJh8JCAIEAQoICBECB4dYCJdqAYxaCJEcBIErhyuCITdjBIg3nxM
X-IronPort-AV: E=Sophos;i="4.71,472,1320624000";  d="scan'208";a="2959353"
Received: from vla196-nat.cisco.com (HELO bgl-core-4.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 07 Jan 2012 13:44:41 +0000
Received: from xbh-bgl-411.cisco.com (xbh-bgl-411.cisco.com [72.163.129.201]) by bgl-core-4.cisco.com (8.14.3/8.14.3) with ESMTP id q07DifJV021889; Sat, 7 Jan 2012 13:44:41 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-411.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Sat, 7 Jan 2012 19:14:41 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="gb2312"
Content-Transfer-Encoding: quoted-printable
Date: Sat, 7 Jan 2012 19:14:39 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B25F4A@XMB-BGL-416.cisco.com>
In-Reply-To: <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE765D9E@szxeml525-mbs.china.huawei.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] DC Work Plan
Thread-Index: AczMpBVlzok2urS5RMGeXcLx/H8AYf//gRwAgAAN8wCAACACgIAADF0AgAAFuYCAAG3GgP//XTcg//6tx5A=
References: <13205C286662DE4387D9AF3AC30EF456D74EEEEF99@EMBX01-WF.jnpr.net> <4F07488E.2070103@raszuk.net> <13205C286662DE4387D9AF3AC30EF456D74EEEF153@EMBX01-WF.jnpr.net> <201201062201.q06M0xkO008099@cichlid.raleigh.ibm.com> <4F07797A.3090907@cisco.com> <201201062305.q06N5i8o008460@cichlid.raleigh.ibm.com> <618BE8B40039924EB9AED233D4A09C5102B25F2F@XMB-BGL-416.cisco.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE765D9E@szxeml525-mbs.china.huawei.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Xuxiaohu" <xuxiaohu@huawei.com>, "Thomas Narten" <narten@us.ibm.com>, "Stewart Bryant (stbryant)" <stbryant@cisco.com>
X-OriginalArrivalTime: 07 Jan 2012 13:44:41.0332 (UTC) FILETIME=[81A21340:01CCCD42]
Cc: Ronald Bonica <rbonica@juniper.net>, robert@raszuk.net, dc@ietf.org
Subject: Re: [dc] DC Work Plan
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 07 Jan 2012 13:44:49 -0000

Hi Xuxiaohu,

Obviously, DCI will have a different forwarding plane. But DCI could use
the same control plane as DCN.

Thanks, Ashish


-----Original Message-----
From: Xuxiaohu [mailto:xuxiaohu@huawei.com]
Sent: Saturday, January 07, 2012 1:22 PM
To: Ashish Dalela (adalela); Thomas Narten; Stewart Bryant (stbryant)
Cc: Ronald Bonica; dc@ietf.org; robert@raszuk.net
Subject: re: [dc] DC Work Plan


> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Ashish
> Dalela (adalela)
> Sent: January 7, 2012 13:39
> To: Thomas Narten; Stewart Bryant (stbryant)
> Cc: Ronald Bonica; dc@ietf.org; robert@raszuk.net
> Subject: Re: [dc] DC Work Plan
> 
> 
> Thomas,
> 
> >> I think it's much more helpful (and IETF tradition!) to try and
> identify individual problems and where possible, solve problems
> individually (i.e., divide and conquer).
> 
> It is indeed IETF tradition to solve individual problems individually.
> What makes it different in this case is that a customer spans multiple
> datacenters. That means VM mobility, segmentation, broadcast, multicast,
> unicast, policy (e.g. bandwidth) are spanning across both intra and
> inter-DC.

Hi Ashish,

I agree that solving the problems individually is not an efficient
approach. Take the spanning-tree protocol as an example: it has several
problems, including slow convergence, sub-optimal bandwidth utilization
due to the lack of ECMP and shortest-path forwarding, the 4K VLAN
limit, and so on. It would not be reasonable to set up individual WGs
to solve each of these problems separately. In fact, L2VPN and L3VPN
technologies could already solve most of the problems mentioned above.
It would be efficient for us to evaluate which of these problems the
existing VPN technologies already address and which they do not. In
this way, we can quickly identify what the IETF needs to do.
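
As a quick back-of-the-envelope illustration of the 4K VLAN limit
mentioned above (an editorial sketch, not from the thread itself): the
802.1Q VLAN ID is a 12-bit field, which caps a flat L2 domain at 4094
usable segments, while the 24-bit virtual-network identifiers carried
by some overlay encapsulations allow roughly 16 million.

```python
# Illustrative arithmetic only; the constants reflect field widths,
# not any specific deployment discussed in this thread.
VLAN_ID_BITS = 12            # 802.1Q VLAN ID field
usable_vlans = 2**VLAN_ID_BITS - 2   # IDs 0 and 4095 are reserved

VNI_BITS = 24                # 24-bit overlay virtual-network identifier
overlay_segments = 2**VNI_BITS

print(f"802.1Q usable VLANs: {usable_vlans}")      # 4094
print(f"24-bit overlay IDs:  {overlay_segments}")  # 16777216
```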

However, I don't agree that the reason for considering all of the
problems together is the one you described. Rather, I believe there are
some obvious differences between DCI requirements and DCN requirements.
Take the L2VPN solution (e.g., VPLS) as an example: its associated
features, such as the full mesh of PWs, ingress replication, and
per-peer configuration, are not big concerns when considering VPLS as a
DCI solution, since there would not be too many DC sites to connect
together. However, when you consider deploying VPLS within a data
center, especially performing PE functions at hundreds or even
thousands of ToR switches, the above features become serious problems.
In addition, path optimization for both VPN access and Internet access
is a very important requirement that is specific to DCI rather than to
DCN, IMHO. In short, I suggest we not completely mix the requirements
for DCN with those for DCI, although there is some overlap between
them.
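
To make the full-mesh scaling point concrete (an editorial sketch; the
PE counts below are hypothetical, not figures from the thread): a full
mesh among n PEs needs n(n-1)/2 pseudowires, which is tolerable for a
handful of DC sites but explodes when every ToR switch acts as a PE.

```python
# Full-mesh pseudowire count among n PE devices.
# The sample values of n are illustrative only.
def full_mesh_pws(n: int) -> int:
    """Number of point-to-point pseudowires in a full mesh of n PEs."""
    return n * (n - 1) // 2

for n in (5, 50, 1000):  # a few DC sites vs. hundreds/thousands of ToR PEs
    print(f"{n:5d} PEs -> {full_mesh_pws(n):,} pseudowires")
```

At 5 sites the mesh is trivial (10 PWs); at 1000 ToR PEs it is nearly
half a million, which is the "serious problem" noted above.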

Best regards,
Xiaohu

> If we follow the IETF tradition here, it will mean that mobility,
> segmentation, broadcast, multicast, policy will potentially all be done
> in two sets of ways across and between datacenters, because these can
> indeed be stated as separate problems. Now, you have a huge issue in
> mapping one type of scheme into another, given the many possible ways to
> do each of mobility, segmentation, broadcast, multicast, policy,
> themselves.
>
> Added to the above is the fact that there are divergent approaches - L2
> vs. L3, host-based vs. network-based, overlay vs. flat. Just to solve
> address resolution, there is more than one approach - e.g. directory
> based or control plane based.
>
> The total number of permutations and combinations arising from these
> many alternatives will eventually mean that things don't work together.
>
> If intra-dc is directory based, then it doesn't help inter-dc to be
> control plane based. If intra-dc uses VLAN for segmentation then it
> doesn't help inter-dc to use GRE. If the unicast is multi-path then it
> doesn't help if multicast and broadcast are static trees. If one DC is
> L2 encap then it doesn't help if another DC is L3 encap. Note that this
> is already discounting the fact that this has to interoperate with
> classic L2/L3 networks. That may mean a L3 OSPF routing table entry into
> a L2 IS-IS forwarding entry. Proliferation of divergent approaches just
> worsens interoperability.
>
> I understand your concern, and that it is different than IETF tradition.
> But we need to choose between tradition and interoperability.
>
> Thanks, Ashish
>
>
> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> Thomas Narten
> Sent: Saturday, January 07, 2012 4:36 AM
> To: Stewart Bryant (stbryant)
> Cc: Ronald Bonica; robert@raszuk.net; dc@ietf.org
> Subject: Re: [dc] DC Work Plan
>
> Hi Stewart.
>
> > I think that SDN has wider applicability than DC and thus has a
> > life (or death) of its own.
>
> It's not just that it has wider applicability, it's that that effort
> seems somewhat self-contained and I can see how one might carve out an
> effort in this space.
>
> > NVO3 is a candidate solution to the DC problem,
>
> This is probably just quick typing, but I think it is worth saying
> that we are not helping ourselves by thinking there is *one* DC
> problem.
>
> There are a number of possible problems. Some relate to each other,
> some more so, some less so. But I think it is not helpful to try and
> view this area as having *one* problem that needs sorting out.
>
> I think it's much more helpful (and IETF tradition!) to try and
> identify individual problems and where possible, solve problems
> individually (i.e., divide and conquer). Of course, there can be
> interdependencies between problem and solution spaces, but we are
> likely going to flail if we try to view this as one big problem
> needing one overall solution approach. Or that we can't work on sub
> problems unless we understand the entire problem space.
>
> > but from the discussion on the list I am yet to be convinced that we
> > have either the right problem statement to endorse it as the
> > approach for L2, let alone to determine whether we need a L2
> > solution, a L3 solution or a mixed solution.
>
> NVO3 is aimed at a subset of the DC "problem area". The bullet point
> summary is that it's a different way of providing multi-tenancy in the
> DC. It is not the *only* way. But I think the Taipei session showed
> there was significant support for this approach.
>
> Thomas
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc

From adalela@cisco.com  Sat Jan  7 06:06:35 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 42FBE21F854C for <dc@ietfa.amsl.com>; Sat,  7 Jan 2012 06:06:35 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.409
X-Spam-Level: 
X-Spam-Status: No, score=-2.409 tagged_above=-999 required=5 tests=[AWL=0.189,  BAYES_00=-2.599, HTML_MESSAGE=0.001]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id qWipBcHOi9OX for <dc@ietfa.amsl.com>; Sat,  7 Jan 2012 06:06:31 -0800 (PST)
Received: from bgl-iport-2.cisco.com (bgl-iport-2.cisco.com [72.163.197.26]) by ietfa.amsl.com (Postfix) with ESMTP id 935AA21F8535 for <dc@ietf.org>; Sat,  7 Jan 2012 06:06:29 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=23438; q=dns/txt; s=iport; t=1325945189; x=1327154789; h=mime-version:subject:date:message-id:in-reply-to: references:from:to:cc; bh=l5D6S9l+oNyy1Z5ouebOF27gxRyBSUNWWPlT4MNyL7k=; b=mVMJCSlZ/GkwDoLBYrgq0+kP4cI5UlzVciU/AGOMVowrbdowJx51Qg6S WfXOtkfJGAknbnpexrXGGR6EK+5ppF7iaA9IOJd870W7OojWt/wdp0Iaz 27mz60LSgd5y6CYk+uwODh9xsx5gceBerdp/yV8jQddY+/mbSe+U8qDf4 w=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AsIEACxRCE9Io8UY/2dsb2JhbABDgk6hZwGJEoFyAQEBBBIBCREDPA0QAgEIDgMEAQELBhAHAQYBICUJCAEBBAEKCAgTB4dgl20BngOLLmMEiDeXRodN
X-IronPort-AV: E=Sophos;i="4.71,472,1320624000"; d="scan'208,217";a="2953989"
Received: from vla196-nat.cisco.com (HELO bgl-core-4.cisco.com) ([72.163.197.24]) by bgl-iport-2.cisco.com with ESMTP; 07 Jan 2012 14:06:20 +0000
Received: from xbh-bgl-412.cisco.com (xbh-bgl-412.cisco.com [72.163.129.202]) by bgl-core-4.cisco.com (8.14.3/8.14.3) with ESMTP id q07E6KZ6024227; Sat, 7 Jan 2012 14:06:20 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-412.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Sat, 7 Jan 2012 19:36:19 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: multipart/alternative; boundary="----_=_NextPart_001_01CCCD45.8777D2BB"
Date: Sat, 7 Jan 2012 19:36:17 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B25F58@XMB-BGL-416.cisco.com>
In-Reply-To: <F9947EE081154C47BA2D281D8F3FAE8E0371A21D4F@EX07-SJC-MBX1.force10networks.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] [armd] IP over IP solution for data center interconnect
Thread-Index: AczMJHkIMDhYiFQ0TFiF0gJ5cDLwJwABfVnwACB2rn4AJiFMUA==
References: <AF48CEB4-18A6-45CE-891B-ACFE599C8FB4@kumari.net><27D5DF12-DA16-42C4-A33B-84EBFFFC3A45@gmail.com>, <618BE8B40039924EB9AED233D4A09C5102B25D23@XMB-BGL-416.cisco.com> <F9947EE081154C47BA2D281D8F3FAE8E0371A21D4F@EX07-SJC-MBX1.force10networks.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Brad Hedlund" <bhedlund@force10networks.com>, "Aldrin Isaac" <aldrin.isaac@gmail.com>
X-OriginalArrivalTime: 07 Jan 2012 14:06:19.0988 (UTC) FILETIME=[87B14140:01CCCD45]
Cc: Pedro Marques <pedro.r.marques@gmail.com>, david.black@emc.com, dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center interconnect
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 07 Jan 2012 14:06:35 -0000

Brad,


I'm not saying it makes sense, but here are some use-cases. One of the
things that SDN proposed as part of network virtualization was the
ability to run multiple protocols in the same network. Their goals were
experimental, but there could be other goals as well. E.g. if a cloud
customer is running OSPF in their private network but the cloud
provider is running IS-IS, an overlay could (hypothetically) want to
run OSPF in its "segment". I'm not saying it makes a lot of sense, but
that is an example of when the "overlay" VM-style model of the network
makes sense for virtualization. Other overlay models IMO are trying to
fix basic issues with IP/Ethernet - namely lack of location awareness -
which can be solved in other ways without an overlay. So, if the only
reason to run an overlay is to solve location independence, then that's
not justification enough. But if there are other reasons, those make
sense to me. I haven't heard any other reasons.

Thanks, Ashish


From: Brad Hedlund [mailto:bhedlund@force10networks.com]
Sent: Saturday, January 07, 2012 1:29 AM
To: Ashish Dalela (adalela); Aldrin Isaac
Cc: Pedro Marques; david.black@emc.com; dc@ietf.org
Subject: RE: [dc] [armd] IP over IP solution for data center
interconnect


	"So, if you have one tenant running OSPF another one running
IS-IS and yet another running BGP, and they want to keep playing with
their network configuration, it makes sense to run these in the overlay
or the VM mode."

This didn't make much sense to me. Why would a customer of an overlay
model be managing or configuring a routing protocol?  The overlay
simplifies the customer's topology view to that of a single logical
segment.

Cheers,

Brad


________________________________

From: dc-bounces@ietf.org [dc-bounces@ietf.org] On Behalf Of Ashish
Dalela (adalela) [adalela@cisco.com]
Sent: Thursday, January 05, 2012 11:03 PM
To: Aldrin Isaac
Cc: Pedro Marques; david.black@emc.com; dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center
interconnect

Aldrin,


I like the way you say - "the only thing the network needs to know" -
:-) The same could be said of the hypervisor as well :-)

The problem is that once you start pushing the intelligence into the
server stack, you will have to keep pushing more and more - e.g.
firewalls, multicast, broadcast, packet inspection, flow mgmt, etc. You
will find that you are re-inventing on the overlay everything that
exists on the underlay. The challenges are stacked up against the
overlay, not the underlay.

You will also find that you need to communicate between the overlay and
the underlay to get the desired bandwidth, QoS, flow mgmt,
multipathing, and tree optimization, and that is never going to be
easy. You will also find that hardware-accelerated networks perform
better, deliver the requisite high availability, and consume less
energy, compared to doing the same thing in the hypervisor.

Having said that, I recognize there are two models of virtualization
that we know of. The "overlay" model is like a network hypervisor in
which individual customers are like VMs. The "multiplexed" model is
like a multi-user OS. Both the multi-user OS and the multi-VM
hypervisor provide equally good isolation. But they have different
use-cases.

The main use-case for the VM hypervisor model is to multiplex multiple
different OS/tools/application "environments" onto the same HW, giving
each VM the ability to shut itself down, reboot, or do whatever it
wants. In the case of a network, that "environment" is a set of
protocols. So, if you have one tenant running OSPF, another running
IS-IS, and yet another running BGP, and they want to keep playing with
their network configuration, it makes sense to run these in the overlay
or VM mode.


If everyone has the same environment (i.e. runs the same protocols and
expects common controls), it makes more sense to run them in the
multi-user (multiplexed) mode rather than the multi-VM (overlay) mode.
The multi-VM model delegates administration to the VM owner. The
multi-user model owns the administration while letting a "tenant" use
the network. This is a conscious choice - is the cloud going to open up
the configuration of the network the way it opens up the administration
of a VM? From what I know, the answer is "no".

These architectural models exist outside the networking domain, and I
would reach into that other domain to borrow the intuitions, use-cases
and challenges. Architecturally, you will find the same challenges in
managing the overlay model as exist in the multi-VM model. If there are
no benefits to be gained from that, you are better off in the
multiplexed mode.

But then again - beauty lies in the eye of the beholder.


Thanks, Ashish


From: Aldrin Isaac [mailto:aldrin.isaac@gmail.com]
Sent: Friday, January 06, 2012 9:07 AM
Cc: Ashish Dalela (adalela); Pedro Marques; david.black@emc.com;
dc@ietf.org
Subject: Re: [dc] [armd] IP over IP solution for data center
interconnect

The only thing that the network needs to know is the routes to the
hypervisors / physical machines -- this is a solved problem.
The VM addresses and routes are only visible to the [gateways,
hypervisors with VMs in that overlay, other VMs in the same overlay,
mapping server].

For a really old overview:
http://tools.ietf.org/html/draft-wkumari-dcops-l3-vmmobility-00
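
The model described above can be sketched as two lookup tables (an
editorial toy illustration with made-up names and addresses; real
systems distribute this state via a mapping server or control
protocol): the underlay routes only hypervisor addresses, while a
per-tenant overlay table maps each VM address to the hypervisor
currently hosting it.

```python
# Toy sketch of the overlay mapping model: made-up addresses throughout.

# Underlay: the physical network only knows routes to hypervisors.
underlay_routes = {"hv1": "10.0.1.1", "hv2": "10.0.2.1"}

# Overlay: per-tenant table mapping VM address -> hosting hypervisor.
# Only gateways, participating hypervisors, and the mapping server see this.
overlay_map = {
    "tenant-a": {"192.168.0.10": "hv1", "192.168.0.11": "hv2"},
}

def locate(tenant: str, vm_ip: str) -> str:
    """Return the underlay address to tunnel to for a given tenant VM."""
    hv = overlay_map[tenant][vm_ip]
    return underlay_routes[hv]

# VM mobility: only the mapping entry changes; underlay routes are untouched.
overlay_map["tenant-a"]["192.168.0.10"] = "hv2"
print(locate("tenant-a", "192.168.0.10"))  # prints 10.0.2.1
```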


Is the issue that needs to be resolved for overlays only regarding the
number of ARPs that need to be managed by the gateway?  Could this
issue not be resolved operationally by, say, having more gateways?


What other issues exist and need to be resolved for overlays, besides
deciding on a tunneling encapsulation?  Is the IETF expected to
standardize control protocols for overlays?


There is clearly a need for server-based virtual networks as well as a
need for scalable network-based Ethernet virtual networks for DC.
Shouldn't these be separate conversations?



From melinda.shore@gmail.com  Sat Jan  7 11:43:33 2012
Return-Path: <melinda.shore@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id EC11C21F84EA for <dc@ietfa.amsl.com>; Sat,  7 Jan 2012 11:43:33 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.599
X-Spam-Level: 
X-Spam-Status: No, score=-3.599 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 6glc7+q9Yla4 for <dc@ietfa.amsl.com>; Sat,  7 Jan 2012 11:43:33 -0800 (PST)
Received: from mail-iy0-f172.google.com (mail-iy0-f172.google.com [209.85.210.172]) by ietfa.amsl.com (Postfix) with ESMTP id 4923721F84E7 for <dc@ietf.org>; Sat,  7 Jan 2012 11:43:33 -0800 (PST)
Received: by iabz21 with SMTP id z21so4932189iab.31 for <dc@ietf.org>; Sat, 07 Jan 2012 11:43:33 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=message-id:date:from:user-agent:mime-version:to:subject:references :in-reply-to:content-type:content-transfer-encoding; bh=tl4p9qzYNECdlfxq8HAo5pGlcuDcJOJardrKxViS21A=; b=g/Z+T7+1g0EnBrA69qE7iu3L0waT5DRzSWfQeVpc7yNISTyhi0X8yb589QAY+6WpW6 NVKZnH4dORx0I2Tv1KOi3d3EsnZ3RSnEiyOEzvjM4mtK8BSylmGpZPaAjOrUzpQzEmlc BMbO4phFNYmJ7KGC7n3m+o4rwdSETQr2oDeJw=
Received: by 10.50.188.132 with SMTP id ga4mr3783519igc.4.1325965412979; Sat, 07 Jan 2012 11:43:32 -0800 (PST)
Received: from polypro.local (66-230-87-211-rb1.fai.dsl.dynamic.acsalaska.net. [66.230.87.211]) by mx.google.com with ESMTPS id yg2sm5291922igb.1.2012.01.07.11.43.30 (version=TLSv1/SSLv3 cipher=OTHER); Sat, 07 Jan 2012 11:43:31 -0800 (PST)
Message-ID: <4F08A061.5020308@gmail.com>
Date: Sat, 07 Jan 2012 10:43:29 -0900
From: Melinda Shore <melinda.shore@gmail.com>
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.25) Gecko/20111213 Lightning/1.0b2 Thunderbird/3.1.17
MIME-Version: 1.0
To: dc@ietf.org
References: <13205C286662DE4387D9AF3AC30EF456D74EEEEF99@EMBX01-WF.jnpr.net><4F07488E.2070103@raszuk.net><13205C286662DE4387D9AF3AC30EF456D74EEEF153@EMBX01-WF.jnpr.net><201201062201.q06M0xkO008099@cichlid.raleigh.ibm.com><4F07797A.3090907@cisco.com>	<201201062305.q06N5i8o008460@cichlid.raleigh.ibm.com>	<618BE8B40039924EB9AED233D4A09C5102B25F2F@XMB-BGL-416.cisco.com>	<201201071334.q07DYPIb004819@cichlid.raleigh.ibm.com> <618BE8B40039924EB9AED233D4A09C5102B25F46@XMB-BGL-416.cisco.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B25F46@XMB-BGL-416.cisco.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Subject: Re: [dc] DC Work Plan
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 07 Jan 2012 19:43:34 -0000

On 1/7/12 4:42 AM, Ashish Dalela (adalela) wrote:
> Yes, that's what should happen. Can you say where that
> architecture/vision is described, vs. the individual components? Once
> that architecture has been agreed, then people can go into "smaller"
> problems and their solutions.

I think that this has really not been a very common approach in
the IETF in the past, although you do see it a lot in other standards
bodies.  My personal comfort level with the way the "data center"/
cloud set of stuff has been approached is not that high because it
doesn't appear to have been driven by real-world problems but rather
by a vague sense that the IETF ought to be doing something in this
problem space (and "problem space" is overly vague, as well).
It appears reasonably clear to me that there's at least some small
set of specific problems that fall into this space and that can
be framed in a way that the IETF knows how to tackle.  Regardless
of the value of a very, very high-level architectural approach I
don't think that it could be done very well in this body and I would
be reluctant to try.

I also expect that if we told operators what we think their data
centers should look like we'd be putting ourselves on the path to
irrelevance.  They're going to do what they're going to do - I
think we have to be responsive to their needs.  They care about
service differentiation and competitive advantages, and if our
work can't support that (because we've got a high-level architecture
that must be conformed to for our individual technologies to be
useful) we've got a problem.

So, a good, short list of clearly-defined problems seems like a good
place to start, to me.  See how they relate to each other, if they
do, and how they relate to other work in the IETF.

Melinda

From vumip1@gmail.com  Sat Jan  7 13:52:33 2012
Return-Path: <vumip1@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 7EA1E21F84FE for <dc@ietfa.amsl.com>; Sat,  7 Jan 2012 13:52:33 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.139
X-Spam-Level: 
X-Spam-Status: No, score=-3.139 tagged_above=-999 required=5 tests=[AWL=-0.141, BAYES_00=-2.599, HTML_MESSAGE=0.001, J_CHICKENPOX_13=0.6, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id Pdm8sEakPmio for <dc@ietfa.amsl.com>; Sat,  7 Jan 2012 13:52:32 -0800 (PST)
Received: from mail-iy0-f172.google.com (mail-iy0-f172.google.com [209.85.210.172]) by ietfa.amsl.com (Postfix) with ESMTP id D978F21F84F5 for <dc@ietf.org>; Sat,  7 Jan 2012 13:52:31 -0800 (PST)
Received: by iabz21 with SMTP id z21so5051076iab.31 for <dc@ietf.org>; Sat, 07 Jan 2012 13:52:30 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=rLuGyW24ULnrlhWkc0x3qWyFMLrZVkQklqJAN2LR0QY=; b=bjmekr1XP1tNY3UHZ5lf2E8JOi8PD+LkSs9edL5ESfAFAJW8JXyDCZCjRx/uq13XyZ bQ0nfgqDU1rMd+jzrbqybhuckr5cMQ+P7TqYAkk8ybWy35wrJBnoiMMhLEzVWQc8Esjj /NH1xBUd3XG2mzAtmDtMFUttbMJkLprJlZGzg=
MIME-Version: 1.0
Received: by 10.50.183.166 with SMTP id en6mr12696331igc.7.1325973150277; Sat, 07 Jan 2012 13:52:30 -0800 (PST)
Received: by 10.50.77.197 with HTTP; Sat, 7 Jan 2012 13:52:30 -0800 (PST)
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B25F46@XMB-BGL-416.cisco.com>
References: <13205C286662DE4387D9AF3AC30EF456D74EEEEF99@EMBX01-WF.jnpr.net> <4F07488E.2070103@raszuk.net> <13205C286662DE4387D9AF3AC30EF456D74EEEF153@EMBX01-WF.jnpr.net> <201201062201.q06M0xkO008099@cichlid.raleigh.ibm.com> <4F07797A.3090907@cisco.com> <201201062305.q06N5i8o008460@cichlid.raleigh.ibm.com> <618BE8B40039924EB9AED233D4A09C5102B25F2F@XMB-BGL-416.cisco.com> <201201071334.q07DYPIb004819@cichlid.raleigh.ibm.com> <618BE8B40039924EB9AED233D4A09C5102B25F46@XMB-BGL-416.cisco.com>
Date: Sat, 7 Jan 2012 16:52:30 -0500
Message-ID: <CANtnpwhivUvnFuG-vh8yMWvLXOP6c8D_NOGsPzjOoJBpDBgmaQ@mail.gmail.com>
From: Bhumip Khasnabish <vumip1@gmail.com>
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
Content-Type: multipart/alternative; boundary=14dae9340fe99a9fb304b5f72e83
Cc: Thomas Narten <narten@us.ibm.com>, Ronald Bonica <rbonica@juniper.net>, robert@raszuk.net, dc@ietf.org, "Stewart Bryant \(stbryant\)" <stbryant@cisco.com>
Subject: Re: [dc] DC Work Plan
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 07 Jan 2012 21:52:33 -0000

--14dae9340fe99a9fb304b5f72e83
Content-Type: text/plain; charset=ISO-8859-1

Hello Ashish,

We have done some work on a Cloud/Data Center reference architecture
(http://tools.ietf.org/id/draft-khasnabish-cloud-reference-framework-02.txt)
and had some discussion along the lines you are talking about during
IETF-81 (please see the slides at
http://www.ietf.org/proceedings/81/slides/opsawg-3.pdf ).

Thanks.

Best.

Bhumip



On Sat, Jan 7, 2012 at 8:42 AM, Ashish Dalela (adalela)
<adalela@cisco.com>wrote:

> Thomas,
>
> >> You have to have an overall architecture/vision of how things fit
> together, and then individual components that fit into the overall
> architecture.
>
> Yes, that's what should happen. Can you say where that
> architecture/vision is described, vs. the individual components? Once
> that architecture has been agreed, then people can go into "smaller"
> problems and their solutions.
>
> Thanks, Ashish
>
>
> -----Original Message-----
> From: Thomas Narten [mailto:narten@us.ibm.com]
> Sent: Saturday, January 07, 2012 7:04 PM
> To: Ashish Dalela (adalela)
> Cc: Stewart Bryant (stbryant); Ronald Bonica; robert@raszuk.net;
> dc@ietf.org
> Subject: Re: [dc] DC Work Plan
>
>  Hi Ashish.
>
> > >> I think it's much more helpful (and IETF tradition!) to try and
> > identify individual problems and where possible, solve problems
> > individually (i.e., divide and conquer).
>
> > It is indeed IETF tradition to solve individual problems individually.
> > What makes it different in this case is that a customer spans multiple
> > datacenters. That means VM mobility, segmentation, broadcast,
> multicast,
> > unicast, policy (e.g. bandwidth) are spanning across both intra and
> > inter-DC.
>
> Sorry, but I do not believe that things are different in this
> case. The IETF always grapples between problem areas whose impacts
> potentially span everything vs. scoping a problem down into one or
> more manageable units, where it can be worked on in a more standalone
> fashion vs. trying to boil the ocean.
>
> Indeed, any large project works that way. You have to have an overall
> architecture/vision of how things fit together, and then individual
> components that fit into the overall architecture.
>
> I have not seen evidence that the current "DC" problem area is any
> different.
>
> Indeed, I believe that the reason why the various "problem statements"
> have gotten so little traction is because they come across as too big,
> too high-level and aren't scoped into small enough pieces (or
> sub-pieces) so that folk can look at a particular part (or sub-part)
> and say "yes, I understand what is being proposed, I agree that this
> is a problem that would be useful to solve and I think that we can
> solve it".
>
> > If we follow the IETF tradition here, it will mean that mobility,
> > segmentation, broadcast, multicast, policy will potentially all be
> done
> > in two sets of ways across and between datacenters, because these can
> > indeed be stated as separate problems.
>
> They already are being done in different ways, to some extent, and
> this is for reasons well outside of the lack of "one unifying
> standard". E.g., some DCs are L2 based. Some are more L3 based. That
> is not going to change. There are valid (local) reasons for choosing a
> particular approach in a given DC. Likewise, there are multiple ways
> of gluing DCs together across the WAN (L2VPN, L3VPN, IPsec,
> etc.). That is not going to change anytime soon either.
>
> > Now, you have a huge issue in mapping one type of scheme into
> > another, given the many possible ways to do each of mobility,
> > segmentation, broadcast, multicast, policy, themselves.
>
> The above is a great example of how the "problem space" we are talking
> about sounds (to me) like boiling the ocean.
>
> Do you believe that what this group needs to do is come up with one
> set of protocols/standards for doing all of the above that can be used
> everywhere, so that one standard is used everywhere across the
> Internet?
>
> > Added to the above is the fact that there are divergent approaches -
> L2
> > vs. L3, host-based vs. network-based, overlay vs. flat. Just to solve
> > address resolution, there is more than one approach - e.g. directory
> > based or control plane based.
>
> Are you saying you think this group can and should come up with a
> one-size-fits-all approach that applies everywhere, including data
> centers and across the WAN?
>
> > The total number of permutations and combinations arising from these
> > many alternatives will eventually mean that things don't work
> > together.
>
> We are dreaming if we think we will not have permutations and
> variations. They exist already today for local business and other
> reasons and will not magically disappear anytime soon. And having lots
> of permutations is not necessarily a problem if the interfaces between
> components (e.g., like between the DC and the WAN) are clean and well
> defined, so that each "permutation" fits in with the existing model,
> so that interfacing between the two systems is straightforward and
> routine.
>
> As one example, today many DCs use VLANs for multi-tenancy internally,
> but use VPLS when going across the WAN. At the boundary between VLANs
> and L2VPN, you have to map VLAN identifiers into the VPLS
> equivalent. This is the well-defined interface between the two
> systems.
>
> If instead of VLANs, the DC used something like NVO3/VXLAN/NVGRE,
> you'd have to do the same: map from the NVO3/VXLAN/NVGRE tenant
> identifier into the corresponding L2VPN identifier. Have we added some
> new complexity here? Not really. We've just added a new mapping, that
> is conceptually the same as the one already being used today with
> VLANs. We have not made the "mapping problem" any more complex than it
> already was.
>
> > If intra-dc is directory based, then it doesn't help inter-dc to be
> > control plane based. If intra-dc uses VLAN for segmentation then it
> > doesn't help inter-dc to use GRE. If the unicast is multi-path then it
> > doesn't help if multicast and broadcast are static trees. If one DC is
> > L2 encap then it doesn't help if another DC is L3 encap. Note that
> this
> > is already discounting the fact that this has to interoperate with
> > classic L2/L3 networks. That may mean a L3 OSPF routing table entry
> into
> > a L2 IS-IS forwarding entry. Proliferation of divergent approaches
> just
> > worsens interoperability.
>
> Sounds to me like you think we can somehow magically wave away
> diversity that already exists today, and replace it with one overall
> standard. Is that what you think this group needs to do?
>
> Thomas
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc
>

--14dae9340fe99a9fb304b5f72e83--

From linda.dunbar@huawei.com  Sun Jan  8 20:32:50 2012
Return-Path: <linda.dunbar@huawei.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id AD76B21F8635 for <dc@ietfa.amsl.com>; Sun,  8 Jan 2012 20:32:50 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.579
X-Spam-Level: 
X-Spam-Status: No, score=-2.579 tagged_above=-999 required=5 tests=[AWL=0.020,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id Ms5UsAAYNfFU for <dc@ietfa.amsl.com>; Sun,  8 Jan 2012 20:32:50 -0800 (PST)
Received: from dfwrgout.huawei.com (dfwrgout.huawei.com [206.16.17.72]) by ietfa.amsl.com (Postfix) with ESMTP id E06E421F8632 for <dc@ietf.org>; Sun,  8 Jan 2012 20:32:49 -0800 (PST)
Received: from 172.18.9.243 (EHLO dfweml201-edg.china.huawei.com) ([172.18.9.243]) by dfwrg02-dlp.huawei.com (MOS 4.2.3-GA FastPath) with ESMTP id ACE88152; Sun, 08 Jan 2012 23:32:49 -0500 (EST)
Received: from DFWEML403-HUB.china.huawei.com (10.193.5.151) by dfweml201-edg.china.huawei.com (172.18.9.107) with Microsoft SMTP Server (TLS) id 14.1.323.3; Sun, 8 Jan 2012 20:30:16 -0800
Received: from DFWEML505-MBX.china.huawei.com ([10.124.31.100]) by dfweml403-hub.china.huawei.com ([10.193.5.151]) with mapi id 14.01.0323.003; Sun, 8 Jan 2012 20:30:09 -0800
From: Linda Dunbar <linda.dunbar@huawei.com>
To: "Ashish Dalela (adalela)" <adalela@cisco.com>, "dc@ietf.org" <dc@ietf.org>
Thread-Topic: [dc] DCREQ Section 4 Outline
Thread-Index: AczMwzqSmKDf1zT6SlWzZwPqD8IVPQALcU3gAGOskjA=
Date: Mon, 9 Jan 2012 04:30:09 +0000
Message-ID: <4A95BA014132FF49AE685FAB4B9F17F62A4E69E8@dfweml505-mbx>
References: <4A95BA014132FF49AE685FAB4B9F17F62A4E6640@dfweml505-mbx> <618BE8B40039924EB9AED233D4A09C5102B25F2B@XMB-BGL-416.cisco.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B25F2B@XMB-BGL-416.cisco.com>
Accept-Language: en-US, zh-CN
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.47.132.34]
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-CFilter-Loop: Reflected
Cc: Ronald Bonica <rbonica@juniper.net>
Subject: Re: [dc] DCREQ Section 4 Outline
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 09 Jan 2012 04:32:50 -0000

Ashish,=20

Answers to your questions/comments are inserted below:=20

Linda

> -----Original Message-----
> From: Ashish Dalela (adalela) [mailto:adalela@cisco.com]
> Sent: Friday, January 06, 2012 10:08 PM
> To: Linda Dunbar; dc@ietf.org
> Cc: Ronald Bonica
> Subject: RE: [dc] DCREQ Section 4 Outline
>=20
>=20
> Linda, Please see inline. Thanks, Ashish
>=20
>=20
> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> Linda Dunbar
> Sent: Saturday, January 07, 2012 4:04 AM
> To: dc@ietf.org
> Cc: Ronald Bonica
> Subject: [dc] DCREQ Section 4 Outline
>=20
>=20
> Section 4: "Data Center Characteristics" should at least add the
> following sub-sections:
> 	- Types of data centers: homogeneous content provider data
> centers, multi-tenant data centers (small/medium sized data centers,
> and
> mega sized data centers), Network service providers' data centers, etc.
>=20
>=20
> [AD] What is the difference between a multi-tenant DC and a NSP's DC?
> How about including enterprise datacenters as part of hybrid scenarios?

>=20
> 	Each of them has different characteristics. E.g. homogeneous
> content provider data center might not address fragmentation issues,
> but
> may have other issues.
>=20
> [AD] Do web/content providers need to have VM's? If not, how does the
> scale and mobility problem change for them?

[Linda] VMs allow the same number of physical servers to host a much
greater number of applications. A Data Center with VMs may not face a
mobility issue if it can re-assign IP addresses when a VM is moved from
Loc-A to Loc-B. A Data Center without VMs could face an address mobility
issue if it needs to instantiate an application with an IP address in
Subnet-A on a physical server on a rack which was under Subnet-B.

>=20
> 	-  A section on how location of load-balancing, firewall, and
> other middleware boxes determine where VMs/hosts in different segments
> can be aggregated/exchanged.
>=20
> [AD] You are talking about a L2-L3 boundary? Or something else? What
> happens when the firewall is in the Hypervisor?

[Linda] The issue is that the majority of servers (or hypervisors) in
Data Centers don't support the needed security features. Not many
low-cost top-of-rack (ToR) switches are equipped with these features
either. Under those circumstances, the traffic has to be funneled to the
switches/routers where the needed security features can be enforced.

>=20
> 	- A section on how applications are instantiated in data center.
> Applications servers (e.g. Oracle's middleware WebLogicServer)
> instantiate multiple instances of one application, assign them the IP
> addresses, and VM/server managers place them onto a server rack.
>=20
> [AD] How does that affect what problems we are discussing? E.g. isn't
> this a form of orchestration?

[Linda] Such middleware is a big part of data centers. Instances created
by one application server may need to be within short reach of each
other. The network may need to provide input to the orchestration system
on optimal locations where those instances (VMs) can be placed.
=20
>=20
> 	- A section to describe how backend data/storage are separated
> from the front end service networks.
>=20
> [AD] Isn't this adequately covered by the "Data is Immobile" section?
> What additions do you propose?

[Linda] Not really. Most servers use a different network (even different
physical ports) to reach the backend data storage. VMs can't be
instantiated on a server which is not connected to the backend storage
network.

=20
>=20
> Linda
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc

From adalela@cisco.com  Sun Jan  8 21:01:34 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 3490121F8642 for <dc@ietfa.amsl.com>; Sun,  8 Jan 2012 21:01:34 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.414
X-Spam-Level: 
X-Spam-Status: No, score=-2.414 tagged_above=-999 required=5 tests=[AWL=0.185,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id N5y5qhegfceu for <dc@ietfa.amsl.com>; Sun,  8 Jan 2012 21:01:33 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id 1B75C21F8646 for <dc@ietf.org>; Sun,  8 Jan 2012 21:01:31 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=5857; q=dns/txt; s=iport; t=1326085292; x=1327294892; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=Fp149ltxjXzTjG60B8RAzOSEkUImSltxKZjxLRIVG68=; b=WnNnyLRd6mRYh2W3i8giEJLv9FmeKTd0dmidNXhFZ1V5h8y8RAuk+9zW 8be7dkkKLtSXUDqTyYLp5P4BuCCrH8H3OrMyDU+z9lebwFr6f2YMRjGJE BByONq6zvKAWXk9zRSIkhE9hRpw6VVXMXtpqlgla99RxRO/tH1UTaPiAI c=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: Ap8EADV0Ck9Io8UY/2dsb2JhbABDrUuBcgEBAQMBAQEBDwEdCjQCCQwEAgEIEQQBAQsGFwEGASYfCQgBAQQBCggIGodYCJcqAZ1qBIsuYwSIN58T
X-IronPort-AV: E=Sophos;i="4.71,478,1320624000";  d="scan'208";a="3003225"
Received: from vla196-nat.cisco.com (HELO bgl-core-2.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 09 Jan 2012 05:01:29 +0000
Received: from xbh-bgl-411.cisco.com (xbh-bgl-411.cisco.com [72.163.129.201]) by bgl-core-2.cisco.com (8.14.3/8.14.3) with ESMTP id q0951THM030442; Mon, 9 Jan 2012 05:01:29 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-411.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Mon, 9 Jan 2012 10:31:29 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Mon, 9 Jan 2012 10:31:28 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102B26018@XMB-BGL-416.cisco.com>
In-Reply-To: <4A95BA014132FF49AE685FAB4B9F17F62A4E69E8@dfweml505-mbx>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] DCREQ Section 4 Outline
Thread-Index: AczMwzqSmKDf1zT6SlWzZwPqD8IVPQALcU3gAGOskjAAAiY4EA==
References: <4A95BA014132FF49AE685FAB4B9F17F62A4E6640@dfweml505-mbx> <618BE8B40039924EB9AED233D4A09C5102B25F2B@XMB-BGL-416.cisco.com> <4A95BA014132FF49AE685FAB4B9F17F62A4E69E8@dfweml505-mbx>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Linda Dunbar" <linda.dunbar@huawei.com>, <dc@ietf.org>
X-OriginalArrivalTime: 09 Jan 2012 05:01:29.0242 (UTC) FILETIME=[BF50C7A0:01CCCE8B]
Cc: Ronald Bonica <rbonica@juniper.net>
Subject: Re: [dc] DCREQ Section 4 Outline
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 09 Jan 2012 05:01:34 -0000

Hi Linda,

>> A Data Center without VMs could face an address mobility issue if it
needs to instantiate an application with an IP address in Subnet-A on a
physical server in a rack that was under Subnet-B.

So, you are saying there is application mobility without VM mobility?
Yes, that's a possibility: you recreate an application with the same
network address without moving state (i.e. the VM). All
hardware/software failures are going to be recovered in this way,
because you can't move a dead server/VM/application. That's not a small
thing either, and it is related to the reliability problem.

>> Under those circumstances, the traffic has to be funneled to the
switches/routers where the needed security features can be enforced.

I understand the problem. There is also the problem of multi-path. If
everything has to funnel through a firewall that is not placed at the
access, then you are basically breaking multi-path. Mobility
constraints are only one part of the problem.

>> Network may need to provide input to orchestration system on optimal
locations where those instances (VMs) can be placed

This is exactly right. This is described in the Cloud orchestration
problem, on why the cloud control plane has to interact with the network
control plane. You might have missed this.

>> Not really. Most servers use a different network (even different
physical ports) to reach the backend data storage. VMs can't be
instantiated on a server that is not connected to the backend storage
network.

Yeah, this presents greater challenges for SAN than for Ethernet/TCP/IP
networks. But it falls into the general problem of making the VM move
decision, which needs to know the network topology for a variety of
reasons, of which storage is one.
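
As a toy illustration of that placement constraint (my own sketch, with
made-up names, not anything from the draft), an orchestrator would filter
candidate servers by storage-network attachment before moving a VM:

```python
# Hypothetical sketch: filter VM placement candidates by whether the
# target server is attached to the backend storage network the VM needs.
def placement_candidates(servers, required_storage_net):
    """Names of servers eligible to host a VM whose data lives on the given SAN."""
    return [s["name"] for s in servers
            if required_storage_net in s["storage_nets"]]

servers = [
    {"name": "rack1-srv1", "storage_nets": {"san-a"}},
    {"name": "rack2-srv3", "storage_nets": {"san-b"}},
    {"name": "rack2-srv4", "storage_nets": {"san-a", "san-b"}},
]

# A VM with data on san-a can only be instantiated on san-a servers.
eligible = placement_candidates(servers, "san-a")
```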

Thanks, Ashish


-----Original Message-----
From: Linda Dunbar [mailto:linda.dunbar@huawei.com]
Sent: Monday, January 09, 2012 10:00 AM
To: Ashish Dalela (adalela); dc@ietf.org
Cc: Ronald Bonica
Subject: RE: [dc] DCREQ Section 4 Outline

Ashish,

Answers to your questions/comments are inserted below:

Linda

> -----Original Message-----
> From: Ashish Dalela (adalela) [mailto:adalela@cisco.com]
> Sent: Friday, January 06, 2012 10:08 PM
> To: Linda Dunbar; dc@ietf.org
> Cc: Ronald Bonica
> Subject: RE: [dc] DCREQ Section 4 Outline
>
>
> Linda, Please see inline. Thanks, Ashish
>
>
> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> Linda Dunbar
> Sent: Saturday, January 07, 2012 4:04 AM
> To: dc@ietf.org
> Cc: Ronald Bonica
> Subject: [dc] DCREQ Section 4 Outline
>
>
> Section 4: "Data Center Characteristics" should at least add the
> following sub-sections:
> 	- Types of data centers: homogeneous content provider data
> centers, multi-tenant data centers (small/medium sized data centers,
> and
> mega sized data centers), network service providers' data centers,
> etc.
>
>
> [AD] What is the difference between a multi-tenant DC and an NSP's DC?
> How about including enterprise datacenters as part of hybrid
> scenarios?

>
> 	Each of them has different characteristics. E.g. homogeneous
> content provider data center might not address fragmentation issues,
> but
> may have other issues.
>
> [AD] Do web/content providers need to have VM's? If not, how does the
> scale and mobility problem change for them?

[Linda] VMs allow the same number of physical servers to host a much
greater number of applications. A Data Center with VMs may not face the
mobility issue if it can re-assign IP addresses when a VM is moved from
Loc-A to Loc-B. A Data Center without VMs could face an address
mobility issue if it needs to instantiate an application with an IP
address in Subnet-A on a physical server in a rack that was under
Subnet-B.

>
> 	-  A section on how location of load-balancing, firewall, and
> other middleware boxes determine where VMs/hosts in different segments
> can be aggregated/exchanged.
>
> [AD] You are talking about a L2-L3 boundary? Or something else? What
> happens when the firewall is in the Hypervisor?

[Linda] The issue is that the majority of servers (or hypervisors) in a
Data Center don't support the needed security features. Not many
low-cost TORs are equipped with the features either. Under those
circumstances, the traffic has to be funneled to the switches/routers
where the needed security features can be enforced.

>
> 	- A section on how applications are instantiated in data center.
> Application servers (e.g. Oracle's WebLogic Server middleware)
> instantiate multiple instances of one application, assign them the IP
> addresses, and VM/server managers place them onto a server rack.
>
> [AD] How does that affect what problems we are discussing? E.g. isn't
> this a form of orchestration?

[Linda] Such middleware is a big part of data centers. Instances
created by one application server may need to be within short reach of
each other. The network may need to provide input to the orchestration
system on optimal locations where those instances (VMs) can be placed.

>
> 	- A section to describe how backend data/storage are separated
> from the front end service networks.
>
> [AD] Isn't this adequately covered by the "Data is Immobile" section?
> What additions do you propose?

[Linda] Not really. Most servers use a different network (even
different physical ports) to reach the backend data storage. VMs can't
be instantiated on a server that is not connected to the backend
storage network.

>
> Linda
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc

From diego@tid.es  Mon Jan  9 06:29:13 2012
Return-Path: <diego@tid.es>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id BA1D321F8785 for <dc@ietfa.amsl.com>; Mon,  9 Jan 2012 06:29:13 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -5.187
X-Spam-Level: 
X-Spam-Status: No, score=-5.187 tagged_above=-999 required=5 tests=[AWL=1.412,  BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id aDRHXKe2DbL1 for <dc@ietfa.amsl.com>; Mon,  9 Jan 2012 06:29:09 -0800 (PST)
Received: from correo-bck.tid.es (correo-bck.tid.es [195.235.93.200]) by ietfa.amsl.com (Postfix) with ESMTP id 029B421F8781 for <dc@ietf.org>; Mon,  9 Jan 2012 06:29:09 -0800 (PST)
Received: from sbrightmailg02.hi.inet (Sbrightmailg02.hi.inet [10.95.78.105]) by tid.hi.inet (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LXJ0083OC8FMX@tid.hi.inet> for dc@ietf.org; Mon, 09 Jan 2012 15:29:07 +0100 (MET)
Received: from vanvan (vanvan.hi.inet [10.95.78.49])	by sbrightmailg02.hi.inet (Symantec Messaging Gateway) with SMTP id E2.9A.02643.3B9FA0F4; Mon, 09 Jan 2012 15:29:07 +0100 (CET)
Received: from correo.tid.es (mailhost.hi.inet [10.95.64.100]) by tid.hi.inet (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTPS id <0LXJ00849C8JMX@tid.hi.inet> for dc@ietf.org; Mon, 09 Jan 2012 15:29:07 +0100 (MET)
Received: from EXCLU2K7.hi.inet ([10.95.67.65]) by htcasmad2.hi.inet ([192.168.0.2]) with mapi; Mon, 09 Jan 2012 15:29:07 +0100
Date: Mon, 09 Jan 2012 15:29:04 +0100
From: DIEGO LOPEZ GARCIA <diego@tid.es>
In-reply-to: <4A95BA014132FF49AE685FAB4B9F17F62A4E65BD@dfweml505-mbx>
To: "dc@ietf.org" <dc@ietf.org>
Message-id: <D2DAE86C-9194-4C5F-9103-ADD8E4D51A2F@tid.es>
MIME-version: 1.0
Content-type: text/plain; charset=Windows-1252
Content-language: en-US
Content-transfer-encoding: quoted-printable
Accept-Language: en-US
Thread-topic: [dc] DC Work Plan
Thread-index: AczO2wrqPzKEvKWRT/6m+xxRtEmvmA==
acceptlanguage: en-US
X-AuditID: 0a5f4e69-b7f6b6d000000a53-eb-4f0af9b320b3
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
X-Brightmail-Tracker: H4sIAAAAAAAAA+NgFprKKsWRmVeSWpSXmKPExsXCFe9nqLv5J5e/wZI15hYt5++yOjB6LFny kymAMYrLJiU1J7MstUjfLoEr497/FUwFN7gqJkydytTAeIOji5GTQ0LAROLgrgNsELaYxIV7 64FsLg4hgW2MEntPr2eEcL4ySiyccZ0FwmlklPi2ZAILSAuLgKrE4kXrGEFsNgF1iZaj38Di wgJyEi/PfWQGsTkFXCU2H+5nB7FFBOQlnm7ZBVbDK2ApsW3LW3YIW1Dix+R7YHFmAT2Jj39u M0LY4hLNrTeh4toST95dYAWxGYFO/X5qDRPMzOstm4FsdiBbT+KhIESFqMSd9vWMEI8JSCzZ c54ZwhaVePn4HyvEK22MEm/uHWWZwCg2C8kVs5BcMQvJFbOQXLGAkWUVo1hxUlFmekZJbmJm TrqBkV5Gpl5mXmrJJkZIvGTuYFy+U+UQowAHoxIP74wMLn8h1sSy4srcQ4ySHExKory/vwOF +JLyUyozEosz4otKc1KLDzFKcDArifBW9gHleFMSK6tSi/JhUjIcHEoSvDrA2BYSLEpNT61I y8wBJgWYNBMHJ0g7D1C7JkgNb3FBYm5xZjpE/hSjKsf1xrnnGIVY8vLzUqXEeX/+ACoSACnK KM2Dm/OKURzoYGFeFZARPMC0BjfhFdBwJqDhD/6wgwwvSURISTUwBn1/u/7gLV2JF51HKmZw /Mm/rluj+air+tgHtzuhu/dLazkuU9v0L0GnYd6L6L9ME0+FTJnVk9DCy1ByOO7P+qq4DaZr GhbxVQTHqP7deGzxxEV8rp9Xegt88/YUjNFK4mpsl0y9EuM3s+BWP5PeUiNWzuOTbvxKVTDc cfc3y0e1uRxu9zIeK7EUZyQaajEXFScCAELkoHcoAwAA
References: <13205C286662DE4387D9AF3AC30EF456D74EEEEF99@EMBX01-WF.jnpr.net> <4A95BA014132FF49AE685FAB4B9F17F62A4E65BD@dfweml505-mbx>
Subject: Re: [dc] DC Work Plan
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 09 Jan 2012 14:29:13 -0000

Hi Ashish,

I'd like to see a couple of additions to the document on requirements.

First, section 4 should introduce what is discussed later under 5.9
(cloud control) as "deep control": reaching from the application layer
down to L2 properties is a characteristic rather specific to cloud
datacenter infrastructures.

Second, section 5 should add a discussion of AAA aspects: how identity
is mapped down the protocol stack from the requesting user's
authentication and authorization data, and conversely how accounting
records are marked and aggregated up the protocol stack to provide
appropriate information and, eventually, to match it against SLA
requirements. I think this would deserve a separate subsection, but it
could be included under 5.9 as well.

Be goode,

--
"Esta vez no fallaremos, Doctor Infierno"

Dr Diego R. Lopez
Telefonica I+D

e-mail: diego@tid.es
Tel:      +34 913 129 041
Mobile: +34 682 051 091
-----------------------------------------


Este mensaje se dirige exclusivamente a su destinatario. Puede consultar
nuestra política de envío y recepción de correo electrónico en el enlace
situado más abajo.
This message is intended exclusively for its addressee. We only send and
receive email on the basis of the terms set out at:
http://www.tid.es/ES/PAGINAS/disclaimer.aspx

From yakov@juniper.net  Mon Jan  9 07:43:50 2012
Return-Path: <yakov@juniper.net>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 2436121F8821 for <dc@ietfa.amsl.com>; Mon,  9 Jan 2012 07:43:50 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.288
X-Spam-Level: 
X-Spam-Status: No, score=-106.288 tagged_above=-999 required=5 tests=[AWL=-0.004, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, SARE_MILLIONSOF=0.315, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 4kEUpzc2GI0G for <dc@ietfa.amsl.com>; Mon,  9 Jan 2012 07:43:49 -0800 (PST)
Received: from exprod7og113.obsmtp.com (exprod7og113.obsmtp.com [64.18.2.179]) by ietfa.amsl.com (Postfix) with ESMTP id 25A2A21F881C for <dc@ietf.org>; Mon,  9 Jan 2012 07:43:41 -0800 (PST)
Received: from P-EMHUB02-HQ.jnpr.net ([66.129.224.36]) (using TLSv1) by exprod7ob113.postini.com ([64.18.6.12]) with SMTP ID DSNKTwsLHkj9yzMc67aC5egSr8tPNBQe12BP@postini.com; Mon, 09 Jan 2012 07:43:41 PST
Received: from magenta.juniper.net (172.17.27.123) by P-EMHUB02-HQ.jnpr.net (172.24.192.33) with Microsoft SMTP Server (TLS) id 8.3.213.0; Mon, 9 Jan 2012 07:42:44 -0800
Received: from juniper.net (sapphire.juniper.net [172.17.28.108])	by magenta.juniper.net (8.11.3/8.11.3) with ESMTP id q09FghS72850; Mon, 9 Jan 2012 07:42:44 -0800 (PST)	(envelope-from yakov@juniper.net)
Message-ID: <201201091542.q09FghS72850@magenta.juniper.net>
To: Xuxiaohu <xuxiaohu@huawei.com>
In-Reply-To: <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE7642D0@szxeml525-mbs.china.huawei.com>
References: <13205C286662DE4387D9AF3AC30EF456D74E7BD952@EMBX01-WF.jnpr.net> <6665BC1FEA04AB47B1F75FA641C43BC0926D106C@FHDP1LUMXC7V41.us.one.verizon.com> <201112211830.pBLIUiZA017188@cichlid.raleigh.ibm.com> <13205C286662DE4387D9AF3AC30EF456D74E91DBFD@EMBX01-WF.jnpr.net> <618BE8B40039924EB9AED233D4A09C5102A4881E@XMB-BGL-416.cisco.com> <4EF7B019.3030202@riw.us> <201112281700.pBSH0kB2011575@cichlid.raleigh.ibm.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE76387F@szxeml525-mbs.china.huawei.com> <229A8E99-6EB2-49E5-B530-BA0F6C7C40AC@asgaard.org> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE7639B3@szxeml525-mbs.china.huawei.com> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE763ACD@szxeml525-mbs.china.huawei.com> <201201031432.q03EWhS44922@magenta.juniper.net> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE76405C@szxeml525-mbs.china.huawei.com> <201201041301.q04D1kS47564@magenta.juniper.net> <1FEE3F8F5CCDE64C9A8E8F4AD27C19EE7642D0@szxeml525-mbs.china.huawei.com>
X-MH-In-Reply-To: Xuxiaohu <xuxiaohu@huawei.com> message dated "Thu, 05 Jan 2012 01:33:35 +0000."
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-ID: <14489.1326123763.1@juniper.net>
Date: Mon, 9 Jan 2012 07:42:43 -0800
From: Yakov Rekhter <yakov@juniper.net>
Cc: "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] Elevator Pitch
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 09 Jan 2012 15:43:50 -0000

Xiaohu,

> > Xuxiaohu,
> > 
> > > Hi Yakov,
> > >
> > > > Xuxiaohu,
> > > >
> > > > > Hi all,
> > > > >
> > > > > Since there are some differences in the problems and requirements
> > > > > between data center network (DCN) and data center interconnect
> > > > > (DCI), I try to list several problems and requirements for DCN and
> > > > > DCI separately as follows. Here the dat a centers mainly refer to
> > > > > those multi-tenant data centers which are operated by public cloud
> > > > > providers to deliver cloud service (i.e., IaaS) to their customers
> > > > > (i.e., tenants).
> > > > >
> > > > > 1. DCN problems and requirements:
> > > > >
> > > > > 1) VM mobility across multiple pods -> LAN/subnet extension across pods
> > > > >
> > > > > 2) Some cluster applications use non-IP or link-local multicast
> > > > >     (optional) -> Layer2 networking
> > > > >
> > > > > 3) Multi-tenancy isolation -> VPN/VLAN instance scalability
> > > > >
> > > > > 4) Millions of VMs -> MAC/IP forwarding table scalability
> > > > >
> > > > > 5) Increasing bandwidth demands for server-to-server connectivity
> > > > >    (i.e., east-west traffic)-> ECMP and shortest path forwarding
> > > > >    capabilities
> > > > >
> > > > > 6) Network resiliency -> Fast convergence and multi-homing
> > > >
> > > > Do you need fast routing convergence, or fast connectivity restoration?
> > >
> > > Both.
> > 
> > I understand the rationale for fast connectivity restoration. But
> > given that there is fast connectivity restoration, why would one
> > also need fast routing convergence ?
> 
> Hi Yakov,
> 
> I have a very similar question for you: given that there is fast
> connectivity restoration, why did we also need fast routing convergence
> for the Internet routing system?

Because fast routing convergence is *one* possible way to accomplish
fast connectivity restoration. Having said this, I want to make it
clear that while it is one possible way, it is *not* the only way
possible. 

Moreover, when talking about routing convergence one needs to
distinguish between the cases where (a) during routing convergence
connectivity is preserved, although perhaps along suboptimal routes,
and (b) during routing convergence connectivity is disrupted.
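
A toy model of the difference (my own sketch, not anything from the
thread or any vendor implementation): restoration can come from a
precomputed backup next hop installed in advance, while convergence
later installs the freshly recomputed best path.

```python
# Illustrative sketch: connectivity restoration via a precomputed backup
# next hop (immediate), vs. routing convergence (recomputation, later).
class Route:
    def __init__(self, primary, backup):
        self.primary = primary     # current best next hop
        self.backup = backup       # precomputed alternate next hop
        self.primary_up = True

    def next_hop(self):
        # Restoration: switch to the backup as soon as the primary fails,
        # without waiting for the routing protocol to reconverge.
        return self.primary if self.primary_up else self.backup

    def converge(self, new_best):
        # Convergence: install the recomputed best path. This is case (a):
        # traffic kept flowing, possibly suboptimally, in the meantime.
        self.primary = new_best
        self.primary_up = True

r = Route(primary="if-1", backup="if-2")
r.primary_up = False          # link failure detected
restored = r.next_hop()       # connectivity restored at once via if-2
r.converge("if-3")            # later, recomputation picks a new best path
converged = r.next_hop()
```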

Yakov.

From rbonica@juniper.net  Mon Jan  9 07:43:56 2012
Return-Path: <rbonica@juniper.net>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 0F6BD21F881C for <dc@ietfa.amsl.com>; Mon,  9 Jan 2012 07:43:56 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.532
X-Spam-Level: 
X-Spam-Status: No, score=-106.532 tagged_above=-999 required=5 tests=[AWL=0.067, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id W+K6DIYFwfjE for <dc@ietfa.amsl.com>; Mon,  9 Jan 2012 07:43:51 -0800 (PST)
Received: from exprod7og126.obsmtp.com (exprod7og126.obsmtp.com [64.18.2.206]) by ietfa.amsl.com (Postfix) with ESMTP id 4C37D21F8820 for <dc@ietf.org>; Mon,  9 Jan 2012 07:43:49 -0800 (PST)
Received: from P-EMHUB02-HQ.jnpr.net ([66.129.224.36]) (using TLSv1) by exprod7ob126.postini.com ([64.18.6.12]) with SMTP ID DSNKTwsLNDtIkQdPYVsNVqo6GFPyl0z0ineb@postini.com; Mon, 09 Jan 2012 07:43:49 PST
Received: from p-emfe01-wf.jnpr.net (172.28.145.24) by P-EMHUB02-HQ.jnpr.net (172.24.192.36) with Microsoft SMTP Server (TLS) id 8.3.213.0; Mon, 9 Jan 2012 07:43:02 -0800
Received: from EMBX01-WF.jnpr.net ([fe80::1914:3299:33d9:e43b]) by p-emfe01-wf.jnpr.net ([fe80::d0d1:653d:5b91:a123%11]) with mapi; Mon, 9 Jan 2012 10:42:57 -0500
From: Ronald Bonica <rbonica@juniper.net>
To: "dc@ietf.org" <dc@ietf.org>
Date: Mon, 9 Jan 2012 10:42:54 -0500
Thread-Topic: DC Interim Meeting: CANCELLED
Thread-Index: AczO5Vp43NMDU3IHTrWcJmRSbGDHtg==
Message-ID: <13205C286662DE4387D9AF3AC30EF456D74EFCD832@EMBX01-WF.jnpr.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Subject: [dc] DC Interim Meeting: CANCELLED
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 09 Jan 2012 15:43:56 -0000

Folks,

In light of recent discussion on this mailing list, I think it is
unlikely that the requirements document will be sufficiently complete to
discuss on February 21. Therefore, the DC Interim Meeting scheduled for
that date has been cancelled.

Let's continue on our work plan, with a goal of producing a requirements
document by IETF 83. Stewart and I will arrange for a BoF to discuss the
requirements draft in Paris.

--------------------------
Ron Bonica
vcard:       www.bonica.org/ron/ronbonica.vcf



From lizho.jin@gmail.com  Mon Jan  9 07:00:11 2012
Return-Path: <lizho.jin@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id E581221F8783 for <dc@ietfa.amsl.com>; Mon,  9 Jan 2012 07:00:11 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.598
X-Spam-Level: 
X-Spam-Status: No, score=-3.598 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id uya3WfLKvjPD for <dc@ietfa.amsl.com>; Mon,  9 Jan 2012 07:00:11 -0800 (PST)
Received: from mail-qw0-f51.google.com (mail-qw0-f51.google.com [209.85.216.51]) by ietfa.amsl.com (Postfix) with ESMTP id 3AA5D21F877A for <dc@ietf.org>; Mon,  9 Jan 2012 07:00:11 -0800 (PST)
Received: by qadz3 with SMTP id z3so2213523qad.10 for <dc@ietf.org>; Mon, 09 Jan 2012 07:00:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:date:message-id:subject:from:to:cc:content-type; bh=QUJkVHUr0un7X2X36SEexRcbkuDEJqEjqN3RQIxmPDI=; b=LcfQOXY8iStMLx2tDPYj7W4G780vSigtlaU28nb/KX3b1rt/6oDljBBjMmACZBKQVe 3ENQBkT724fWRK9t8cop1YITDSec2HRLZRCkMH5NZA+ht5IisfdVuttsi83Tkb5hRz5s 37dLjCnv2RjmUXGbcX/ug+k25tadNn471d60Y=
MIME-Version: 1.0
Received: by 10.224.31.202 with SMTP id z10mr19144405qac.96.1326121210773; Mon, 09 Jan 2012 07:00:10 -0800 (PST)
Received: by 10.224.52.200 with HTTP; Mon, 9 Jan 2012 07:00:10 -0800 (PST)
Date: Mon, 9 Jan 2012 23:00:10 +0800
Message-ID: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com>
From: Lizhong Jin <lizho.jin@gmail.com>
To: adalela@cisco.com
Content-Type: multipart/alternative; boundary=20cf3074b14cb2814b04b619a7b4
X-Mailman-Approved-At: Mon, 09 Jan 2012 08:27:37 -0800
Cc: Lizhong Jin <lizhong.jin@zte.com.cn>, dc@ietf.org
Subject: [dc] Comment of draft-dalela-dc-requirements
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 09 Jan 2012 15:00:12 -0000

--20cf3074b14cb2814b04b619a7b4
Content-Type: text/plain; charset=ISO-8859-1

Hi Ashish,
I have several comments on this requirements draft. Thanks.

Section 5.3, multi-tenancy problem. From my side, the multi-tenancy
problem is not only the scalability of the segmentation-ID; it is also
necessary to isolate performance and security among tenants. For
example, one tenant should not suffer denial-of-service attacks from
another tenant within the same datacenter. A detailed example: huge
broadcast traffic from one tenant should not influence the performance
and availability of other tenants.
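
To make the isolation point concrete, here is a rough per-tenant policer
sketch (mine, purely illustrative; the draft does not specify any such
mechanism): each tenant draws broadcast credits from its own token
bucket, so one tenant's storm cannot exhaust another tenant's share.

```python
# Hypothetical per-tenant broadcast policer: each tenant has its own
# token bucket, so one tenant's broadcast storm cannot starve the rest.
class TenantPolicer:
    def __init__(self, rate, burst):
        self.rate = rate                  # tokens refilled per second
        self.burst = burst                # bucket depth
        self.state = {}                   # tenant -> (tokens, last_time)

    def allow(self, tenant, now):
        tokens, last = self.state.get(tenant, (self.burst, now))
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.state[tenant] = (tokens - 1.0, now)
            return True                   # forward the broadcast frame
        self.state[tenant] = (tokens, now)
        return False                      # drop: tenant over its share

p = TenantPolicer(rate=1.0, burst=2.0)
# Tenant A bursts: the first two frames pass, the third is dropped...
a = [p.allow("tenant-a", t) for t in (0.0, 0.0, 0.0)]
# ...while tenant B is unaffected by A's storm.
b = p.allow("tenant-b", 0.0)
```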

Section 5.3, "The use of L3 VRFs also poses similar challenges of
scaling". The challenge for VRFs is quite different from that for
VLANs: it is the scalability of the forwarding table. You also said:
"With VRFs, these entries will be present even if there is no traffic
from a host to other hosts in the VRF". I think this could be optimized
so that the FIB stores only active route entries, while the RIB stores
all route entries.
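
The FIB-vs-RIB optimization I have in mind can be sketched like this (my
own toy model, with made-up prefixes and interface names, not text from
the draft): the RIB keeps every learned route, and the FIB is populated
lazily, only for destinations that actually carry traffic.

```python
# Sketch: a RIB holding all learned routes, with a FIB that is filled
# on demand, only for destinations that see traffic.
class Router:
    def __init__(self):
        self.rib = {}   # all learned routes: prefix -> next hop
        self.fib = {}   # active routes only, installed on first use

    def learn(self, prefix, next_hop):
        self.rib[prefix] = next_hop

    def forward(self, prefix):
        # Install the route into the FIB only when traffic appears.
        if prefix not in self.fib:
            if prefix not in self.rib:
                return None          # no route: drop
            self.fib[prefix] = self.rib[prefix]
        return self.fib[prefix]

r = Router()
r.learn("10.0.0.0/24", "ge-0/0/1")
r.learn("10.0.1.0/24", "ge-0/0/2")
r.forward("10.0.0.0/24")
# Only the prefix that saw traffic occupies FIB space.
```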

Section 5.5, last paragraph about mobility. It seems that this paragraph
is not much related to "network convergence" but is about "host mobility
impact on network resources". Meanwhile, section 5.1 is about the "host
mobility impact on L2/3 forwarding" and 5.10 is about the "host mobility
impact on forwarding tables". I suggest re-organizing these three parts,
which are all related to the impact of host mobility.

Regards
Lizhong

--20cf3074b14cb2814b04b619a7b4--

From spencer@wonderhamster.org  Mon Jan  9 15:46:10 2012
Return-Path: <spencer@wonderhamster.org>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 150B921F84FB for <dc@ietfa.amsl.com>; Mon,  9 Jan 2012 15:46:10 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -102.555
X-Spam-Level: 
X-Spam-Status: No, score=-102.555 tagged_above=-999 required=5 tests=[AWL=0.044, BAYES_00=-2.599, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 55jAoQ7hZjJf for <dc@ietfa.amsl.com>; Mon,  9 Jan 2012 15:46:09 -0800 (PST)
Received: from mout.perfora.net (mout.perfora.net [74.208.4.194]) by ietfa.amsl.com (Postfix) with ESMTP id 7C0B721F8469 for <dc@ietf.org>; Mon,  9 Jan 2012 15:46:09 -0800 (PST)
Received: from [10.0.0.232] (user.216.126.222.zhong-ren.net [222.126.216.9]) by mrelay.perfora.net (node=mrus3) with ESMTP (Nemesis) id 0LkfRE-1SKpf03u7A-00b6K7; Mon, 09 Jan 2012 18:46:08 -0500
Message-ID: <4F0B7C38.7030307@wonderhamster.org>
Date: Mon, 09 Jan 2012 17:46:00 -0600
From: Spencer Dawkins <spencer@wonderhamster.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:8.0) Gecko/20111105 Thunderbird/8.0
MIME-Version: 1.0
To: dc@ietf.org
References: <13205C286662DE4387D9AF3AC30EF456D74EFCD832@EMBX01-WF.jnpr.net>
In-Reply-To: <13205C286662DE4387D9AF3AC30EF456D74EFCD832@EMBX01-WF.jnpr.net>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
X-Provags-ID: V02:K0:hecxHVVoZwJ8Rk4PRH1bAEpg0q5ByZ5eati6nSF1uAC teHkZJ6W5u8MQXfD0YgeCbSXNDMlYxE+EQyoN7t00AJ5I48c5S sfVoDJPJ+D7nLnYTbchP6Zn41NayoU2telVLyS/HHjJ4jZZ91b bhyHh94EC0+Dzqk5kJSbw7AYeBlrjfCuT3GhoPzUW3Q5m9cNY+ QONZn3l+aC79tSduEiDT7P78FXIZFG3rtwLISOoH/NsknIMQBG CqWwyrQ/8emiA71tAVWGkLz0xkDGAjl9hWq9pxHQQ90IWRw5qh OTFQve9KqOOVwnaNMaij6KnjalpdM4YANrzo9NYgv9swwo9YdB Ki/jBQ4D9D5xKIrrfns5Qejl5Od4XDHMtd4u+RJ6h
Subject: Re: [dc] DC Interim Meeting: CANCELLED
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Mon, 09 Jan 2012 23:46:10 -0000

On 1/9/2012 9:42 AM, Ronald Bonica wrote:
> Folks,
>
> In light of recent discussion on this mailing list, I think that it is unlikely that the requirements document will be sufficiently complete to discuss on February 21. Therefore, the DC Interim Meeting scheduled for that date has been cancelled.
>
> Let's continue on our work plan, with a goal of producing a requirements document by IETF 83. Stewart and I will arrange for a BoF to discuss the requirements draft in Paris.

Hi, Ron,

I'm pretty sure I know the answer, but just to make sure - your 
intention is that "continue on our work plan" happens on THIS mailing 
list, right?

Thanks,

Spencer

From david.black@emc.com  Mon Jan  9 22:17:35 2012
Return-Path: <david.black@emc.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id F330121F8803 for <dc@ietfa.amsl.com>; Mon,  9 Jan 2012 22:17:34 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.592
X-Spam-Level: 
X-Spam-Status: No, score=-106.592 tagged_above=-999 required=5 tests=[AWL=0.007, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id hRIIw9wF67Zb for <dc@ietfa.amsl.com>; Mon,  9 Jan 2012 22:17:34 -0800 (PST)
Received: from mexforward.lss.emc.com (mexforward.lss.emc.com [128.222.32.20]) by ietfa.amsl.com (Postfix) with ESMTP id D4C0B21F8800 for <dc@ietf.org>; Mon,  9 Jan 2012 22:17:33 -0800 (PST)
Received: from hop04-l1d11-si01.isus.emc.com (HOP04-L1D11-SI01.isus.emc.com [10.254.111.54]) by mexforward.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q0A6HUd9003785 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Tue, 10 Jan 2012 01:17:31 -0500
Received: from mailhub.lss.emc.com (mailhub.lss.emc.com [10.254.222.129]) by hop04-l1d11-si01.isus.emc.com (RSA Interceptor); Tue, 10 Jan 2012 01:17:15 -0500
Received: from mxhub20.corp.emc.com (mxhub20.corp.emc.com [10.254.93.49]) by mailhub.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q0A6HFsx027327; Tue, 10 Jan 2012 01:17:15 -0500
Received: from mx14a.corp.emc.com ([169.254.1.99]) by mxhub20.corp.emc.com ([10.254.93.49]) with mapi; Tue, 10 Jan 2012 01:17:14 -0500
From: <david.black@emc.com>
To: <adalela@cisco.com>
Date: Tue, 10 Jan 2012 01:17:13 -0500
Thread-Topic: [dc] DC Work Plan
Thread-Index: AczMx8Uehz9tTZwlQBi1/eQTndZ9sgAMhyjQAJeklkA=
Message-ID: <7C4DFCE962635144B8FAE8CA11D0BF1E05A7B80B03@MX14A.corp.emc.com>
References: <13205C286662DE4387D9AF3AC30EF456D74EEEEF99@EMBX01-WF.jnpr.net><4F07488E.2070103@raszuk.net><13205C286662DE4387D9AF3AC30EF456D74EEEF153@EMBX01-WF.jnpr.net><201201062201.q06M0xkO008099@cichlid.raleigh.ibm.com><4F07797A.3090907@cisco.com> <201201062305.q06N5i8o008460@cichlid.raleigh.ibm.com> <618BE8B40039924EB9AED233D4A09C5102B25F2F@XMB-BGL-416.cisco.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102B25F2F@XMB-BGL-416.cisco.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-EMM-MHVC: 1
Cc: dc@ietf.org
Subject: Re: [dc] DC Work Plan
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 10 Jan 2012 06:17:35 -0000

Ashish,

I need to find the time to seriously take a look at your requirements draft,
but Linda's already pointed out a number of things that need attention.

I also think you're dodging a rather important piece of the problem:

> It is indeed IETF tradition to solve individual problems individually.
> What makes it different in this case is that a customer spans multiple
> datacenters. That means VM mobility, segmentation, broadcast, multicast,
> unicast, policy (e.g. bandwidth) are spanning across both intra and
> inter-DC.

For the L2 networking service usually used for VM mobility, that service
has to be provided consistently in the data centers and across the
network that interconnects them.

> If we follow the IETF tradition here, it will mean that mobility,
> segmentation, broadcast, multicast, policy will potentially all be done
> in two sets of ways across and between datacenters, because these can
> indeed be stated as separate problems.

That's a fine exercise in strawman demolition, but it's beside the point,
because it ignores layering of network services.  Nobody is proposing
decomposing the L2 networking service used by the VMs into the functional
sub-components listed above.  The L2 service used by the VMs has to be
consistent, but it can be run over a variety of underlying infrastructure.

> Now, you have a huge issue in
> mapping one type of scheme into another, given the many possible ways to
> do each of mobility, segmentation, broadcast, multicast, policy,
> themselves.

I think this comment is seriously off-course, as I think the mapping problem
among the intra-dc and inter-dc technologies is important.

For the sake of argument, assume that this mapping problem is intractable
and one solution should be selected, starting with the data planes.  For
the L2 networking service that is currently expected for VM mobility,
there are at least 4 interesting data planes for the data center - IEEE
SPB/PBB, TRILL, and the two new NVO3 planes (VxLAN and NVGRE).  Framing
the "problem" as picking one of those data planes as the "winner" requires
deciding which 3 of those 4 proverbial "oxen" to gore.  The "oxen" tend
to win that sort of battle :-(.

As Robert (and others) have pointed out, there's an interesting opportunity
here, in that all four of these data planes are converging on a 24-bit
network identifier for isolation purposes.  For these technologies, VM
mobility and segmentation basically work, so there's an opportunity to
develop some commonality among the other aspects by attacking the
interesting portions of the mapping problem instead of complaining about:

> The total number of permutations and combinations arising from these
> many alternatives will eventually mean that things don't work together.

We may not be able to map everything to everything, but there's enough
of an opportunity here to do something interesting and useful.  A common
interface between the L2 service in the data center and VPNs that carry
that L2 service across data centers is one productive "place to dig".
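
That 24-bit convergence is concrete enough to show in code. A minimal
sketch (Python, VxLAN-style header layout per the VxLAN draft; the
function names are mine, for illustration only) of packing and unpacking
such an identifier:

```python
import struct

# Sketch only: all four data planes above carry a 24-bit tenant/segment
# identifier; this shows the VxLAN-style layout (8-byte header, "I" flag
# in the first byte, VNI in bytes 4-6). Function names are illustrative.

VNI_VALID_FLAG = 0x08  # "I" bit: the VNI field is valid

def encode_vxlan_header(vni: int) -> bytes:
    """Pack a 24-bit VNI into an 8-byte VxLAN-style header."""
    if not 0 <= vni < 1 << 24:
        raise ValueError("VNI must fit in 24 bits")
    # Byte 0: flags; bytes 1-3: reserved; bytes 4-6: VNI; byte 7: reserved.
    return struct.pack("!B3xI", VNI_VALID_FLAG, vni << 8)

def decode_vxlan_header(header: bytes) -> int:
    """Recover the 24-bit VNI from an 8-byte header."""
    flags, last_word = struct.unpack("!B3xI", header)
    if not flags & VNI_VALID_FLAG:
        raise ValueError("VNI-valid flag not set")
    return last_word >> 8
```

Whatever the encapsulation calls it (I-SID, fine-grained label, VNI,
VSID), that isolation identifier is the natural place to define a
common mapping interface.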

As for the cited examples, I really wonder ...
> If intra-dc is directory based, then it doesn't help inter-dc to be
> control plane based.

The reverse is much more likely.  Gratuitous ARPs predate VM mobility
by many years; they are a fact of life and need to work in data
centers.
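
For context, a gratuitous ARP is just an ordinary ARP request in which
a host announces its own IP-to-MAC binding (sender and target IP are
both its own), so neighbors refresh their caches and switches relearn
the MAC's port after a move. A hedged sketch of the frame layout
(construction only; actually sending it would need a raw socket):

```python
import struct

def build_gratuitous_arp(mac: bytes, ip: bytes) -> bytes:
    """Build a broadcast Ethernet frame carrying a gratuitous ARP request.

    Sender and target protocol addresses are both the host's own IP, so
    every station on the segment updates its ARP cache and the switches
    relearn where the MAC now lives (e.g. after a VM move)."""
    ethernet = struct.pack("!6s6sH",
                           b"\xff" * 6,  # destination: broadcast
                           mac,          # source: the announcing host
                           0x0806)       # EtherType: ARP
    arp = struct.pack("!HHBBH6s4s6s4s",
                      1,       # hardware type: Ethernet
                      0x0800,  # protocol type: IPv4
                      6, 4,    # hardware/protocol address lengths
                      1,       # opcode: request
                      mac, ip,           # sender MAC and IP
                      b"\x00" * 6, ip)   # target: same IP, MAC unknown
    return ethernet + arp
```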

> If intra-dc uses VLAN for segmentation then it
> doesn't help inter-dc to use GRE.
That's wrong, sorry.  See draft-hasmit-otv-03 for a counterexample.
OTV happens to use UDP instead of GRE, but it's the same basic idea.

> If the unicast is multi-path then it
> doesn't help if multicast and broadcast are static trees.

Bad idea - don't do that ... thereby reducing the interesting portion
of the problem space.

> If one DC is L2 encap then it doesn't help if another DC is L3 encap.=20

Really?  http://www.ietf.org/proceedings/82/slides/l2vpn-8.pdf from
the Taipei proceedings strongly suggests otherwise.

> Note that this
> is already discounting the fact that this has to interoperate with
> classic L2/L3 networks. That may mean a L3 OSPF routing table entry into
> a L2 IS-IS forwarding entry.

That's also wrong - the L2 IS-IS functionality is encapsulated in a
fashion that the L3 routing can't see - OSPF at L3 has no idea that
IS-IS is running at L2.

Thanks,
--David

> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Ashish Dalela (adalela)
> Sent: Saturday, January 07, 2012 12:39 AM
> To: Thomas Narten; Stewart Bryant (stbryant)
> Cc: Ronald Bonica; dc@ietf.org; robert@raszuk.net
> Subject: Re: [dc] DC Work Plan
>
> Thomas,
>
> >> I think it's much more helpful (and IETF tradition!) to try and
> identify individual problems and where possible, solve problems
> individually (i.e., divide and conquer).
>
> It is indeed IETF tradition to solve individual problems individually.
> What makes it different in this case is that a customer spans multiple
> datacenters. That means VM mobility, segmentation, broadcast, multicast,
> unicast, policy (e.g. bandwidth) are spanning across both intra and
> inter-DC.
>
> If we follow the IETF tradition here, it will mean that mobility,
> segmentation, broadcast, multicast, policy will potentially all be done
> in two sets of ways across and between datacenters, because these can
> indeed be stated as separate problems. Now, you have a huge issue in
> mapping one type of scheme into another, given the many possible ways to
> do each of mobility, segmentation, broadcast, multicast, policy,
> themselves.
>
> Added to the above is the fact that there are divergent approaches - L2
> vs. L3, host-based vs. network-based, overlay vs. flat. Just to solve
> address resolution, there is more than one approach - e.g. directory
> based or control plane based.
>
> The total number of permutations and combinations arising from these
> many alternatives will eventually mean that things don't work together.
>
> If intra-dc is directory based, then it doesn't help inter-dc to be
> control plane based. If intra-dc uses VLAN for segmentation then it
> doesn't help inter-dc to use GRE. If the unicast is multi-path then it
> doesn't help if multicast and broadcast are static trees. If one DC is
> L2 encap then it doesn't help if another DC is L3 encap. Note that this
> is already discounting the fact that this has to interoperate with
> classic L2/L3 networks. That may mean a L3 OSPF routing table entry into
> a L2 IS-IS forwarding entry. Proliferation of divergent approaches just
> worsens interoperability.
>
> I understand your concern, and that it is different than IETF tradition.
> But we need to choose between tradition and interoperability.
>
> Thanks, Ashish
>
>
> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> Thomas Narten
> Sent: Saturday, January 07, 2012 4:36 AM
> To: Stewart Bryant (stbryant)
> Cc: Ronald Bonica; robert@raszuk.net; dc@ietf.org
> Subject: Re: [dc] DC Work Plan
>
> Hi Stewart.
>
> > I think that SDN has wider applicability than DC and thus has a
> > life (or death) of its own.
>
> It's not just that it has wider applicability; it's that that effort
> seems somewhat self-contained, and I can see how one might carve out an
> effort in this space.
>
> > NVO3 is a candidate solution to the DC problem,
>
> This is probably just quick typing, but I think it is worth saying
> that we are not helping ourselves by thinking there is *one* DC
> problem.
>
> There are a number of possible problems. Some relate to each other
> more, some less. But I think it is not helpful to try and view this
> area as having *one* problem that needs sorting out.
>
> I think it's much more helpful (and IETF tradition!) to try and
> identify individual problems and, where possible, solve problems
> individually (i.e., divide and conquer). Of course, there can be
> interdependencies between problem and solution spaces, but we are
> likely going to flail if we try to view this as one big problem
> needing one overall solution approach. Or that we can't work on sub
> problems unless we understand the entire problem space.
>
> > but from the discussion on the list I am yet to be convinced that we
> > have either the right problem statement to endorse it as the
> > approach for L2, let alone to determine whether we need a L2
> > solution, a L3 solution or a mixed solution.
>
> NVO3 is aimed at a subset of the DC "problem area". The bullet point
> summary is that it's a different way of providing multi-tenancy in the
> DC. It is not the *only* way. But I think the Taipei session showed
> there was significant support for this approach.
>
> Thomas
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc


From adalela@cisco.com  Tue Jan 10 02:26:12 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 36B6521F86A9 for <dc@ietfa.amsl.com>; Tue, 10 Jan 2012 02:26:12 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.417
X-Spam-Level: 
X-Spam-Status: No, score=-2.417 tagged_above=-999 required=5 tests=[AWL=0.181,  BAYES_00=-2.599, HTML_MESSAGE=0.001]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 6Uo5oXNJUHzx for <dc@ietfa.amsl.com>; Tue, 10 Jan 2012 02:26:10 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id 3981421F8507 for <dc@ietf.org>; Tue, 10 Jan 2012 02:26:07 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=14276; q=dns/txt; s=iport; t=1326191168; x=1327400768; h=mime-version:subject:date:message-id:in-reply-to: references:from:to:cc; bh=W97ao7FEbPaBBakIz61cVcZfRVtvMxtpHPMjn93A8VM=; b=Z077VrT+qedIvhOiYZYQlsg/dF7chKxPxa7QjABDAHEFJ7NdOTsZslmL NYzNhX7sZhJ7En8SpG3gnyLEAhFWR/ODGPzkH+QuolOMtLOEhF3PzaTmv GGsPjsp2w3YPuVVlipQ01rBVBQNNZJfiWoMTb6/a84bWl5mTTesazSJO5 Y=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AqAEAOURDE9Io8UY/2dsb2JhbABDgk6rDoFyAQEBBBIBCREDSRACAQgOAwQBAQsGFwEGAUUJCAEBBAsICBMHn18BnkeLLmMEiDifFA
X-IronPort-AV: E=Sophos;i="4.71,486,1320624000"; d="scan'208,217";a="3116266"
Received: from vla196-nat.cisco.com (HELO bgl-core-2.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 10 Jan 2012 10:26:06 +0000
Received: from xbh-bgl-411.cisco.com (xbh-bgl-411.cisco.com [72.163.129.201]) by bgl-core-2.cisco.com (8.14.3/8.14.3) with ESMTP id q0AAQ61l027309; Tue, 10 Jan 2012 10:26:06 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-411.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Tue, 10 Jan 2012 15:56:05 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: multipart/alternative; boundary="----_=_NextPart_001_01CCCF82.42D31811"
Date: Tue, 10 Jan 2012 15:56:04 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102BE69B6@XMB-BGL-416.cisco.com>
In-Reply-To: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] Comment of draft-dalela-dc-requirements
Thread-Index: AczO651nqqstRz0cQqmqZ/EZVCi6pwAlGyYQ
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Lizhong Jin" <lizho.jin@gmail.com>
X-OriginalArrivalTime: 10 Jan 2012 10:26:05.0930 (UTC) FILETIME=[42BD44A0:01CCCF82]
Cc: Lizhong Jin <lizhong.jin@zte.com.cn>, dc@ietf.org
Subject: Re: [dc] Comment of draft-dalela-dc-requirements
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 10 Jan 2012 10:26:12 -0000


Hi Lizhong,

Please see inline.

Thanks, Ashish

From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
Lizhong Jin
Sent: Monday, January 09, 2012 8:30 PM
To: Ashish Dalela (adalela)
Cc: Lizhong Jin; dc@ietf.org
Subject: [dc] Comment of draft-dalela-dc-requirements


Hi Ashish,

I have several comments on this requirements draft. Thanks.

Section 5.3, the multi-tenancy problem. From my side, the multi-tenancy
problem is not only the scalability of the segmentation-ID space; it is
also required to isolate performance and security among tenants. For
example, one tenant should not suffer denial-of-service attacks from
another tenant within the same datacenter. A concrete example: huge
broadcast traffic from one tenant should not influence the performance
and availability of other tenants.

[AD] Ok, will add this to the multi-tenancy section.

Section 5.3, "The use of L3 VRFs also poses similar challenges of
scaling". The challenge for VRFs is quite different from VLANs: it is
the scalability of the forwarding table. You also said: "With VRFs,
these entries will be present even if there is no traffic from a host
to other hosts in the VRF". I think this could be optimized so that the
FIB stores only active route entries, while the RIB stores all route
entries.

[AD] Others have expressed reservations about these "dynamically
learnt" entries. E.g., if an IP scanning attack is launched against
unknown IP addresses, it will load the control plane even though there
is nothing in the RIB. But, in any case, we are just trying to define
the problem, and optimizations as a solution are possible. I will
modify the VRF statement to reflect your concern.
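
The trade-off being debated here - a RIB holding all entries with a FIB
populated on demand, versus the control-plane load that traffic to
unknown addresses then causes - can be sketched as a toy model (all
names are illustrative, not from the draft):

```python
# Toy model of the RIB/FIB split discussed above: the FIB holds only
# actively used entries; every FIB miss "punts" to the control plane.
# An IP scan against unknown addresses drives punts up without ever
# growing the FIB, which is exactly the reservation voiced here.

class TenantRouter:
    def __init__(self, rib):
        self.rib = rib    # all route entries (control plane)
        self.fib = {}     # only active entries (forwarding plane)
        self.punts = 0    # control-plane lookups caused by FIB misses

    def forward(self, dst):
        if dst in self.fib:            # fast path: no control-plane work
            return self.fib[dst]
        self.punts += 1                # slow path: punt to control plane
        next_hop = self.rib.get(dst)   # unknown dst still costs a punt
        if next_hop is not None:
            self.fib[dst] = next_hop   # install the now-active entry
        return next_hop
```

Scanning many nonexistent destinations leaves the FIB small but loads
the control plane, which is the attack scenario described above.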

[AD] The same scaling concern also exists in the L2/VLAN case, where a
MAC address has to be learnt. These are host routes, whether at L2 or
at L3, or carried via some sort of encapsulation.

Section 5.5, last paragraph about mobility. It seems that this
paragraph is not so much about "network convergence" as about the
"host mobility impact on network resources".

[AD] This paragraph describes what happens when a whole server is
replicated (not just a single VM), as will happen in disaster recovery.
Note that in this case the host routes change, but the network routes
change as well (the routes to reach the virtual switch have changed).
Let me know if this needs to be reworded or clarified further.

Also, I find that section 5.1 is about the "host mobility impact on
L2/L3 forwarding", while 5.10 is about the "host mobility impact on
forwarding tables". I suggest reorganizing the three parts, which are
all related to the impact of host mobility.

[AD] Agree with your comment. Section 5.5 describes control plane scale
issues while 5.10 describes forwarding plane scale issues. Section 5.1
is overlapping. Would you suggest removing 5.1?

Regards

Lizhong



From adalela@cisco.com  Tue Jan 10 02:32:07 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 77ED621F85DA for <dc@ietfa.amsl.com>; Tue, 10 Jan 2012 02:32:07 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.422
X-Spam-Level: 
X-Spam-Status: No, score=-2.422 tagged_above=-999 required=5 tests=[AWL=0.177,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 2koJgTL7Ckal for <dc@ietfa.amsl.com>; Tue, 10 Jan 2012 02:32:06 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id 2DA8C21F85D3 for <dc@ietf.org>; Tue, 10 Jan 2012 02:32:05 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=2039; q=dns/txt; s=iport; t=1326191526; x=1327401126; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to; bh=FDoCsDyMFnpGM59QKAJ0CIM6rxGst2sHp+Q9k/+5T0g=; b=eqpQLQU3b9QCzByldZ8voAaTdsjKkCaaW8MzmmxuBohAYjtXkORH4s9q lVLOmDXgpPH5zYD5ks3zbeC/TouOxxUfbX+/pATh7akr63huGSYkhsnOy o2/SxlwbU3so/QbOe4cxhxt0NGEDNZbI7oLW62C9QRg1aIKVohdNZIP26 8=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: Ap8EAMgSDE9Io8UY/2dsb2JhbAA6Ca1cgXIBAQEEAQEBDwEdPhcEAgEIEQQBAQsGFwEGASYfCQgBAQQBCggIGodgl3oBnkeIV4JXYwSIBzGfFA
X-IronPort-AV: E=Sophos;i="4.71,486,1320624000";  d="scan'208";a="3116751"
Received: from vla196-nat.cisco.com (HELO bgl-core-3.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 10 Jan 2012 10:32:04 +0000
Received: from xbh-bgl-412.cisco.com (xbh-bgl-412.cisco.com [72.163.129.202]) by bgl-core-3.cisco.com (8.14.3/8.14.3) with ESMTP id q0AAW3iM022399; Tue, 10 Jan 2012 10:32:03 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-412.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Tue, 10 Jan 2012 16:02:04 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Date: Tue, 10 Jan 2012 16:02:01 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102BE69BA@XMB-BGL-416.cisco.com>
In-Reply-To: <D2DAE86C-9194-4C5F-9103-ADD8E4D51A2F@tid.es>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] DC Work Plan
Thread-Index: AczO2wrqPzKEvKWRT/6m+xxRtEmvmAAp30WQ
References: <13205C286662DE4387D9AF3AC30EF456D74EEEEF99@EMBX01-WF.jnpr.net><4A95BA014132FF49AE685FAB4B9F17F62A4E65BD@dfweml505-mbx> <D2DAE86C-9194-4C5F-9103-ADD8E4D51A2F@tid.es>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "DIEGO LOPEZ GARCIA" <diego@tid.es>, <dc@ietf.org>
X-OriginalArrivalTime: 10 Jan 2012 10:32:04.0153 (UTC) FILETIME=[1841C290:01CCCF83]
Subject: Re: [dc] DC Work Plan
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 10 Jan 2012 10:32:07 -0000

Hi Diego,

Could you please provide the text for these sections? I agree that a
single management/control plane controlling everything from the application
layer down to L2 hasn't existed so far. Right now that is managed through
"cloud APIs" which cross these layers, although information across layers
is not always integrated.

Thanks, Ashish

-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of DIEGO LOPEZ GARCIA
Sent: Monday, January 09, 2012 7:59 PM
To: dc@ietf.org
Subject: Re: [dc] DC Work Plan

Hi Ashish,

I'd like to see a couple of additions to the document on requirements.

First, section 4 should introduce what is discussed later on under 5.9
(cloud control) as the "deep control": control from the application layer
down to L2 properties is a characteristic rather specific to cloud
datacenter infrastructures.

Second, section 5 should add a discussion on AAA aspects: how identity is
mapped down the protocol stack from the requesting user's authentication
and authorization data, and, conversely, how accounting records are marked
and aggregated up the protocol stack to provide appropriate information
and, eventually, to match them against SLA requirements. I think this
would deserve a separate subsection, but it could be included under 5.9
as well.

Be goode,

--
"Esta vez no fallaremos, Doctor Infierno"

Dr Diego R. Lopez
Telefonica I+D

e-mail: diego@tid.es
Tel:      +34 913 129 041
Mobile: +34 682 051 091
-----------------------------------------


Este mensaje se dirige exclusivamente a su destinatario. Puede consultar
nuestra política de envío y recepción de correo electrónico en el enlace
situado más abajo.
This message is intended exclusively for its addressee. We only send and =
receive email on the basis of the terms set out at.
http://www.tid.es/ES/PAGINAS/disclaimer.aspx
_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From lizhong.jin@zte.com.cn  Tue Jan 10 04:35:39 2012
Return-Path: <lizhong.jin@zte.com.cn>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id AD66321F8645 for <dc@ietfa.amsl.com>; Tue, 10 Jan 2012 04:35:39 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -99.392
X-Spam-Level: 
X-Spam-Status: No, score=-99.392 tagged_above=-999 required=5 tests=[AWL=-1.757, BAYES_00=-2.599, HTML_MESSAGE=0.001, MIME_BASE64_TEXT=1.753, MIME_CHARSET_FARAWAY=2.45, RCVD_DOUBLE_IP_LOOSE=0.76, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id tgYBhzeWva6g for <dc@ietfa.amsl.com>; Tue, 10 Jan 2012 04:35:38 -0800 (PST)
Received: from mx5.zte.com.cn (mx6.zte.com.cn [95.130.199.165]) by ietfa.amsl.com (Postfix) with ESMTP id 41DF921F863E for <dc@ietf.org>; Tue, 10 Jan 2012 04:35:38 -0800 (PST)
Received: from [10.30.17.100] by mx5.zte.com.cn with surfront esmtp id 56690122734555; Tue, 10 Jan 2012 20:13:48 +0800 (CST)
Received: from [10.30.3.21] by [192.168.168.16] with StormMail ESMTP id 56968.2732741821; Tue, 10 Jan 2012 20:35:17 +0800 (CST)
Received: from notes_smtp.zte.com.cn ([10.30.1.239]) by mse02.zte.com.cn with ESMTP id q0ACZH2P035689; Tue, 10 Jan 2012 20:35:17 +0800 (GMT-8) (envelope-from lizhong.jin@zte.com.cn)
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102BE69B6@XMB-BGL-416.cisco.com>
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
MIME-Version: 1.0
X-Mailer: Lotus Notes Release 6.5.4 March 27, 2005
Message-ID: <OFDEB20D5A.B2687822-ON48257981.004380E4-48257981.00452707@zte.com.cn>
From: Lizhong Jin<lizhong.jin@zte.com.cn>
Date: Tue, 10 Jan 2012 20:35:12 +0800
X-MIMETrack: Serialize by Router on notes_smtp/zte_ltd(Release 8.5.1FP4|July 25, 2010) at 2012-01-10 20:35:21, Serialize complete at 2012-01-10 20:35:21
Content-Type: multipart/alternative; boundary="=_alternative 0045270248257981_="
X-MAIL: mse02.zte.com.cn q0ACZH2P035689
Cc: dc@ietf.org, Lizhong Jin <lizho.jin@gmail.com>
Subject: Re: [dc] Comment of draft-dalela-dc-requirements
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 10 Jan 2012 12:35:39 -0000

Hi Ashish,
Please see inline below. Thanks.

Lizhong

"Ashish Dalela (adalela)" <adalela@cisco.com> wrote on 2012/01/10 18:26:04:

> Hi Lizhong,
> 
> Please see inline.
> 
> Thanks, Ashish
> 
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of 
> Lizhong Jin
> Sent: Monday, January 09, 2012 8:30 PM
> To: Ashish Dalela (adalela)
> Cc: Lizhong Jin; dc@ietf.org
> Subject: [dc] Comment of draft-dalela-dc-requirements
> 
> Hi Ashish,
> I have several comments on this requirement. Thanks.
> 
> Section 5.3, Multi-tenancy problem. From my side, the multi-tenancy 
> problem is not only the scalability of the segmentation-ID problem; it 
> is also required to isolate performance and security among 
> tenants. For example, one tenant should not suffer denial-of-service 
> attacks by another tenant within the same datacenter. A detailed example 
> is that huge broadcast traffic from one tenant should not influence 
> the performance and availability of other tenants.
> 
> [AD] Ok, will add this to the multi-tenancy section.
> 
> Section 5.3, "The use of L3 VRFs also poses similar challenges of 
> scaling". The challenge for VRF is quite different from VLAN. The 
> challenge for VRF is the scalability of the forwarding table. And you 
> also said: "With VRFs, these entries will be present even if there is
> no traffic from a host to other hosts in the VRF". I think this 
> could be optimized so that the FIB would store only active route 
> entries, while the RIB would store all route entries.
> 
> [AD] Others have expressed reservations about these "dynamically 
> learnt" entries. E.g. if an IP scanning attack is launched to unknown
> IP addresses, it will load the control plane, although there is 
> nothing in the RIB. But, in any case, we are just trying to define 
> the problem, and optimizations as a solution are possible. I will 
> modify the VRF statement to reflect your concern.
> 
> [AD] The same scaling concern also exists in the L2/VLAN case where 
> a MAC address has to be learnt. They are host routes, either L2 or 
> L3, or done via some sort of encapsulation.
[Lizhong] yes, agree
> 
> Section 5.5, last paragraph about mobility. It seems that this 
> paragraph is not much related to "network convergence", but is 
> about "host mobility impact on the network resource". 
> 
> [AD] This paragraph is describing what happens when a whole server 
> is replicated (not just a single VM). This will happen in case of 
> disaster recovery. Note that in this case there is a change to host 
> routes, but also a change to network routes (the routes to reach the 
> virtual switch have changed). Let me know if this needs to be 
> reworded or clarified more.
[Lizhong] if you refer to the virtual switch and firewall relocation as 
network convergence, then the wording is OK.
> 
> While I find that section 5.1 is about the "host mobility impact on
> L2/3 forwarding", 5.10 is about the "host mobility impact on 
> forwarding tables". I suggest re-organizing the three parts, which are
> all related to the impact of host mobility.
> 
> [AD] Agree with your comment. Section 5.5 describes control plane 
> scale while 5.10 describes forwarding plane scale issues. Section
> 5.1 is overlapping. Would you suggest removing 5.1? 
[Lizhong] merging 5.1 into 5.10 is OK.
> 
> Regards
> Lizhong


--------------------------------------------------------
ZTE Information Security Notice: The information contained in this mail is solely property of the sender's organization. This mail communication is confidential. Recipients named above are obligated to maintain secrecy and are not permitted to disclose the contents of this communication to others.
This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the originator of the message. Any views expressed in this message are those of the individual sender.
This message has been scanned for viruses and Spam by ZTE Anti-Spam system.
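Lizhong's FIB/RIB optimization in the exchange above (the RIB stores all route entries, the FIB only active ones) could be sketched roughly as follows. This is a toy illustration under that assumption, not anyone's implementation; all class and method names are invented:

```python
# Toy sketch: the RIB holds every learnt route; the FIB is populated
# lazily, only when a prefix actually carries traffic. All names are
# invented for illustration.

class VrfTables:
    def __init__(self):
        self.rib = {}   # prefix -> next hop (all learnt routes)
        self.fib = {}   # prefix -> next hop (active routes only)

    def learn(self, prefix, next_hop):
        """Control plane installs a route into the RIB only."""
        self.rib[prefix] = next_hop

    def forward(self, prefix):
        """Data-plane lookup; the first packet pulls the route into the FIB."""
        if prefix not in self.fib:
            if prefix not in self.rib:
                # Unknown destination: nothing in the RIB. As [AD] notes,
                # scan traffic to such prefixes still loads the control plane.
                return None
            self.fib[prefix] = self.rib[prefix]
        return self.fib[prefix]

vrf = VrfTables()
vrf.learn("10.0.0.0/24", "peer-1")
vrf.learn("10.0.1.0/24", "peer-2")
vrf.forward("10.0.0.0/24")   # only this route becomes active in the FIB
```

The point of the sketch is the asymmetry: both routes sit in the RIB, but only the prefix that saw traffic occupies forwarding-table space.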


From rbonica@juniper.net  Tue Jan 10 08:08:30 2012
Return-Path: <rbonica@juniper.net>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 1837621F86E3 for <dc@ietfa.amsl.com>; Tue, 10 Jan 2012 08:08:30 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.537
X-Spam-Level: 
X-Spam-Status: No, score=-106.537 tagged_above=-999 required=5 tests=[AWL=0.062, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 4kycP9ysx7Qe for <dc@ietfa.amsl.com>; Tue, 10 Jan 2012 08:08:29 -0800 (PST)
Received: from exprod7og113.obsmtp.com (exprod7og113.obsmtp.com [64.18.2.179]) by ietfa.amsl.com (Postfix) with ESMTP id B355721F8670 for <dc@ietf.org>; Tue, 10 Jan 2012 08:08:28 -0800 (PST)
Received: from P-EMHUB03-HQ.jnpr.net ([66.129.224.36]) (using TLSv1) by exprod7ob113.postini.com ([64.18.6.12]) with SMTP ID DSNKTwxicMl7KzEvH/BEby5wDrwR/CdqExMw@postini.com; Tue, 10 Jan 2012 08:08:29 PST
Received: from p-emfe02-wf.jnpr.net (172.28.145.25) by P-EMHUB03-HQ.jnpr.net (172.24.192.37) with Microsoft SMTP Server (TLS) id 8.3.213.0; Tue, 10 Jan 2012 08:05:27 -0800
Received: from EMBX01-WF.jnpr.net ([fe80::1914:3299:33d9:e43b]) by p-emfe02-wf.jnpr.net ([fe80::c126:c633:d2dc:8090%11]) with mapi; Tue, 10 Jan 2012 11:05:16 -0500
From: Ronald Bonica <rbonica@juniper.net>
To: Spencer Dawkins <spencer@wonderhamster.org>, "dc@ietf.org" <dc@ietf.org>
Date: Tue, 10 Jan 2012 11:05:04 -0500
Thread-Topic: [dc] DC Interim Meeting: CANCELLED
Thread-Index: AczPKN/+nsN8+QQeTqiwXG9IZAStXgAiK+qg
Message-ID: <13205C286662DE4387D9AF3AC30EF456D74F08355B@EMBX01-WF.jnpr.net>
References: <13205C286662DE4387D9AF3AC30EF456D74EFCD832@EMBX01-WF.jnpr.net> <4F0B7C38.7030307@wonderhamster.org>
In-Reply-To: <4F0B7C38.7030307@wonderhamster.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Subject: Re: [dc] DC Interim Meeting: CANCELLED
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 10 Jan 2012 16:08:30 -0000

> 
> I'm pretty sure I know the answer, but just to make sure - your
> intention is that "continue on our work plan" happens on THIS mailing
> list, right?
> 

Absolutely!


From diego@tid.es  Wed Jan 11 02:30:51 2012
Return-Path: <diego@tid.es>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 2E95521F86D0 for <dc@ietfa.amsl.com>; Wed, 11 Jan 2012 02:30:51 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.328
X-Spam-Level: 
X-Spam-Status: No, score=-3.328 tagged_above=-999 required=5 tests=[AWL=-0.729, BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id KtXwphojRV8W for <dc@ietfa.amsl.com>; Wed, 11 Jan 2012 02:30:50 -0800 (PST)
Received: from tidos.tid.es (tidos.tid.es [195.235.93.44]) by ietfa.amsl.com (Postfix) with ESMTP id A068B21F85EF for <dc@ietf.org>; Wed, 11 Jan 2012 02:30:49 -0800 (PST)
Received: from sbrightmailg01.hi.inet (sbrightmailg01.hi.inet [10.95.64.104]) by tid.hi.inet (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LXM00D5LQJ7V6@tid.hi.inet> for dc@ietf.org; Wed, 11 Jan 2012 11:30:48 +0100 (MET)
Received: from tid (tid.hi.inet [10.95.64.10])	by sbrightmailg01.hi.inet (Symantec Messaging Gateway) with SMTP id E4.80.02893.8D46D0F4; Wed, 11 Jan 2012 11:30:48 +0100 (CET)
Received: from correo.tid.es (mailhost.hi.inet [10.95.64.100]) by tid.hi.inet (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTPS id <0LXM00D5XQJBV6@tid.hi.inet> for dc@ietf.org; Wed, 11 Jan 2012 11:30:48 +0100 (MET)
Received: from EXCLU2K7.hi.inet ([10.95.67.65]) by htcasmad1.hi.inet ([192.168.0.1]) with mapi; Wed, 11 Jan 2012 11:30:47 +0100
Date: Wed, 11 Jan 2012 11:30:46 +0100
From: DIEGO LOPEZ GARCIA <diego@tid.es>
In-reply-to: <618BE8B40039924EB9AED233D4A09C5102BE69BA@XMB-BGL-416.cisco.com>
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
Message-id: <091C600F-5A0C-428C-B734-D300115638CA@tid.es>
MIME-version: 1.0
Content-type: text/plain; charset=iso-8859-1
Content-language: en-US
Content-transfer-encoding: quoted-printable
Accept-Language: en-US
Thread-topic: [dc] DC Work Plan
Thread-index: AczQTBUIyPzuwqbZRpadQwwCZF1w/Q==
acceptlanguage: en-US
X-AuditID: 0a5f4068-b7f2d6d000000b4d-88-4f0d64d81ee2
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
X-Brightmail-Tracker: H4sIAAAAAAAAA+NgFprCKsWRmVeSWpSXmKPExsXCFe/ApXsjhdff4PAbXouW83dZHRg9liz5 yRTAGMVlk5Kak1mWWqRvl8CVca1hNkvBKuWKwy83sjQwrpTpYuTkkBAwkWiatZQRwhaTuHBv PVsXIxeHkMAGRok/l5sYIZyvjBLLZvVAOY2MEt/n/WUGaWERUJVYdf4ZK4jNJqAu0XL0GwuI LSwgJ/Hy3EegGg4OTgFfiS+dCSBhEQFDiRc7b7CDhJkF5CV6jgmAhHkFLCUeH2xnhbAFJX5M vgc2hVlAR6L3+zdmCFtcorn1JlRcW+LJuwtg9YxAR38/tYYJYry8xPWWzVC2nsT243uZIGpE Je60r4d6UkBiyZ7zzBC2qMTLx//A5ggJ/GSUuHLHaAKj+CwkZ8xCcsYsJGfMQnLGAkaWVYxi xUlFmekZJbmJmTnpBoZ6GZl6mXmpJZsYIVGUsYNx+U6VQ4wCHIxKPLwZ03j8hVgTy4orcw8x SnIwKYnyLkvg9RfiS8pPqcxILM6ILyrNSS0+xCjBwawkwssmB5TjTUmsrEotyodJyXBwKEnw 5gIjXkiwKDU9tSItMweYKmDSTBycIO08QO0JIDW8xQWJucWZ6RD5U4ySUuK8vCAJAZBERmke XO8rRnGgI4V500CyPMCkBtf1CmggE9DALet4QAaWJCKkpBoY5Tb8vBp1iFOue1Xo489NNcGq 1QwWftzfDrJEtOy4vXJr7veFR9u877hvlPot/PSoV3oJk+fMjMc2b6/KNphNmPM89rBx2roN jHNXb45r6d19MSfnb8rBltar69ZOWlTzVDMy6KS6lkhmsrjDnemi7TvVCwt+PXzNaP56+xeX 9LVsHPcV2rw/KbEUZyQaajEXFScCAOFDmNAnAwAA
References: <13205C286662DE4387D9AF3AC30EF456D74EEEEF99@EMBX01-WF.jnpr.net> <4A95BA014132FF49AE685FAB4B9F17F62A4E65BD@dfweml505-mbx> <D2DAE86C-9194-4C5F-9103-ADD8E4D51A2F@tid.es> <618BE8B40039924EB9AED233D4A09C5102BE69BA@XMB-BGL-416.cisco.com>
Cc: "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] DC Work Plan
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 11 Jan 2012 10:30:51 -0000

Hi Ashish,

On 10 Jan 2012, at 11:32 , Ashish Dalela (adalela) wrote:
> Could you please provide the text for these sections? I agree that a
> single management/control plane controlling everything from the
> application layer down to L2 hasn't existed so far. Right now that is
> managed through "cloud APIs" which cross these layers, although
> information across layers is not always integrated.

What about this?

8<--------------
4.4. Holistic Control

   Users in a multi-tenant model require a deep control of the whole
   infrastructure on which their applications run. In contrast with the
   traditional management models, in which separate entities took care
   of separate aspects of the infrastructure (the network layer, the
   operating system environment, the security mechanisms, the application
   environment, etc.), the cloud paradigm assumes it will interface with
   a single entity defining and monitoring all aspects of the
   infrastructure.

   In what relates to the network aspects, this implies the need for
   coherent management interfaces across the different layers, from
   the application down to the link and even physical layers. These
   interfaces must, on the one hand, translate between the top-level
   APIs users employ to configure the infrastructure and the actual
   protocols and APIs deployed, and, on the other, be able to present
   abstractions suitable to be applied at each point. An example of this
   could be the availability of virtual firewall rules for users to define,
   translated into ACLs or forwarding rules at routers, switches, or
   hypervisors themselves.

   Two key properties of these interfaces are information consistency
   across layers, so abstractions do not hide relevant details, and
   bi-directionality: it is about the network knowing about the
   applications as much as the applications knowing about the network.

8<--------------
(It is proposed that this goes between the current 5.9 and 5.10 subsections, so
the one entitled "The Forwarding Plane Scale Problem" becomes 5.11)
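The firewall-rule translation mentioned as an example in 4.4 above could be sketched as follows. This is purely illustrative and not from the draft: the rule fields and the ACL syntax are invented, loosely modeled on common device ACL formats:

```python
# Sketch of the 4.4 example: a user-facing virtual firewall rule is
# rendered as a device-level ACL line. Rule fields and output format
# are invented for illustration only.

def to_acl(rule, seq=10):
    """Render one virtual firewall rule as an ACL-style line."""
    return "{} {} {} {} {} eq {}".format(
        seq, rule["action"], rule["proto"],
        rule["src"], rule["dst"], rule["port"])

rule = {"action": "permit", "proto": "tcp",
        "src": "10.1.0.0/16", "dst": "10.2.0.5/32", "port": 443}
acl_line = to_acl(rule)
# acl_line -> "10 permit tcp 10.1.0.0/16 10.2.0.5/32 eq 443"
```

The same user-level rule could equally be rendered as a hypervisor forwarding rule; the interface's job, as the text says, is to keep the abstraction consistent while the realization differs per layer.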

5.10. The AAA Problem

   All the control mechanisms outlined above require appropriate methods
   for establishing the identity and rights of both the controlling and the
   controlled entities, as well as for collecting information about their
   effects and the performance of the infrastructure. In other words,
   Authentication, Authorization and Accounting (AAA) methods.

   User identities and rights are usually established via the cloud API
   authentication and authorization procedures, based on common Web
   Services protocols and data representations. Conversely, users refer to
   virtual resources when interacting with the infrastructure, defining the
   rules under which sensitive data can be used and processed. These
   identity and rights data must be appropriately propagated down to the
   network infrastructure, maintaining their semantics.

   Some solutions for mapping the refined user authentication and
   authorization procedures available at the application layer into network
   infrastructure protocols are being discussed, though their application
   to the concrete aspects of the cloud APIs and the protocols being
   discussed here obviously remains open. Furthermore, the mechanisms for
   mapping virtual resource identities to their actual realizations
   constitute an open challenge.

   Accounting requires working in the opposite direction: actual accounting
   happens at the real infrastructure components and needs to be marked
   and aggregated up the virtualization layer stack to provide appropriate
   information, eventually suitable to be matched against SLA requirements.
   In this process, resource identity mappings play a key role in what is
   to be aggregated and how, and user identity mappings define where these
   aggregates must be collected.

8<--------------
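As a rough illustration of the accounting direction described in the 5.10 text above, a minimal sketch follows. All names, record formats, and the SLA figure are invented, not from the draft; the point is only the upward aggregation through the resource identity mapping:

```python
# Sketch of 5.10's accounting direction: per-component records are
# mapped to the virtual resource they realize, aggregated up the
# virtualization stack, and matched against an SLA threshold.
# All identifiers and numbers are invented for illustration.

from collections import defaultdict

# Resource identity mapping: physical component -> virtual resource.
mapping = {"hv-3/vm-12": "tenant-a/web-1",
           "hv-7/vm-40": "tenant-a/web-1"}

# Accounting records produced at the real infrastructure components.
records = [("hv-3/vm-12", 4000),   # (component, bytes in interval)
           ("hv-7/vm-40", 2500)]

# Aggregate upward: two physical VMs realize one virtual resource.
usage = defaultdict(int)
for component, nbytes in records:
    usage[mapping[component]] += nbytes

# Match the aggregate against a per-resource SLA limit.
sla_limit = {"tenant-a/web-1": 5000}
violations = [res for res, used in usage.items()
              if used > sla_limit.get(res, float("inf"))]
# usage["tenant-a/web-1"] is the sum across both components;
# it exceeds the limit, so the resource appears in `violations`.
```

Note how the mapping decides *what* gets aggregated (both VMs fold into one virtual resource) and the user/resource identity decides *where* the aggregate is matched, exactly the two roles the closing paragraph of 5.10 assigns to the identity mappings.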

Be goode,

--
"Esta vez no fallaremos, Doctor Infierno"

Dr Diego R. Lopez
Telefonica I+D

e-mail: diego@tid.es
Tel:      +34 913 129 041
Mobile: +34 682 051 091
-----------------------------------------



From diego@tid.es  Wed Jan 11 02:42:10 2012
Return-Path: <diego@tid.es>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 9839E21F8783 for <dc@ietfa.amsl.com>; Wed, 11 Jan 2012 02:42:10 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -5.262
X-Spam-Level: 
X-Spam-Status: No, score=-5.262 tagged_above=-999 required=5 tests=[AWL=1.337,  BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id UI6P857RUTrN for <dc@ietfa.amsl.com>; Wed, 11 Jan 2012 02:42:10 -0800 (PST)
Received: from correo-bck.tid.es (correo-bck.tid.es [195.235.93.200]) by ietfa.amsl.com (Postfix) with ESMTP id 919EA21F86F4 for <dc@ietf.org>; Wed, 11 Jan 2012 02:42:09 -0800 (PST)
Received: from sbrightmailg02.hi.inet (Sbrightmailg02.hi.inet [10.95.78.105]) by tid.hi.inet (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LXM0001OR2816@tid.hi.inet> for dc@ietf.org; Wed, 11 Jan 2012 11:42:08 +0100 (MET)
Received: from vanvan (vanvan.hi.inet [10.95.78.49])	by sbrightmailg02.hi.inet (Symantec Messaging Gateway) with SMTP id 32.E2.02643.0876D0F4; Wed, 11 Jan 2012 11:42:08 +0100 (CET)
Received: from correo.tid.es (mailhost.hi.inet [10.95.64.100]) by tid.hi.inet (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTPS id <0LXM0001LR2816@tid.hi.inet> for dc@ietf.org; Wed, 11 Jan 2012 11:42:08 +0100 (MET)
Received: from EXCLU2K7.hi.inet ([10.95.67.65]) by htcasmad1.hi.inet ([192.168.0.1]) with mapi; Wed, 11 Jan 2012 11:42:08 +0100
Date: Wed, 11 Jan 2012 11:42:07 +0100
From: DIEGO LOPEZ GARCIA <diego@tid.es>
In-reply-to: <091C600F-5A0C-428C-B734-D300115638CA@tid.es>
To: "dc@ietf.org" <dc@ietf.org>
Message-id: <DAF99388-DC00-43CB-B45E-9A5EE92B29BA@tid.es>
MIME-version: 1.0
Content-type: text/plain; charset=iso-8859-1
Content-language: en-US
Content-transfer-encoding: quoted-printable
Accept-Language: en-US
Thread-topic: [dc] DC Work Plan
Thread-index: AczQTaq/tQO9n3MlRruB8p9tlLvugg==
acceptlanguage: en-US
X-AuditID: 0a5f4e69-b7f6b6d000000a53-3f-4f0d6780a0de
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
X-Brightmail-Tracker: H4sIAAAAAAAAA+NgFupileLIzCtJLcpLzFFi42Lhivcz1G1I5/U3WHlR0aLl/F1WB0aPJUt+ MgUwRnHZpKTmZJalFunbJXBl9F2+wVhwmbtiXdMp1gbGY5xdjJwcEgImEncvPGSBsMUkLtxb zwZiCwlsY5Q4PFmii5ELyP7KKDFv1kpmCKeRUeJK4xxGkCoWAVWJhuV/mUFsNgF1iZaj38Am CQvISbw89xEszilgJfH28yawuIiAvMTTLbvAbF4BS4npkz8wQtiCEj8m3wOLMwvoSPR+/8YM YYtLNLfehIprSzx5d4EVxGYEuvT7qTVMMDOvt2yGsvUklh68zQJRIypxp309I8RnAhJL9pxn hrBFJV4+/scK8cxKJok9LROZJjCKzUJyxywkd8xCcscsJHcsYGRZxShWnFSUmZ5RkpuYmZNu YKSXkamXmZdasokREjGZOxiX71Q5xCjAwajEw5sxjcdfiDWxrLgy9xCjJAeTkihvWCqvvxBf Un5KZUZicUZ8UWlOavEhRgkOZiUR3pkpQDnelMTKqtSifJiUDAeHkgRvSxpQSrAoNT21Ii0z B5gWYNJMHJwg7TxA7TtAaniLCxJzizPTIfKnGCWlxHmXgyQEQBIZpXlwva8YxYGOFOZtBMny ABMYXNcroIFMQAO3rOMBGViSiJCSamCMdHSYfZnpHXvDC/ZZIst/rvC0sJXkX/N33k7mTXZB b7u5H1rFfNUOj3LUPm33aqmTxUTX9V9vVkmWTp+xROTY4gaOyQcZZKR+zf78OvCLZ3DnB7P/ emvXVMSU+W4WOMK57VXwh0csMg7rez+qiEjEnvrnPvPIoZxElQDjrV5SHKuMZNKOXNBVYinO SDTUYi4qTgQAFChiSB0DAAA=
References: <13205C286662DE4387D9AF3AC30EF456D74EEEEF99@EMBX01-WF.jnpr.net> <4A95BA014132FF49AE685FAB4B9F17F62A4E65BD@dfweml505-mbx> <D2DAE86C-9194-4C5F-9103-ADD8E4D51A2F@tid.es> <618BE8B40039924EB9AED233D4A09C5102BE69BA@XMB-BGL-416.cisco.com> <091C600F-5A0C-428C-B734-D300115638CA@tid.es>
Subject: Re: [dc] DC Work Plan
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 11 Jan 2012 10:42:10 -0000

Hi again,

> On 10 Jan 2012, at 11:32 , Ashish Dalela (adalela) wrote:
>> Could you please provide the text for these sections? I agree that a
>> single mgmt/control plane that controlled from application to L2 hasn't
>> existed so far. Right now that is managed through "cloud APIs" which
>> cross these layers, although information across layers is not always
>> integrated.


Regarding this common management plane, I'd like to draw your attention
to the ideas around Cross-Stratum Optimization, a work area proposed as a
WG to the IETF/IRTF. There was a BoF in Taipei and the issue is ongoing.
You have a couple of introductory drafts on this here:

http://tools.ietf.org/html/draft-contreras-cso-functional-architecture-00
http://tools.ietf.org/html/draft-tovar-cso-path-computation-requirements-00

And there is a workshop on this matter arranged for this summer:

http://cccso.net/

Be goode,

--
"Esta vez no fallaremos, Doctor Infierno"

Dr Diego R. Lopez
Telefonica I+D

e-mail: diego@tid.es
Tel:      +34 913 129 041
Mobile: +34 682 051 091
-----------------------------------------



From lars@netapp.com  Wed Jan 11 06:36:28 2012
Return-Path: <lars@netapp.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id BE96821F8754 for <dc@ietfa.amsl.com>; Wed, 11 Jan 2012 06:36:28 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -10.499
X-Spam-Level: 
X-Spam-Status: No, score=-10.499 tagged_above=-999 required=5 tests=[AWL=0.100, BAYES_00=-2.599, RCVD_IN_DNSWL_HI=-8]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 2-wLk37GgKUJ for <dc@ietfa.amsl.com>; Wed, 11 Jan 2012 06:36:28 -0800 (PST)
Received: from mx2.netapp.com (mx2.netapp.com [216.240.18.37]) by ietfa.amsl.com (Postfix) with ESMTP id 0214321F8705 for <dc@ietf.org>; Wed, 11 Jan 2012 06:36:27 -0800 (PST)
X-IronPort-AV: E=Sophos;i="4.71,493,1320652800";  d="p7s'?scan'208";a="615620264"
Received: from smtp1.corp.netapp.com ([10.57.156.124]) by mx2-out.netapp.com with ESMTP; 11 Jan 2012 06:36:27 -0800
Received: from svlrsexc2-prd.hq.netapp.com (svlrsexc2-prd.hq.netapp.com [10.57.115.31]) by smtp1.corp.netapp.com (8.13.1/8.13.1/NTAP-1.6) with ESMTP id q0BEaP7s013117; Wed, 11 Jan 2012 06:36:26 -0800 (PST)
Received: from VMWEXCEHT02-PRD.hq.netapp.com ([10.106.76.240]) by svlrsexc2-prd.hq.netapp.com with Microsoft SMTPSVC(6.0.3790.4675);  Wed, 11 Jan 2012 06:36:20 -0800
Received: from SACEXCMBX04-PRD.hq.netapp.com ([169.254.6.193]) by vmwexceht02-prd.hq.netapp.com ([10.106.76.240]) with mapi id 14.01.0355.002; Wed, 11 Jan 2012 06:36:19 -0800
From: "Eggert, Lars" <lars@netapp.com>
To: DIEGO LOPEZ GARCIA <diego@tid.es>
Thread-Topic: [dc] DC Work Plan
Thread-Index: AczMpBVlzok2urS5RMGeXcLx/H8AYQAEo8GwAJnc0AAAKgM2gAAyP24AAABleoAACC3DAA==
Date: Wed, 11 Jan 2012 14:36:18 +0000
Message-ID: <A8300589-7A02-47F6-B325-3C9E62A3409C@netapp.com>
References: <13205C286662DE4387D9AF3AC30EF456D74EEEEF99@EMBX01-WF.jnpr.net> <4A95BA014132FF49AE685FAB4B9F17F62A4E65BD@dfweml505-mbx> <D2DAE86C-9194-4C5F-9103-ADD8E4D51A2F@tid.es> <618BE8B40039924EB9AED233D4A09C5102BE69BA@XMB-BGL-416.cisco.com> <091C600F-5A0C-428C-B734-D300115638CA@tid.es> <DAF99388-DC00-43CB-B45E-9A5EE92B29BA@tid.es>
In-Reply-To: <DAF99388-DC00-43CB-B45E-9A5EE92B29BA@tid.es>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.106.53.51]
Content-Type: multipart/signed; boundary="Apple-Mail=_8D8B5953-5D4E-408D-B630-498ABA2A0DB2"; protocol="application/pkcs7-signature"; micalg=sha1
MIME-Version: 1.0
X-OriginalArrivalTime: 11 Jan 2012 14:36:20.0565 (UTC) FILETIME=[62928050:01CCD06E]
Cc: "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] DC Work Plan
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 11 Jan 2012 14:36:28 -0000

--Apple-Mail=_8D8B5953-5D4E-408D-B630-498ABA2A0DB2
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=iso-8859-1

On Jan 11, 2012, at 11:42, DIEGO LOPEZ GARCIA wrote:
> In what relates to this common management plane I'd like to draw your
> attention to the ideas around Cross-Stratum Optimization, a work area
> proposed as WG to IETF/IRTF. There was a BoF in Taipei and the issue is
> ongoing.

There was *NOT* an IETF BOF in Taipei. There was a side meeting with the
intent to propose an IRTF research group. (Which I decided not to
charter.)

Lars

--Apple-Mail=_8D8B5953-5D4E-408D-B630-498ABA2A0DB2--

From vumip1@gmail.com  Fri Jan 13 16:07:37 2012
Return-Path: <vumip1@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id D761621F84DA for <dc@ietfa.amsl.com>; Fri, 13 Jan 2012 16:07:37 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.425
X-Spam-Level: 
X-Spam-Status: No, score=-3.425 tagged_above=-999 required=5 tests=[AWL=0.173,  BAYES_00=-2.599, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id XqrLuEbZt1sc for <dc@ietfa.amsl.com>; Fri, 13 Jan 2012 16:07:37 -0800 (PST)
Received: from mail-iy0-f172.google.com (mail-iy0-f172.google.com [209.85.210.172]) by ietfa.amsl.com (Postfix) with ESMTP id EE71C21F84C3 for <dc@ietf.org>; Fri, 13 Jan 2012 16:07:36 -0800 (PST)
Received: by iaae16 with SMTP id e16so5364968iaa.31 for <dc@ietf.org>; Fri, 13 Jan 2012 16:07:33 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=hOeXAP12KkEOlB+izLL1aJnGFYOuUR8y7jp1VcQUUxo=; b=nH2cn923d7LTLCvJF0pJZfsfleERic3i9ir3whqi8wWdrmGP+pJjSlt4nrcIeIjOkV M/1FtYoJ/2SqiFAnmMHQ+BkLG6fZZlmDzKuhFtXcwSKgPgK57f4EtmQDqV0V97BIrPNt 4WWx/Znq3qdRrzSYosFKpbMmUdZywg5CU1U2I=
MIME-Version: 1.0
Received: by 10.50.45.195 with SMTP id p3mr221021igm.2.1326499653861; Fri, 13 Jan 2012 16:07:33 -0800 (PST)
Received: by 10.50.77.197 with HTTP; Fri, 13 Jan 2012 16:07:33 -0800 (PST)
In-Reply-To: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com>
Date: Fri, 13 Jan 2012 19:07:33 -0500
Message-ID: <CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com>
From: Bhumip Khasnabish <vumip1@gmail.com>
To: Lizhong Jin <lizho.jin@gmail.com>
Content-Type: multipart/alternative; boundary=14dae934061ba9c1a604b671c4c1
Cc: Lizhong Jin <lizhong.jin@zte.com.cn>, adalela@cisco.com, dc@ietf.org
Subject: Re: [dc] Comment of draft-dalela-dc-requirements
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 14 Jan 2012 00:07:38 -0000

--14dae934061ba9c1a604b671c4c1
Content-Type: text/plain; charset=ISO-8859-1

Hello Ashish,
Hello Mr. Lizhong,

Please note that we are also working on a draft related to virtual machine
(and virtual network element) mobility and interconnection
(http://www.ietf.org/id/draft-khasnabish-vmmi-problems-00.txt).
Would appreciate comments and suggestions. Thanks.
Best.
Bhumip



On Mon, Jan 9, 2012 at 10:00 AM, Lizhong Jin <lizho.jin@gmail.com> wrote:

> Hi Ashish,
> I have several comments to this requirement. Thanks.
>
> Section 5.3, Multi-tenancy problem. From my side, the multi-tenancy
> problem is not only the scalability of the segmentation-ID; it is also
> required to isolate performance and security among tenants. For
> example, one tenant should not suffer denial-of-service attacks by another
> tenant within the same datacenter. A detailed example is, the huge broadcast
> traffic from one tenant should not influence the performance and
> availability of other tenants.
>
> Section 5.3, "The use of L3 VRFs also poses similar challenges of
> scaling". The challenge for VRF is quite different from VLAN. The challenge
> for VRF is the scalability of the forwarding table. And you also said: "With
> VRFs, these entries will be present even if there is no traffic from a host
> to other hosts in the VRF". I think this could be optimized so that the FIB
> would store only active route entries, while the RIB would store all route
> entries.
>
> Section 5.5, last paragraph about mobility. It seems that this paragraph
> is not much related to "network convergence", but is about "host mobility
> impact on network resources". I also find that section 5.1 is about
> the "host mobility impact on L2/3 forwarding", and 5.10 is about the "host
> mobility impact on forwarding tables". I suggest re-organizing the three
> parts, which are all related to the impact of host mobility.
>
> Regards
> Lizhong
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc
>
>
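Lizhong's RIB/FIB point above (install forwarding entries only for active routes, while the RIB retains everything) can be sketched as a toy model. The class and interface names are invented for illustration and do not correspond to any real router implementation:

```python
# Toy model of the suggested RIB/FIB optimization for a VRF: the RIB
# (control plane) holds every learned route, while the FIB (forwarding
# plane) installs an entry only once traffic actually uses the route.
# Purely illustrative; no real router behaves exactly like this.

class Vrf:
    def __init__(self):
        self.rib = {}   # all learned routes: prefix -> next hop
        self.fib = {}   # only routes with active traffic

    def learn(self, prefix, next_hop):
        self.rib[prefix] = next_hop          # RIB stores all entries

    def forward(self, prefix):
        if prefix not in self.fib:
            # First packet toward this prefix: pull the route into
            # the FIB on demand.
            self.fib[prefix] = self.rib[prefix]
        return self.fib[prefix]

vrf = Vrf()
for i in range(1000):                        # 1000 hosts in the VRF
    vrf.learn(f"10.0.{i // 256}.{i % 256}/32", "ge-0/0/1")

vrf.forward("10.0.0.5/32")                   # only one host is talking
print(len(vrf.rib), len(vrf.fib))            # 1000 1
```

The forwarding table stays proportional to active conversations instead of to the total number of hosts in the VRF, which is the scaling relief being suggested.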

--14dae934061ba9c1a604b671c4c1--

From narten@us.ibm.com  Tue Jan 17 07:43:41 2012
Return-Path: <narten@us.ibm.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 34D9321F86DF for <dc@ietfa.amsl.com>; Tue, 17 Jan 2012 07:43:41 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -106.459
X-Spam-Level: 
X-Spam-Status: No, score=-106.459 tagged_above=-999 required=5 tests=[AWL=0.140, BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id bKBLoQlROnF9 for <dc@ietfa.amsl.com>; Tue, 17 Jan 2012 07:43:40 -0800 (PST)
Received: from e31.co.us.ibm.com (e31.co.us.ibm.com [32.97.110.149]) by ietfa.amsl.com (Postfix) with ESMTP id A157E21F8572 for <dc@ietf.org>; Tue, 17 Jan 2012 07:43:40 -0800 (PST)
Received: from /spool/local by e31.co.us.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for <dc@ietf.org> from <narten@us.ibm.com>; Tue, 17 Jan 2012 08:43:39 -0700
Received: from d03dlp01.boulder.ibm.com (9.17.202.177) by e31.co.us.ibm.com (192.168.1.131) with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted;  Tue, 17 Jan 2012 08:43:37 -0700
Received: from d01relay07.pok.ibm.com (d01relay07.pok.ibm.com [9.56.227.147]) by d03dlp01.boulder.ibm.com (Postfix) with ESMTP id 644771FF043C for <dc@ietf.org>; Tue, 17 Jan 2012 08:40:26 -0700 (MST)
Received: from d01av02.pok.ibm.com (d01av02.pok.ibm.com [9.56.224.216]) by d01relay07.pok.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id q0HFeQrx3301396 for <dc@ietf.org>; Tue, 17 Jan 2012 10:40:26 -0500
Received: from d01av02.pok.ibm.com (loopback [127.0.0.1]) by d01av02.pok.ibm.com (8.14.4/8.13.1/NCO v10.0 AVout) with ESMTP id q0HFeQnM004730 for <dc@ietf.org>; Tue, 17 Jan 2012 13:40:26 -0200
Received: from cichlid.raleigh.ibm.com (sig-9-65-205-191.mts.ibm.com [9.65.205.191]) by d01av02.pok.ibm.com (8.14.4/8.13.1/NCO v10.0 AVin) with ESMTP id q0HFePj6004703 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Tue, 17 Jan 2012 13:40:26 -0200
Received: from cichlid.raleigh.ibm.com (localhost [127.0.0.1]) by cichlid.raleigh.ibm.com (8.14.5/8.12.5) with ESMTP id q0HFeNan008591; Tue, 17 Jan 2012 10:40:24 -0500
Message-Id: <201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com>
To: Bhumip Khasnabish <vumip1@gmail.com>
In-reply-to: <CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com> <CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com>
Comments: In-reply-to Bhumip Khasnabish <vumip1@gmail.com> message dated "Fri, 13 Jan 2012 19:07:33 -0500."
Date: Tue, 17 Jan 2012 10:40:23 -0500
From: Thomas Narten <narten@us.ibm.com>
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 12011715-7282-0000-0000-000005B0A69C
Cc: dc@ietf.org
Subject: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 17 Jan 2012 15:43:41 -0000

Bhumip,

I skimmed this document and am having trouble figuring out what it is
intended to do.

The draft name itself has "problem" in it, but there is no single (or
small set of) succinct problems listed. It's all very high level and
hand wavy. I need help making the connection to an IETF action that
could come out of this document.

For example, it talks about VM migration.

Is VM Migration a "problem" today? There are proprietary approaches
that the market seems to like OK.

What is wrong with the current approaches? What is "broken" that needs
fixing? Why should the IETF get involved in this space? What value
would the IETF bring?

Do you want to be able to do VM migration from one vendor's hypervisor
to another vendor's?  If so, please just say so. Then we can see
whether others here think that is an area the IETF (or some other SDO)
should get involved in.

Thomas


From vumip1@gmail.com  Tue Jan 17 22:33:51 2012
Return-Path: <vumip1@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 58DEE21F8691 for <dc@ietfa.amsl.com>; Tue, 17 Jan 2012 22:33:51 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.44
X-Spam-Level: 
X-Spam-Status: No, score=-3.44 tagged_above=-999 required=5 tests=[AWL=0.158,  BAYES_00=-2.599, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id REZPPDoyURzN for <dc@ietfa.amsl.com>; Tue, 17 Jan 2012 22:33:50 -0800 (PST)
Received: from mail-iy0-f172.google.com (mail-iy0-f172.google.com [209.85.210.172]) by ietfa.amsl.com (Postfix) with ESMTP id BA64D21F8693 for <dc@ietf.org>; Tue, 17 Jan 2012 22:33:50 -0800 (PST)
Received: by iaae16 with SMTP id e16so12714534iaa.31 for <dc@ietf.org>; Tue, 17 Jan 2012 22:33:50 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=EJcU/EzZ/N7uM+9hW/32cbjYtsUzkVcH4TqboXrUg5s=; b=Lp0K2wvfhNLoS/OtKOzSgJGy5vyLwnW98c7Wja1Pp9Uwc20rfWmgIFXNlpLmuc0l2v FO/SfSAPHkiJBzROfYb3UHyZW/x3nXj09erw6bsxvzoZUytlxvPGKiHa8Wth/UuiInP1 Zg0tzZPJam1zBff3sSpHU7Z0ldi/GF2Cm5UA8=
MIME-Version: 1.0
Received: by 10.50.195.227 with SMTP id ih3mr21191117igc.19.1326868430288; Tue, 17 Jan 2012 22:33:50 -0800 (PST)
Received: by 10.50.209.98 with HTTP; Tue, 17 Jan 2012 22:33:50 -0800 (PST)
In-Reply-To: <201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com> <CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com> <201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com>
Date: Wed, 18 Jan 2012 01:33:50 -0500
Message-ID: <CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com>
From: Bhumip Khasnabish <vumip1@gmail.com>
To: Thomas Narten <narten@us.ibm.com>
Content-Type: multipart/alternative; boundary=14dae934127373753b04b6c7a154
Cc: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 18 Jan 2012 06:33:51 -0000

--14dae934127373753b04b6c7a154
Content-Type: text/plain; charset=ISO-8859-1

Tom,

Thanks.

Yes, seamless migration of VMs and VNEs can be problematic in both intra-
and inter-data-center environments, especially in the multi-hypervisor case.

It may be very helpful to bring one or more of these proprietary VM
migration approaches to the IETF for consideration for standardization,
if that is appropriate.
Sure, we'll update the draft to articulate these requirements.

Best.

Bhumip


On Tue, Jan 17, 2012 at 10:40 AM, Thomas Narten <narten@us.ibm.com> wrote:

> Bhumip,
>
> I skimmed this document and am having trouble figuring out what it is
> intended to do.
>
> The draft name itself has "problem" in it, but there is no single (or
> small set of) succinct problems listed. It's all very high level and
> hand wavy. I need help making the connection to an IETF action that
> could come out of this document.
>
> For example, it talks about VM migration.
>
> Is VM Migration a "problem" today? There are proprietary approaches
> that the market seems to like OK.
>
> What is wrong with the current approaches? What is "broken" that needs
> fixing? Why should the IETF get involved in this space? What value
> would the IETF bring?
>
> Do you want to be able to do VM migration from one vendor's hypervisor
> to another vendor's?  If so, please just say so. Then we can see
> whether others here think that is an area the IETF (or some other SDO)
> should get involved in.
>
> Thomas
>
>

--14dae934127373753b04b6c7a154--

From zhouzhipeng@huawei.com  Tue Jan 17 23:39:19 2012
Return-Path: <zhouzhipeng@huawei.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 79A0D21F8596 for <dc@ietfa.amsl.com>; Tue, 17 Jan 2012 23:39:19 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -4.123
X-Spam-Level: 
X-Spam-Status: No, score=-4.123 tagged_above=-999 required=5 tests=[AWL=-1.091, BAYES_05=-1.11, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_MED=-4, SUBJ_ALL_CAPS=2.077]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 4SXubAuBFye9 for <dc@ietfa.amsl.com>; Tue, 17 Jan 2012 23:39:18 -0800 (PST)
Received: from szxga04-in.huawei.com (szxga04-in.huawei.com [119.145.14.67]) by ietfa.amsl.com (Postfix) with ESMTP id 759B721F858B for <dc@ietf.org>; Tue, 17 Jan 2012 23:39:04 -0800 (PST)
Received: from huawei.com (szxga04-in [172.24.2.12]) by szxga04-in.huawei.com (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LXZ006F8H5ZW3@szxga04-in.huawei.com> for dc@ietf.org; Wed, 18 Jan 2012 15:37:12 +0800 (CST)
Received: from szxrg02-dlp.huawei.com ([172.24.2.119]) by szxga04-in.huawei.com (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LXZ0052KH5ZN4@szxga04-in.huawei.com> for dc@ietf.org; Wed, 18 Jan 2012 15:37:11 +0800 (CST)
Received: from szxeml214-edg.china.huawei.com ([172.24.2.119]) by szxrg02-dlp.huawei.com (MOS 4.1.9-GA)	with ESMTP id AGJ93903; Wed, 18 Jan 2012 15:37:09 +0800
Received: from SZXEML418-HUB.china.huawei.com (10.82.67.157) by szxeml214-edg.china.huawei.com (172.24.2.29) with Microsoft SMTP Server (TLS) id 14.1.323.3; Wed, 18 Jan 2012 15:37:01 +0800
Received: from SZXEML519-MBS.china.huawei.com ([169.254.7.221]) by szxeml418-hub.china.huawei.com ([10.82.67.157]) with mapi id 14.01.0323.003; Wed, 18 Jan 2012 15:36:58 +0800
Date: Wed, 18 Jan 2012 07:36:56 +0000
From: ZhouZhipeng <zhouzhipeng@huawei.com>
X-Originating-IP: [10.138.73.38]
To: "dc@ietf.org" <dc@ietf.org>
Message-id: <260A32EFD5C9EB47AD504929D005DF62285ACC72@SZXEML519-MBS.china.huawei.com>
MIME-version: 1.0
Content-type: multipart/mixed; boundary="Boundary_(ID_kA3u8lsle6xzBQcNQN6vjA)"
Content-language: zh-CN
Accept-Language: zh-CN, en-US
Thread-topic: CDN-DC API
Thread-index: AczVs+4D1ij+lirdRd6IRbSTTKhhQg==
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
X-CFilter-Loop: Reflected
Subject: [dc] CDN-DC API
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 18 Jan 2012 07:39:19 -0000

--Boundary_(ID_kA3u8lsle6xzBQcNQN6vjA)
Content-type: multipart/alternative;
 boundary="Boundary_(ID_1GX0TD9Mo5GDig0pHL7+PQ)"


--Boundary_(ID_1GX0TD9Mo5GDig0pHL7+PQ)
Content-type: text/plain; charset=us-ascii
Content-transfer-encoding: 7BIT

Hi all,
I would like to propose some standards work combining CDN with Cloud.
Over the past 2-3 years I have done some CDN standardization work, and since last year it has become clear that it is time to work on the CDN-Cloud combination.
One task is to define a set of CDN APIs toward the Data Center, so that the CDN can be used for data transport among DCs.
Some colleagues suggested that I raise this idea on the dc mailing list, since many DC experts may have interest or questions.
I have attached my proposal to this mail and would very much like to hear your ideas.
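As a rough, purely illustrative sketch (all names below are hypothetical and not taken from the attached proposal), such an API could let a data center ask the CDN to move a data object to another data center and query the transfer status:

```python
# Hypothetical sketch of a CDN-DC API: a data center requests that the CDN
# transport an object to another DC. All names here are illustrative only.
from dataclasses import dataclass


@dataclass
class TransferRequest:
    object_id: str   # identifier of the data to move
    source_dc: str   # originating data center
    dest_dc: str     # destination data center


class CdnDcApi:
    """Toy in-memory stand-in for the CDN side of such an interface."""

    def __init__(self):
        self.transfers = []

    def request_transfer(self, req: TransferRequest) -> str:
        # A real CDN would select surrogates along the path; here we just
        # queue the request and hand back an opaque transfer handle.
        self.transfers.append(req)
        return f"transfer-{len(self.transfers)}"

    def status(self, handle: str) -> str:
        # A real API would report per-surrogate progress; this toy version
        # only distinguishes known handles from unknown ones.
        known = {f"transfer-{i + 1}" for i in range(len(self.transfers))}
        return "queued" if handle in known else "unknown"


api = CdnDcApi()
h = api.request_transfer(TransferRequest("vm-image-42", "dc-nanjing", "dc-beijing"))
print(h, api.status(h))  # prints: transfer-1 queued
```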

Thanks.
Zhipeng


-----------------------------------------------------
Huawei Carrier Software Business Unit.
No. 101, Software Avenue, Yuhuatai District, Nanjing, P.R. of China
Zipcode:210012
E-Mail: zhouzhipeng@huawei.com<mailto:zhouzp@huawei.com>
Phone:(+86) 25-56620690
Fax:(+86) 25-56624081
Mobile:(+86) 13404162849
-----------------------------------------------------


--Boundary_(ID_1GX0TD9Mo5GDig0pHL7+PQ)--

--Boundary_(ID_kA3u8lsle6xzBQcNQN6vjA)
Content-type:  application/vnd.openxmlformats-officedocument.wordprocessingml.document;
 name="CDN-DC API.docx"
Content-transfer-encoding: base64
Content-disposition: attachment; filename="CDN-DC API.docx"; size=66680;
 creation-date="Wed, 18 Jan 2012 07:32:56 GMT";
 modification-date="Wed, 18 Jan 2012 07:35:05 GMT"
Content-description: CDN-DC API.docx

R/NYazcz/eP8zh/wv+g0r8h73OfY4vseS6x55c5xl73f1lSttlRsulCJJ5Ve3cx4hF0Pq897Ov8A
TnNME5VQJHg57WuH9prl7OvK/qH0h3UOtsyHD9BgRc8/y/8AtOz/ADx6v/WV6otDkgRAk9To8r/x
myQlzWOEdZY4evw4zxRi/wD/0ruB/jPvpzslmfT9pwnXvND69rba6y52yuPbXkbWbP8ARP8A+Eeu
yc/oX1q6Y+llrMqh4BO0j1KnHcGP2u99Fzfds3s/8DXieXQ7DzsjDf8ASxrX1O76sc6v/vqsYGdl
YOQzKw7XU31mWvafwP5rmO/PY72Kl78o2Jjii9NL4VhyiOTl5exliBKMo/ISPlP9X/Adjr/1a6h0
K/bePUxnmKcpohrv5LufTt/4NZMrt+k/4wcXLxXYH1kpFjLAGOvY2WuaeXX0t+jt+nvx/wDtpB63
9RQ+s9Q+rljcvEcC70Gu3OEf9x7Jd6/53s3er/xqinhErliPEOsf04uhy3xHJiMcPPx9rIfTDP8A
+B83+H/k5vHLtfqtlVde6Nk/VrPeDcxu/p736kQPaGaf9pn/AMvf6FllX8zWuLc1zXFjwWuGhB0I
Pmj9Pzb+n5tObR/O0PD2gzBj6THR+Y9vseosc+CWvyn0yHeJb3O8v94wmMTw5YEZMGT9zND1Ql/3
MkeRj3Y19mPe3ZdS4ssboYc07XDRbP1Ibu+tGCPOw/dVaVsfXXplGfhUfWjpzSa72N+1CdQCGsqs
LW7m76/5i/8ASf6P/hFmfUGov+s2O7/RMscfmx1f/oxPGMwzxjuOKJie8WtPm48x8Mz5a4ZjFlhl
h/m80YSjOH+Mr6+tLfrNknjeyo/9Brf++rBqbdbYyqppsssIaxjRJLidrWtA/eXS/wCMZm36wg/v
0MP4vb/31WPqN0imqu/6x9QaRjYbXOx5HJaCbbmt/P8AS+hX/wAL/wAJUjLGZ55RH7xJPaKMHNx5
f4VgzSHERihCEOuTLXBCAT/WC7/m19XMfoGOW/bM1pfm2NOsGBb+77bXfq9b/wDQUvXEEkq11bqN
vU+pZGfbo695cG87Wj21VyA3+brDWKqAXENaJJ0AHJKjyz4pafKPTEf1Q2eR5Y4MPrN5shOXPP8A
ey5Pm/wYfJFZanQfq9ndcyhTjtLKWn9NkuHsYP8Av9v7lX/ov9Itvon1Etcxuf154wsJo3uqc7Y8
jt6zne3HZ+9/hv8Ailc6j9esDp+I3p31boDWVAsbc9sMaP36mH33Pd9P1L/z/p+tvT4YQBxZTwx6
R/Tm1uY+IzySOD4fH38vyyzf+BuX/vZP05/1Hp8dnRfqv0xlL7a8algJc95h9rwP0lm3+cutd+6z
+ouU6n/jHvuurr6ZSaKQ9pfbZBe5oLdzBX7mVfnfnW/9bXIZudmZ+Q7JzLXX3P5c49udrR9FjP5D
EOis3X10t1dY9rAPNx2p8+akajjHBHYfvMHLfAsMDLNzcvvOaVykZfzYl+keH9P/AKo//9N/8Z3Q
LcXqY6zSz9VzNrbnD828Dbq2Pa26pjHf8b6q41j17/m4WLn4tmHmVNux7m7bK3cEf99c13uY9v0F
5Z9Yv8XHVem2G7pQf1HDMna0D1ma+1jqx/SPb/hKW/8AWa1VzYTZkBdu78M+IxEY4skuGUdIk7Sj
/wB8801y0uj9c6l0e/18G0sLo9Ss6seB+bYz87/q1lWMux7XU5FbqbWGHV2NLHA/ymPhyk148VUI
MTY0IehjPHlgYzEZwkNYy9UZPoVXV/qx9bWV09aYMDqQAa3IYdod+dDbnhzNuntqyf8ArNnqLF63
9SusdJD7mt+14jJcb6uWtE+62n6dftG5/wDOVM/0q5sOC6P6v/XbqfR4psJzMMQPRscdzABtAos9
3pt/4P8Am0/jhPTKKP8AnI/93Fr/AHfmOV9XJT48fXlMx9H/AKb5f8l/ddj/ABe9TquryegZh31X
tc6it3BBBblUjX85n6TY3/hlZ+rfQLOjfXK/GO51Axn241rolzC+pnu2/n17vTf/AJ60MTB+rXX7
aOr9IeMXNx3stf6IDHgzudXl44+l6n6Rnqf4T/S21rp9rdwdA3AEA94PP5FZx4rjCyD7ZuEx+lHs
4fO/EOHJzHtwlj+9w4OZ5fIOE4s8f8rH+88N9bei39Z+t2HiV+1jsVrrrP3a2WW+o/v+81lf/CKP
1/6hXgYGJ9XsM7atjXXN1JFdcNxmbj+89m9/5/6Ji7va3dugboie8Lms/p31c6Rk5HWutWDKyshz
n1NuhxhujKcXG/PdUz0a/Uf9D/gUsmKhMgge4fVM/owRyXxASyctHJCWSPKRrBgxjjln5k7ZP6vA
8V0T6ndY6vstaz7PiOInIt0Bb+9VX9O72/Q/wX/CrefnfVb6oiyvpzf2h1dssdY8yGH87dY0emxr
fzq6P0v+Ctesn6wfXjqPVZoxd2Fh6gsY473g+39NY2Pbt/wTP+uequalVeOGPTGOKX+cl/3EXoBy
vM836udl7WI/+BMJ+b/zpzf5T+5B0esde6n1m4W5tstb/N0t9tbf6lf738t36RZ6aVKqq26xtVLH
W2OMNYwFzif5LWqEkyNk2S6MI48UBCEY44RGkY+mMWK6j6g9Efn9Vbn21k4mEd4fw03CDVX/ACvT
/nvb/wAHv/nE3QvqH1XqFgsz2uwMUEbt4i137za6nfQ/4y3/AMFXpODg4uBi14mJWKqKhDWj8XOP
5znK1y/LyMhOQqI1AP6ThfGfjOOGKfL4JCeWY4ZyifTiifm9X77/AP/U9VWJ9aPrFd9XsWvMGC/M
xnEtusY8NFRO30vU9r/ZbLm+p+//AMbWttBy8XHzMa3EyWCyi9hrtYZEtcNrhLYc3+ygbrQ0V2Mx
EwZx4o36o7aPAW/42cawQ7pBsHg+5v8A6RchD/GliH/vDr/7eb/7yrG+tP1E6n0S2zIxmOy+mlx2
WMBdZW2N23KY0e3b9H12/ov+J9T0ly0jxVWWTKDRNfQO9g5TkckRKEeKJ/rz/wC+fQz/AIzqHfQ6
NU342g/+67Umf4xMzIsbTidIofc8wxgDrHE+DWVta5ywvq99Ruu9Z23Fn2PDJg5F4IJGkmmj22W8
/wDB0/8ACr0/oP1Y6T0Ks/Y65ve0NtyXmXuA1/q1s3fmVIxjmlvLhHkFnMZfhuAVHF7uX93jycI/
vy403RW9V+zm3qlWPj3WQW047TLR+7dY5722P/4v/prRSSVkChTiZJ8cjKhG/wBGPyjyUqPV29RO
L6nTaqLsmuSKsgGHCNa63tcz07Hfy/0avJJEWKVCfBISoSo7S+U+b53Z/jAzsW11GZ0mll1Zh9Z3
McO/0Xtcnb/jLr/O6RWfhaB/6Icux6z9Xul9aqDM6qXsBFdzDtsZP7rv++Wb615v136j9Z6SDbW3
7biD/C0g7mjxuo9z2f1merWqmQcxDUS4o+Qt6HksnwnmQI5MIw5f3TkyRhL/AGc+P/muz/45mP8A
+U7P+3h/7zojP8aGMzQdMLZ522j/ANJNXnpeFv8A1b+p/Uut2stex2P08EGy942lzZ9wxtzT6j/5
f80o4Zc8jQN/SLb5j4f8LxQM8kOGI75Mv/fvov1a+sbuv1XXtw341FRDW2OduD3Gd7WQ1v8AN/nf
11toGFh42Bi1YeKwVUUt2sYPD/vznfSe5HV+IIA4jZ6l5XPLHLJI4oe3jv0QviqPiZP/1fVUlwv1
F/xi5f1p6xd067Crxm047r97HlxJa+qrb7mt/wBMuw6nluwum5eYxoe7GosuawmA41tdYGl2v0tq
Sm0oejT6nq7G+rEb4G6PDd9Jcj9Qvr1k/WsdQN2IzG+wtqLdji7cbPW53D/gVW+on+MTL+tXVbsC
7Crxm045v3se5xJD66tu1zf+FSU90ks36x9Vs6N0PM6pVWLn4tfqNrcSAdQNXBY/1C+uWR9bMXLv
vxmYv2axrGhji6dzd2u4NSU9UkuW+vn11H1Tw8WyugZWTl2FtdTnFrQxgBus3NDvoufUzb/wih9Q
/r3V9bKslltTcXNxnAmhrtwdU76NzN21/ts9lv7n6L/SpKesSXC9W/xi5fRvrhX0HqGFWzDtsrDc
zeW/oroay8h42babD+m/4q1df1bPb0zpeZ1Fzd4w6LL9kxu9NrrNm7+Xt2JKbaS83+qv+Nq3rXXc
XpeZhVYteUXMbc2wmLNpdU3a9v8AhXt9H+vYu1+svW6+g9Dy+q2N3/ZmSyvjdY4iuln9q17NySm+
aKDaLjWw2jQWbRuj+v8ASRFyX1T+u56v0HM6/wBXZT03BxbDWH7nGdrWOe87m+7e+6uqllf6Sy39
GsD/AMd3qOflW09A6BdnMq13Ave8s4bY+jGps9H/ALdsSVZfTEl59gfX366ZOfjY9/1WyKKbra67
bnV3gMY5zW2WuLqQ32MO5Wfr5/jDy/qr1PHwqcOvJZfQLi973NIO99e3a1p/cSU//9bkPqH9YMvo
HW78zE6dZ1SyzHfSaKi5rmtNlNnreyrI9rfS2fQ/wi7Tqf8AjM63l9Ny8V/1WyqWX0WVutL7CGBz
HNdY79Tb/Nt9/wBJY/8AiYa5v1sy9zS2cG2JEf4bFXrH1hn9gdSjn7JfEf8AFvSU+b/4kOOuf1cb
/wB2ln/4k/8AxS5n/hJ3/n3HWl/iOaQ7rQcIkYvPgftSxuijqn+LX60W3dUwrcjCsrdjnIqadr63
OrtbdjvdtqfY30m/oX2JKfT/AK//APiN6t/4XP5WrlP8SH/JnU/+Pr/6gql9bP8AGdhfWHol/Ruh
4OW7JzdrHOsYzRgc179jKH5LrXWbPS/wahhVZv1H/wAXWYc1pp6t1ywsxaGki1jH1hm+xo99VtNX
r3e3+ae/Hrs9O1JTXd1Cr62f4zm5F+VVT0npFgdVY+xnpmvGeNnpvdtrt+3Zf6T/AML2f8CgZPUM
T6nf4yzn4VjLOlZp32ei9jm+jkH9Yb+i3NZ9my2Ouro/4CpXfqX/AIqendY6BT1Pq92TTdlFz6a6
HMaBT9Gp1jbqLffZtfb7X/zL6kvrr/iq6d0boF3U+kW5V92K5r7q7nMePR+ja9jaaKnbq3OZY73b
PR9VJTe/x1dC9XFw+vVNl2OfsuTEk+m8mzHf+61ldvqs/wDQhil9b/rY3I/xX4NrLC/J6u2vHtcT
D91X9PfH5zPWx/Rd/wCGFb+qtx+un+LvJ6Hlv/Xsdn2UueYMs23dNyLNg3+nuZXW/wD032e5eXdO
xuodSzenfVy/eykZprALfdW691NOX/223G3bP+MSU7XXui/82+m/VPruKP09tTb7fbA9Vr29QodY
8fSs9PJ9D/i8VdT/AI3vrBjZX1f6Ti4ji8dTc3NaQQD6LWfohZX9L9M/J9n/AIXeul/xk9DZ1P6n
ZVVLALOngZWO0GABSD6rYH0v1R17WM/f2Lyv6ldPy/rJ9Zuk4WZL8XpzJMiIope/KFTv32vyL/R/
qWpKeu+vXTn/AFb/AMWPT+j1wHPvqrzCDIc9zbs2/wB35zftNXs/kLI+o/1xzPq/0UY+D9W7843W
OfbnVueBYQdrW+3Fv/mW/o9vq/8AnxekfXn6uWfWP6u39PoLRlBzLsZzzDQ9h7kT9Op1tX9ted/V
b67dY+pVFnQeudLvfTTY40x7H17jutY3cPSyKH2fparGP/P/AMLXZX6aU9Lgf4zOtZWdjYtn1Xya
GZFrKnXOfYQwPc1hsdOGz6G7d9Jcx/jt/wDFBg/+Ex/58tXT4P8Aje6dm52Nht6bksdlXV0te4sg
Gxzag4/5y5r/AB1se76wYO1pP6mOBP8AhLUlP//Z/+0e+FBob3Rvc2hvcCAzLjAAOEJJTQQlAAAA
AAAQAAAAAAAAAAAAAAAAAAAAADhCSU0D7QAAAAAAEAEsAAAAAQACASwAAAABAAI4QklNBCYAAAAA
AA4AAAAAAAAAAAAAP4AAADhCSU0EDQAAAAAABAAAAB44QklNBBkAAAAAAAQAAAAeOEJJTQPzAAAA
AAAJAAAAAAAAAAABADhCSU0ECgAAAAAAAQAAOEJJTScQAAAAAAAKAAEAAAAAAAAAAjhCSU0D9QAA
AAAASAAvZmYAAQBsZmYABgAAAAAAAQAvZmYAAQChmZoABgAAAAAAAQAyAAAAAQBaAAAABgAAAAAA
AQA1AAAAAQAtAAAABgAAAAAAAThCSU0D+AAAAAAAcAAA/////////////////////////////wPo
AAAAAP////////////////////////////8D6AAAAAD/////////////////////////////A+gA
AAAA/////////////////////////////wPoAAA4QklNBAgAAAAAABAAAAABAAACQAAAAkAAAAAA
OEJJTQQeAAAAAAAEAAAAADhCSU0EGgAAAAADWwAAAAYAAAAAAAAAAAAAASwAAAEsAAAAEwBIAFcA
XwBQAE8AUwBfAFIARwBCAF8AVgBlAHIAdABpAGMAYQBsAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAB
AAAAAAAAAAAAAAEsAAABLAAAAAAAAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAAAAABAAAAAB
AAAAAAAAbnVsbAAAAAIAAAAGYm91bmRzT2JqYwAAAAEAAAAAAABSY3QxAAAABAAAAABUb3AgbG9u
ZwAAAAAAAAAATGVmdGxvbmcAAAAAAAAAAEJ0b21sb25nAAABLAAAAABSZ2h0bG9uZwAAASwAAAAG
c2xpY2VzVmxMcwAAAAFPYmpjAAAAAQAAAAAABXNsaWNlAAAAEgAAAAdzbGljZUlEbG9uZwAAAAAA
AAAHZ3JvdXBJRGxvbmcAAAAAAAAABm9yaWdpbmVudW0AAAAMRVNsaWNlT3JpZ2luAAAADWF1dG9H
ZW5lcmF0ZWQAAAAAVHlwZWVudW0AAAAKRVNsaWNlVHlwZQAAAABJbWcgAAAABmJvdW5kc09iamMA
AAABAAAAAAAAUmN0MQAAAAQAAAAAVG9wIGxvbmcAAAAAAAAAAExlZnRsb25nAAAAAAAAAABCdG9t
bG9uZwAAASwAAAAAUmdodGxvbmcAAAEsAAAAA3VybFRFWFQAAAABAAAAAAAAbnVsbFRFWFQAAAAB
AAAAAAAATXNnZVRFWFQAAAABAAAAAAAGYWx0VGFnVEVYVAAAAAEAAAAAAA5jZWxsVGV4dElzSFRN
TGJvb2wBAAAACGNlbGxUZXh0VEVYVAAAAAEAAAAAAAlob3J6QWxpZ25lbnVtAAAAD0VTbGljZUhv
cnpBbGlnbgAAAAdkZWZhdWx0AAAACXZlcnRBbGlnbmVudW0AAAAPRVNsaWNlVmVydEFsaWduAAAA
B2RlZmF1bHQAAAALYmdDb2xvclR5cGVlbnVtAAAAEUVTbGljZUJHQ29sb3JUeXBlAAAAAE5vbmUA
AAAJdG9wT3V0c2V0bG9uZwAAAAAAAAAKbGVmdE91dHNldGxvbmcAAAAAAAAADGJvdHRvbU91dHNl
dGxvbmcAAAAAAAAAC3JpZ2h0T3V0c2V0bG9uZwAAAAAAOEJJTQQRAAAAAAABAQA4QklNBBQAAAAA
AAQAAAABOEJJTQQMAAAAABk4AAAAAQAAAIAAAACAAAABgAAAwAAAABkcABgAAf/Y/+AAEEpGSUYA
AQIBAEgASAAA/+0ADEFkb2JlX0NNAAL/7gAOQWRvYmUAZIAAAAAB/9sAhAAMCAgICQgMCQkMEQsK
CxEVDwwMDxUYExMVExMYEQwMDAwMDBEMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMAQ0LCw0O
DRAODhAUDg4OFBQODg4OFBEMDAwMDBERDAwMDAwMEQwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwM
DAz/wAARCACAAIADASIAAhEBAxEB/90ABAAI/8QBPwAAAQUBAQEBAQEAAAAAAAAAAwABAgQFBgcI
CQoLAQABBQEBAQEBAQAAAAAAAAABAAIDBAUGBwgJCgsQAAEEAQMCBAIFBwYIBQMMMwEAAhEDBCES
MQVBUWETInGBMgYUkaGxQiMkFVLBYjM0coLRQwclklPw4fFjczUWorKDJkSTVGRFwqN0NhfSVeJl
8rOEw9N14/NGJ5SkhbSVxNTk9KW1xdXl9VZmdoaWprbG1ub2N0dXZ3eHl6e3x9fn9xEAAgIBAgQE
AwQFBgcHBgU1AQACEQMhMRIEQVFhcSITBTKBkRShsUIjwVLR8DMkYuFygpJDUxVjczTxJQYWorKD
ByY1wtJEk1SjF2RFVTZ0ZeLys4TD03Xj80aUpIW0lcTU5PSltcXV5fVWZnaGlqa2xtbm9ic3R1dn
d4eXp7fH/9oADAMBAAIRAxEAPwD1VJJJJSkkkklMbLK6q3W2uDK2Aue9xgBoEuc5x+i1q4HrP+MH
KuvfR0eKcdp2jJe2bHx+fWx/sqZ/xjPU/wCL+grX+Mnq7qqKekVGDf8Apr/6jTFLP7VrXP8A+srg
6OVS5nmJCXBA1XzF6b4J8IxSwjmuYiJ8d+1jl8giP05R/S4nYGf1LIsbbdl3WWMcH1l9jnBrgd7X
Nrn0/a4fuL0/pOeOo9OozANptb729g9p2WtH9Wxrl5Xj9l3f1Jyd+FfiHmize3+rYJ/8+MtTeUmf
cIJ+YfiFfHcETgE4xEfal0HD+rn6Zf8AP4HfyshmNj2Xv+jW0mPE9m/2nLlqX5W91oteyyxxe/Y4
gFxO53t+itr6w3BmIynvdYBHk39If+k1qyqNvdO5qZ4xEH5R/wA4uXyUOHDKZF8Z/wCbFu43WMql
wblfpauC4CHj+V7fa9bVdjLa22VuDmPEtcOCCucu2xorXQcpwtswzqyDbX5agWN/tbt/+ejy+aXE
ISNg7E91vMcvEwOSEeEx1kBsYu2kkkrjnqSSSSU//9D1VDyMinFosyL3BlNLS+x57NaJcdERcJ/j
J6y9vo9HpfDXj1soDuJ/QVu/tNdZs/4lMy5BjgZfZ5trkOUlzfMQwg0JG5y/dxx+YtfG/wAYN5+s
BuuBb0q2KhUeWMn25J27v0v51rf+t/mVr0Fj2WMbZW4PY8BzXNMgg6tc1wXhK7z/ABffWPjomW4a
ScNx/wA+ygu/8Eq/zP8ARKpy3MEy4Zm+I6H+t2d7418GxxwDPy0OH2YiOSA/SxR/yn9+H6bn/wCM
qf29TP8A3FZH+fcuYpMH5ruv8ZnTi6rE6mxv0CaLTrMO/SU/2WuF3/bi4Jhg/FQ8yCMsr66up8Gy
RyfDsPD+jEwl/ehJ1Md3C6z6k2kdTsrnSygkjzY9m3/z49cZRYum+p10ddob/pGWN/6Pqf8AotDA
ayQ8x+LB8UxXy2bwhKX+J63f+s1/69RT2ZWX/wCe7b/6KVGu+Am+tN8daLf3aax95scs9t+idmN5
Z+f5OZy2D+jYtN43/jep0n3yOVZ6C/d1YR2qeT97FiuyNOVt/VGl1l+RmH6DGilh8SYts/zW+ijh
F5Y+d/Yt5rGMfLZZH93h/wAf0vTEgCTwuH6n9dMlnV2W4WuDjF1bqyRF4JiyyfzPo/qjv++Xekr3
10656bD0jHJD7Gg5TweK3f4DT867/Cf8B/xy4W+xS8zzBEuCBrhPqPj2T8G+FxnA5s8BL3AY44S/
cl/lP8L9B9ewszHzsSrLxnb6bmh7D8fzXfy2/Rejrz//ABedZLMy7pFrzsvm3Haez2ibmj/jKx6n
/Wl6ArOHJ7kBL6Hzcn4jycuU5meE6x+bHL97HL5f+8f/0fVV459a7n3fWTqD7PpC4sH9WsCpn/QY
vY15j/jE6S/E6wOoNE0ZwBJjRtjA2t7P7bdln/birc5EnGCOh1dv/i5lhDnJRlpLJjMYeYInwvKq
THvre2xhLXtIc1wMEEahzSop1nPZDUPqGHmVfXL6r347i2rLLdlrQdG2tiym3853oWvY3/wWpeY2
1vqsfVYNtlbi17TyCDtcFp/Vnrj+idUryTudjP8AZk1tPLD+d/Wqd+kb/wBt/nrU+vnSKsfNr6ti
e/E6kPU3M1YLIDtwc32/rDT638v9MrGQ+7jE/wBOHpn/AHf0ZORyeP7jzk+W25fmrzct2hlj/O4f
8R5uqyF0H1Ru/wCyLAju94++q0LmVufU55P1lwB/Ld/1D1Fi/nIf3o/m3efgDyvMH/VZP/ScnY+t
d0fWLJb+62of9Dd/35ZzchE+uVu360Zo8qv/AD0xZIyPNHKf1k/70vzafKYL5Tlz3xYz/wCNxdJ2
Rp3J7Ack+AXanIH1X+rVbrgHZR0azkOvt3W7Jb/g6vd7/wDRVLlvqf09vUep/abnBuL0/bfYSdC/
U0NJkbWt2es//i1S+s/X/wBrdTfbW4nFpmvFGoG38+7afzr3f+BempMcvbgcn6UvTD/upNXmOW+9
czDlR/NYazcz/eP8zh/wv+g0r8h73OfY4vseS6x55c5xl73f1lSttlRsulCJJ5Ve3cx4hF0Pq897
Ov8ATnNME5VQJHg57WuH9prl7OvK/qH0h3UOtsyHD9BgRc8/y/8AtOz/ADx6v/WV6otDkgRAk9To
8r/xmyQlzWOEdZY4evw4zxRi/wD/0ruB/jPvpzslmfT9pwnXvND69rba6y52yuPbXkbWbP8ARP8A
+Eeuyc/oX1q6Y+llrMqh4BO0j1KnHcGP2u99Fzfds3s/8DXieXQ7DzsjDf8ASxrX1O76sc6v/vqs
YGdlYOQzKw7XU31mWvafwP5rmO/PY72Kl78o2Jjii9NL4VhyiOTl5exliBKMo/ISPlP9X/Adjr/1
a6h0K/bePUxnmKcpohrv5LufTt/4NZMrt+k/4wcXLxXYH1kpFjLAGOvY2WuaeXX0t+jt+nvx/wDt
pB639RQ+s9Q+rljcvEcC70Gu3OEf9x7Jd6/53s3er/xqinhErliPEOsf04uhy3xHJiMcPPx9rIfT
DP8A+B83+H/k5vHLtfqtlVde6Nk/VrPeDcxu/p736kQPaGaf9pn/AMvf6FllX8zWuLc1zXFjwWuG
hB0IPmj9Pzb+n5tObR/O0PD2gzBj6THR+Y9vseosc+CWvyn0yHeJb3O8v94wmMTw5YEZMGT9zND1
Ql/3MkeRj3Y19mPe3ZdS4ssboYc07XDRbP1Ibu+tGCPOw/dVaVsfXXplGfhUfWjpzSa72N+1CdQC
GsqsLW7m76/5i/8ASf6P/hFmfUGov+s2O7/RMscfmx1f/oxPGMwzxjuOKJie8WtPm48x8Mz5a4Zj
Flhlh/m80YSjOH+Mr6+tLfrNknjeyo/9Brf++rBqbdbYyqppsssIaxjRJLidrWtA/eXS/wCMZm36
wg/v0MP4vb/31WPqN0imqu/6x9QaRjYbXOx5HJaCbbmt/P8AS+hX/wAL/wAJUjLGZ55RH7xJPaKM
HNx5f4VgzSHERihCEOuTLXBCAT/WC7/m19XMfoGOW/bM1pfm2NOsGBb+77bXfq9b/wDQUvXEEkq1
1bqNvU+pZGfbo695cG87Wj21VyA3+brDWKqAXENaJJ0AHJKjyz4pafKPTEf1Q2eR5Y4MPrN5shOX
PP8Aey5Pm/wYfJFZanQfq9ndcyhTjtLKWn9NkuHsYP8Av9v7lX/ov9Itvon1Etcxuf154wsJo3uq
c7Y8jt6zne3HZ+9/hv8Ailc6j9esDp+I3p31boDWVAsbc9sMaP36mH33Pd9P1L/z/p+tvT4YQBxZ
Twx6R/Tm1uY+IzySOD4fH38vyyzf+BuX/vZP05/1Hp8dnRfqv0xlL7a8algJc95h9rwP0lm3+cut
d+6z+ouU6n/jHvuurr6ZSaKQ9pfbZBe5oLdzBX7mVfnfnW/9bXIZudmZ+Q7JzLXX3P5c49udrR9F
jP5DEOis3X10t1dY9rAPNx2p8+akajjHBHYfvMHLfAsMDLNzcvvOaVykZfzYl+keH9P/AKo//9N/
8Z3QLcXqY6zSz9VzNrbnD828Dbq2Pa26pjHf8b6q41j17/m4WLn4tmHmVNux7m7bK3cEf99c13uY
9v0F5Z9Yv8XHVem2G7pQf1HDMna0D1ma+1jqx/SPb/hKW/8AWa1VzYTZkBdu78M+IxEY4skuGUdI
k7Sj/wB8801y0uj9c6l0e/18G0sLo9Ss6seB+bYz87/q1lWMux7XU5FbqbWGHV2NLHA/ymPhyk14
8VUIMTY0IehjPHlgYzEZwkNYy9UZPoVXV/qx9bWV09aYMDqQAa3IYdod+dDbnhzNuntqyf8ArNnq
LF639SusdJD7mt+14jJcb6uWtE+62n6dftG5/wDOVM/0q5sOC6P6v/XbqfR4psJzMMQPRscdzABt
Aos93pt/4P8Am0/jhPTKKP8AnI/93Fr/AHfmOV9XJT48fXlMx9H/AKb5f8l/ddj/ABe9TquryegZ
h31Xtc6it3BBBblUjX85n6TY3/hlZ+rfQLOjfXK/GO51Axn241rolzC+pnu2/n17vTf/AJ60MTB+
rXX7aOr9IeMXNx3stf6IDHgzudXl44+l6n6Rnqf4T/S21rp9rdwdA3AEA94PP5FZx4rjCyD7ZuEx
+lHs4fO/EOHJzHtwlj+9w4OZ5fIOE4s8f8rH+88N9bei39Z+t2HiV+1jsVrrrP3a2WW+o/v+81lf
/CKP1/6hXgYGJ9XsM7atjXXN1JFdcNxmbj+89m9/5/6Ji7va3dugboie8Lms/p31c6Rk5HWutWDK
yshzn1NuhxhujKcXG/PdUz0a/Uf9D/gUsmKhMgge4fVM/owRyXxASyctHJCWSPKRrBgxjjln5k7Z
P6vA8V0T6ndY6vstaz7PiOInIt0Bb+9VX9O72/Q/wX/CrefnfVb6oiyvpzf2h1dssdY8yGH87dY0
emxrfzq6P0v+Ctesn6wfXjqPVZoxd2Fh6gsY473g+39NY2Pbt/wTP+uequalVeOGPTGOKX+cl/3E
XoByvM836udl7WI/+BMJ+b/zpzf5T+5B0esde6n1m4W5tstb/N0t9tbf6lf738t36RZ6aVKqq26x
tVLHW2OMNYwFzif5LWqEkyNk2S6MI48UBCEY44RGkY+mMWK6j6g9Efn9Vbn21k4mEd4fw03CDVX/
ACvT/nvb/wAHv/nE3QvqH1XqFgsz2uwMUEbt4i137za6nfQ/4y3/AMFXpODg4uBi14mJWKqKhDWj
8XOP5znK1y/LyMhOQqI1AP6ThfGfjOOGKfL4JCeWY4ZyifTiifm9X77/AP/U9VWJ9aPrFd9XsWvM
GC/MxnEtusY8NFRO30vU9r/ZbLm+p+//AMbWttBy8XHzMa3EyWCyi9hrtYZEtcNrhLYc3+ygbrQ0
V2MxEwZx4o36o7aPAW/42cawQ7pBsHg+5v8A6RchD/GliH/vDr/7eb/7yrG+tP1E6n0S2zIxmOy+
mlx2WMBdZW2N23KY0e3b9H12/ov+J9T0ly0jxVWWTKDRNfQO9g5TkckRKEeKJ/rz/wC+fQz/AIzq
HfQ6NU342g/+67Umf4xMzIsbTidIofc8wxgDrHE+DWVta5ywvq99Ruu9Z23Fn2PDJg5F4IJGkmmj
22W8/wDB0/8ACr0/oP1Y6T0Ks/Y65ve0NtyXmXuA1/q1s3fmVIxjmlvLhHkFnMZfhuAVHF7uX93j
ycI/vy403RW9V+zm3qlWPj3WQW047TLR+7dY5722P/4v/prRSSVkChTiZJ8cjKhG/wBGPyjyUqPV
29ROL6nTaqLsmuSKsgGHCNa63tcz07Hfy/0avJJEWKVCfBISoSo7S+U+b53Z/jAzsW11GZ0mll1Z
h9Z3McO/0Xtcnb/jLr/O6RWfhaB/6Icux6z9Xul9aqDM6qXsBFdzDtsZP7rv++Wb615v136j9Z6S
DbW37biD/C0g7mjxuo9z2f1merWqmQcxDUS4o+Qt6HksnwnmQI5MIw5f3TkyRhL/AGc+P/muz/45
mP8A+U7P+3h/7zojP8aGMzQdMLZ522j/ANJNXnpeFv8A1b+p/Uut2stex2P08EGy942lzZ9wxtzT
6j/5f80o4Zc8jQN/SLb5j4f8LxQM8kOGI75Mv/fvov1a+sbuv1XXtw341FRDW2OduD3Gd7WQ1v8A
N/nf11toGFh42Bi1YeKwVUUt2sYPD/vznfSe5HV+IIA4jZ6l5XPLHLJI4oe3jv0QviqPiZP/1fVU
lwv1F/xi5f1p6xd067Crxm047r97HlxJa+qrb7mt/wBMuw6nluwum5eYxoe7GosuawmA41tdYGl2
v0tqSm0oejT6nq7G+rEb4G6PDd9Jcj9Qvr1k/WsdQN2IzG+wtqLdji7cbPW53D/gVW+on+MTL+tX
VbsC7Crxm045v3se5xJD66tu1zf+FSU90ks36x9Vs6N0PM6pVWLn4tfqNrcSAdQNXBY/1C+uWR9b
MXLvvxmYv2axrGhji6dzd2u4NSU9UkuW+vn11H1Tw8WyugZWTl2FtdTnFrQxgBus3NDvoufUzb/w
ih9Q/r3V9bKslltTcXNxnAmhrtwdU76NzN21/ts9lv7n6L/SpKesSXC9W/xi5fRvrhX0HqGFWzDt
srDczeW/oroay8h42babD+m/4q1df1bPb0zpeZ1Fzd4w6LL9kxu9NrrNm7+Xt2JKbaS83+qv+Nq3
rXXcXpeZhVYteUXMbc2wmLNpdU3a9v8AhXt9H+vYu1+svW6+g9Dy+q2N3/ZmSyvjdY4iuln9q17N
ySm+aKDaLjWw2jQWbRuj+v8ASRFyX1T+u56v0HM6/wBXZT03BxbDWH7nGdrWOe87m+7e+6uqllf6
Sy39GsD/AMd3qOflW09A6BdnMq13Ave8s4bY+jGps9H/ALdsSVZfTEl59gfX366ZOfjY9/1WyKKb
ra67bnV3gMY5zW2WuLqQ32MO5Wfr5/jDy/qr1PHwqcOvJZfQLi973NIO99e3a1p/cSU//9bkPqH9
YMvoHW78zE6dZ1SyzHfSaKi5rmtNlNnreyrI9rfS2fQ/wi7Tqf8AjM63l9Ny8V/1WyqWX0WVutL7
CGBzHNdY79Tb/Nt9/wBJY/8AiYa5v1sy9zS2cG2JEf4bFXrH1hn9gdSjn7JfEf8AFvSU+b/4kOOu
f1cb/wB2ln/4k/8AxS5n/hJ3/n3HWl/iOaQ7rQcIkYvPgftSxuijqn+LX60W3dUwrcjCsrdjnIqa
dr63OrtbdjvdtqfY30m/oX2JKfT/AK//APiN6t/4XP5WrlP8SH/JnU/+Pr/6gql9bP8AGdhfWHol
/Ruh4OW7JzdrHOsYzRgc179jKH5LrXWbPS/wahhVZv1H/wAXWYc1pp6t1ywsxaGki1jH1hm+xo99
VtNXr3e3+ae/Hrs9O1JTXd1Cr62f4zm5F+VVT0npFgdVY+xnpmvGeNnpvdtrt+3Zf6T/AML2f8Cg
ZPUMT6nf4yzn4VjLOlZp32ei9jm+jkH9Yb+i3NZ9my2Ouro/4CpXfqX/AIqendY6BT1Pq92TTdlF
z6a6HMaBT9Gp1jbqLffZtfb7X/zL6kvrr/iq6d0boF3U+kW5V92K5r7q7nMePR+ja9jaaKnbq3OZ
Y73bPR9VJTe/x1dC9XFw+vVNl2OfsuTEk+m8mzHf+61ldvqs/wDQhil9b/rY3I/xX4NrLC/J6u2v
HtcTD91X9PfH5zPWx/Rd/wCGFb+qtx+un+LvJ6Hlv/Xsdn2UueYMs23dNyLNg3+nuZXW/wD032e5
eXdOxuodSzenfVy/eykZprALfdW691NOX/223G3bP+MSU7XXui/82+m/VPruKP09tTb7fbA9Vr29
QodY8fSs9PJ9D/i8VdT/AI3vrBjZX1f6Ti4ji8dTc3NaQQD6LWfohZX9L9M/J9n/AIXeul/xk9DZ
1P6nZVVLALOngZWO0GABSD6rYH0v1R17WM/f2Lyv6ldPy/rJ9Zuk4WZL8XpzJMiIope/KFTv32vy
L/R/qWpKeu+vXTn/AFb/AMWPT+j1wHPvqrzCDIc9zbs2/wB35zftNXs/kLI+o/1xzPq/0UY+D9W7
843WOfbnVueBYQdrW+3Fv/mW/o9vq/8AnxekfXn6uWfWP6u39PoLRlBzLsZzzDQ9h7kT9Op1tX9t
ed/Vb67dY+pVFnQeudLvfTTY40x7H17jutY3cPSyKH2fparGP/P/AMLXZX6aU9Lgf4zOtZWdjYtn
1XyaGZFrKnXOfYQwPc1hsdOGz6G7d9Jcx/jt/wDFBg/+Ex/58tXT4P8Aje6dm52Nht6bksdlXV0t
e4sgGxzag4/5y5r/AB1se76wYO1pP6mOBP8AhLUlP//ZOEJJTQQhAAAAAABVAAAAAQEAAAAPAEEA
ZABvAGIAZQAgAFAAaABvAHQAbwBzAGgAbwBwAAAAEwBBAGQAbwBiAGUAIABQAGgAbwB0AG8AcwBo
AG8AcAAgADcALgAwAAAAAQA4QklNBAYAAAAAAAcAAQAAAAEBAP/hEkhodHRwOi8vbnMuYWRvYmUu
Y29tL3hhcC8xLjAvADw/eHBhY2tldCBiZWdpbj0n77u/JyBpZD0nVzVNME1wQ2VoaUh6cmVTek5U
Y3prYzlkJz8+Cjw/YWRvYmUteGFwLWZpbHRlcnMgZXNjPSJDUiI/Pgo8eDp4YXBtZXRhIHhtbG5z
Ong9J2Fkb2JlOm5zOm1ldGEvJyB4OnhhcHRrPSdYTVAgdG9vbGtpdCAyLjguMi0zMywgZnJhbWV3
b3JrIDEuNSc+CjxyZGY6UkRGIHhtbG5zOnJkZj0naHR0cDovL3d3dy53My5vcmcvMTk5OS8wMi8y
Mi1yZGYtc3ludGF4LW5zIycgeG1sbnM6aVg9J2h0dHA6Ly9ucy5hZG9iZS5jb20vaVgvMS4wLyc+
CgogPHJkZjpEZXNjcmlwdGlvbiBhYm91dD0ndXVpZDplNjUwZWE2NC1kNjdhLTExZGEtYmI1YS1l
MmZlYThiNzYxZTMnCiAgeG1sbnM6eGFwTU09J2h0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9t
bS8nPgogIDx4YXBNTTpEb2N1bWVudElEPmFkb2JlOmRvY2lkOnBob3Rvc2hvcDplNjUwZWE2Mi1k
NjdhLTExZGEtYmI1YS1lMmZlYThiNzYxZTM8L3hhcE1NOkRvY3VtZW50SUQ+CiA8L3JkZjpEZXNj
cmlwdGlvbj4KCjwvcmRmOlJERj4KPC94OnhhcG1ldGE+CiAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgCiAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAKICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAK
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIAogICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgCiAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgCiAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAKICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAKICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIAogICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgCiAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAKICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgIAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAKICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIAogICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIAog
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgCiAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAKICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAKICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIAogICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIAogICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgCiAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIAo8P3hwYWNrZXQgZW5kPSd3Jz8+/+4ADkFkb2JlAGSA
AAAAAf/bAIQADAgICAkIDAkJDBELCgsRFQ8MDA8VGBMTFRMTGBEMDAwMDAwRDAwMDAwMDAwMDAwM
DAwMDAwMDAwMDAwMDAwMDAENCwsNDg0QDg4QFA4ODhQUDg4ODhQRDAwMDAwREQwMDAwMDBEMDAwM
DAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwM/8AAEQgBLAEsAwEiAAIRAQMRAf/dAAQAE//EAT8AAAEF
AQEBAQEBAAAAAAAAAAMAAQIEBQYHCAkKCwEAAQUBAQEBAQEAAAAAAAAAAQACAwQFBgcICQoLEAAB
BAEDAgQCBQcGCAUDDDMBAAIRAwQhEjEFQVFhEyJxgTIGFJGhsUIjJBVSwWIzNHKC0UMHJZJT8OHx
Y3M1FqKygyZEk1RkRcKjdDYX0lXiZfKzhMPTdePzRieUpIW0lcTU5PSltcXV5fVWZnaGlqa2xtbm
9jdHV2d3h5ent8fX5/cRAAICAQIEBAMEBQYHBwYFNQEAAhEDITESBEFRYXEiEwUygZEUobFCI8FS
0fAzJGLhcoKSQ1MVY3M08SUGFqKygwcmNcLSRJNUoxdkRVU2dGXi8rOEw9N14/NGlKSFtJXE1OT0
pbXF1eX1VmZ2hpamtsbW5vYnN0dXZ3eHl6e3x//aAAwDAQACEQMRAD8A9VSSSSUpJJJJSkkkklKS
SSSUpJJJJSkklm9d67h9Ewzk5J3PdpTSPpPd4D+T++9AkRBJNAL8WKeWcceOJnOZqMR1buRk4+LS
6/JsbTUwS57yAB965XqP+MTCqca+m0uySNPVf7Gf2W/zj/8AoLiur9e6h1rIN2W/2A/oqG6MYP5L
f3v5aqsCo5eckTWP0jv+k9Ryf/F3FjiJc0fcn/m4nhxx/wAL5pvR5H106/kyG3Nx2ntU0Aj+2/e5
Ubc/PyiTkZNts8hzyR/mztVJgR2BV5ZJy+aRP1dCPLYMX83ihD+7EA/4z6H9UuqOzumiq1034sMc
TyW/4J//AHxbi89+quacPq1QJivI/RP+f82f89ehLR5bJx4xe8fSXk/ivLjDzMuEVDJ+sj9fmj/j
KSSSUzQUuT6lm2ZWa6xji1jPZXBjQfnafvLf6xknHwXlph9nsb8+f+iuYYxVObntAf3j+x0vh2IA
Sykf1I/902KeodQqjbe4gdne4f8ASV6jr+S3S6ttg8W+0/xas9rFLYq8cuSO0i2cmLDP5oR+g4T/
AM16LF6li5WjHbX/ALjtD/5krS5IsI1GhHBWp07q7gRRlGQdGWn8j/8AyStYuZBNT0Pfo0c/JUDL
EeIdYn5vo7KSSSstJSSSSSlJJJJKUkkkkpSSSSSlJJJJKf/Q9VSSSSUpJJJJSkkkklKSSSSUpJJJ
JSHLyqMPGtysh2ymlpe93kF4913rOT1nqD8u4kM+jTX2Yz81n/k12H+Mrqrq6Mfpdbo9b9LcB+60
xU3+0/c7/ra8+WfzmUmXANo7/wB567/i5yMYYfvUx+sy2Mf9TEP+/SMR2IDEdiqu5NsMVhirsVhi
LVm2KyWkObo5pkHzC9OwskZWHTkD/Csa75ke5eYsXc/VHI9XpfpE60PLY8j+kb/1StcnKpmP7w/6
Lg/HMXFhhk645V/gzdxJJJX3nXA+sN27IqoHFbdx+Lv/ADlZ7ApZ13rZ11kyNxaPg32j8iTFmZZc
U5HxdvFDgwwj2GvnL1FKxqIGJmIwiE1ZKRtruagvarT4Vd6C+BdbouebWnFtMvYJYT3b4f2VqrkK
7nUXMuZ9Jhn+8Lra3tsY2xurXgOHwKvctk4o8J3j+Tn87hEJicfln/0urJJJJWGopJJJJSkkkklK
SSSSUpJJJJT/AP/R9VSSSSUpJJJJSklC22umt1trgytgLnvdoABqXFeX/WX645fUc9hwbHUYmI8O
ojQue3/Dv/74xRZc0cYs6k7BvfD/AIbm53IYw9MYi55JfLH92P8Aek+ppLH+rP1gp65gC3RmTVDc
irwd++3/AIOxbCkjISAkDYLVzYZ4cksWQcM4HhkFJJJIsb5L9d8o5P1kytfbTtqb/ZaN3/T3rBWn
9ZST9YOoT/3Is/6pZix8hucj/WL6NyURDlcERsMcB/zEjEdirsR2FNZJtlisMVZhR2FFqzDZYup+
pd5GRkUdnsa8fFp2/wDf1yrCtz6q3en1ioTAsa5h+7cP+pUuA1kifGv8ZzPiOPj5bKP6pl/iev8A
7l7tDvs9Kiy39xpd9wlEVDrdvp9LvMwXANH9ohq0pmoyPYEvK4o8eSEf3pCP2vLsdOp5OpVhhVRj
kZrllvQTi22uRN6qtep70mAwSuegvcmL0Nz0l0YMXldJ0O02dNrnlhLPuK5d7l0f1bM9PP8Axjv+
+qflT+s+hYfiEf6OD2kHVSSSV9x1JJuNSuD+s/1jty8ptOFYWY+M7c2xpgusb/hP6jPzFHlyxxxs
69g2uS5LJzWTgh6QBcpn5Y9nvUlh/Vn6xM6tR6NxDc6ofpG8B4/0rP8Av63E6ExOIlE2CxZ8GTBk
liyDhlH+XFFSSSScxKSSSSU//9L1VJJJJSkklh/W7rv7G6U59ZjKyJrxx4GPdb/1pqEpCMTI7Bkw
YZ5ssMWMXPIeEPMfX76zG+13RsN/6Go/rTx+c8f4H+pX+f8A8IuKSLi4lziS4mSTySUlk5MhySMi
+g8lymPlcEcOP9H5pdZz/SmXQ6J1nJ6N1BmZQZA0tr7PYfpMP/fV7BgZ2N1DEqzMV2+m5u5p7jxa
7+U1eHrp/qT9Zf2VmfY8p8YOSdSeK7Do2z+o76Nim5XPwHhl8sv+aXN+O/DPvOP38Q/X4hqB/lcf
7v8Afj+g+opJk60XjHx/630up+sme0/nWbx8HgWf9+WOuu/xk4fpdXpywPbk1AE/yqztP/QdWuRW
Rmjw5JjxP4vofw3KMvJcvMf5uMT/AHoeiX/Oiu3lHYUBFYUxsyGjZYVYYVVYUdhRa0w2mFaPSLhV
1LFsPAtZPwJ2/wAVlscrFFmyxj/3XA/cZRiaIPYtTNDihKP7wMftfVVi/Wm3bgVs7vtH4BxWyCCA
RwdQuc+uFgAxa51l7o/zQtPmDWKXl+byPIR4uaxjxJ/xY8TiNciteqjXorXrNeglBtB6l6irB6fe
kxmCcvQ3PQy9RL0kiDJzl1X1dYW9Lrcfz3Od+Mf99XHOeu86fR9nwaKYgsY0H4x7v+krPJi5k9h+
bS+Knhwwj+9K/wDFH/oTYSSWb17rFfSsI2CDkWS2hh7u/fP8hiuykIgyOwcjFjnlnHHAXKRoByfr
f100MPTcV0W2D9YePzWn/B/1rP8AqFxDyi32vtsdZY4ve8lznHkk8lV3lZeXIckjI/Qdg9lyPKR5
bEMcdTvOX7818fMyMLJrysZ2y6o7mu/K138l35y9S6J1ejq/T2ZdXtd9G2vux4+kz/yK8leVq/VP
rp6R1VosdGJlEV3jsD/g7v7Dv+gn8tm4JUfllv8AxY/i3w77zgM4D9fiFx/rw/Sx/wDePqiSSS0n
jlJJJJKf/9P1VJJJJSl5L9dernqfXLWsdOPiTTUO3tP6V/8AasXpvWs37B0nLzO9NTi3+tG1n/TX
iZJJkmSeSqfOz0jDvqXo/wDizywM8vMEfJ+rh/el86k6ZOqL1QUkkkkl9J+oX1k+24w6VlOnJx2/
oXE6vrH5v9er/wA9rr14biZV+Hk1ZWO7ZdS4OY7zC9h6F1ijrPTq8yr2uPttr/cePps/8itDlc3F
Hgl80dvGLx3x/wCG+xl+8Yh+qyn1Af5PL/3uRxv8YnTzk9EblNEvw7A4/wBR/wCjf/0vTXmK9yzM
WvMxLsW0TXex1bvg4bV4nmYtmJlXYtoiyh7q3fFp2qHnYVIT/eFfUOj/AMWeZ48GTlyfVilxx/2e
T/0NCpsKgnGiqvQNhjkdjlUY5HY5JhnFtscih2hVVrkVrkWvKL61hO34dD/3q2H72hcx9c7P13Hb
4VE/e7/zFdD0Sz1OkYb/ABpZ/wBSFyn1zt/yuxv7tLfxLytDmD+oHjwvJ/Dcf9PkP3Pc/wC9cxr1
MPVVr1MPVB35QbQen3qsHp96SzgTl6iXoO9MXpJEG/0ug5nUaKOWl4c/+q33u/Iu/XLfUzE3G/Oc
P+Cr/wCrsP8A1C6laHKwrHf7xv6PP/FsvFzHANsQ4f8ADl6pI8i+rGoffc7ZXWC5zj4BebdZ6rb1
PNfkv0b9Gpn7rBwP/JrW+uHXPtF37Ox3foaT+mcPznj8z+rV/wBWuXc5Qc1m4jwD5Y7+MnV+Dch7
UPfyD9ZkHpH7mP8A76az3IL3J3uQXuVV3YRYvcgPKm9yC4yUGxAPrH1K6sep9Er9R26/F/Q2+J2j
9G/+1Wt9eb/4ts01dVvwyfbk1bgP5VZn/qHPXpC1OXnx4ok7j0n6PC/GeWHL87ljEVCf62Hlk/8A
Q+JSSSSmc5//1PVUkkklPOf4wLHM+rN4Bje+tp+G4O/76vKAvWfr7S636sZO0SazW8/APbP5V5MF
n85/OD+69f8A8WyPucq392V/4sF06ZJVXdC6SSSS5S3fqj9YHdF6kPUJ+x5EMyG+H7t3/W/+oWEk
jCRjISG4Ys+CGfFPFkFwmOE/x/wX3drmuaHNILXCQRwQV5x/jG6T9n6hX1KsRXljbZHaxg/7/X/1
C1f8X31h+0456Rkum6gTjE8urHNf/Wv/AD3/AFF0H1j6SOr9IvwwP0pG+k+Fjfcz/O+gtGdZ8Njf
cf3h0eN5Yz+F/EhHIfRfBOXSeDJ8uT/u3xpJO5rmOLHAtc0w4HkEJlmvbsmmEVrkBSa6EkEW2muR
muVRrkVr0mGUH1n6su3dAwT/AMEPwXJfXJ/+XXjwqr/IV1P1TM/V3BP/AAf/AH5y4/66Pj6w3DwZ
X/1Kv8x/ueH+D/0Xlfhkf+FOYHb3f/Srmh6kHqqHqYeqLvmDZD0+9Vt6fekt9tsb0zS572sYNz3k
NaPEnQBA3rf+pnTjl9ROW8TTiajwNh+h/mfTT4RM5CI6li5iccGGeWW0BfnL9GP+E9n0vCbgYFOK
[base64-encoded Word document attachment omitted]
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAC0k0FvokAYQO8m/gfCaTfGAgOINGpTARUtQi214qWhwwAiDMgMKjb735em29002b3sZo+Tb/Ly
vZeZwc05S5kjKskux0NWuOJZBmGYBzscDdlHd9LtswyhPg78NMdoyNaIsDejdmvglHmBSrpDhGkQ
mAzZmNLimuMIjFHmk6tmjJtJmJeZT5tjGXF5GO4g0nNYZQhTDvB8j4MVoXnWLX7i2Hfe9ZH+LTLI
4dt2ZO3WRbPuaPADXjNhRnfBkH3VZU3XZV7uAkPVugIvjLuqqCpdvs/zYAy0iXprfGOZ4u0yYBns
Z436c0aeC1N/VoAsSmKDPdLrtDgRWo6+gK/z+AlymxkCyfJwK3jhxhYVaPPerZGHHm92ZvNoWiU7
F+qBadYntI9L62ClJFrFdT+1fSnWnL1+wO3WUhQtlFlif+PcHaQsBAnUlgGaniTDBRfDy86xoaDA
9YQwXpWqUNzDy32pXly+lNb5bptsnUXPupzarXqVaRX2VSqpa1o+CiI/qzpwlWK6TLleInBbO0zG
sic61bi24z7SqZ2Dnmzpy9l6fj6vVxHcXkTQbh2eNGmiLzp3RzvaaCa4c4kvD7hfBQbcR+R/zC3+
PrfwqTfag4nlmOv9sjMzlOBxmoyV8AlO0sNDcFT5NN56yUKU9wssk4faSXuq0241vvbUN0UsvVSL
Ra7mRoIkBcQO92LMNRP+Fx3pQ4eEqR99khBE0Osrgqr+MST39ozfP9noOwAAAP//AwBQSwECLQAU
AAYACAAAACEAjIpokfYBAADiCgAAEwAAAAAAAAAAAAAAAAAAAAAAW0NvbnRlbnRfVHlwZXNdLnht
bFBLAQItABQABgAIAAAAIQCZVX4FBAEAAOECAAALAAAAAAAAAAAAAAAAAC8EAABfcmVscy8ucmVs
c1BLAQItABQABgAIAAAAIQBvAaUfeQEAAFIIAAAcAAAAAAAAAAAAAAAAAGQHAAB3b3JkL19yZWxz
L2RvY3VtZW50LnhtbC5yZWxzUEsBAi0AFAAGAAgAAAAhABW0GtzqEwAAn3IAABEAAAAAAAAAAAAA
AAAAHwoAAHdvcmQvZG9jdW1lbnQueG1sUEsBAi0AFAAGAAgAAAAhANhDWGFdAQAAEgMAABAAAAAA
AAAAAAAAAAAAOB4AAHdvcmQvZm9vdGVyMy54bWxQSwECLQAUAAYACAAAACEArCcbmpgDAABOCgAA
EAAAAAAAAAAAAAAAAADDHwAAd29yZC9mb290ZXIyLnhtbFBLAQItABQABgAIAAAAIQAdzijgRwUA
ACkTAAAQAAAAAAAAAAAAAAAAAIkjAAB3b3JkL2hlYWRlcjIueG1sUEsBAi0AFAAGAAgAAAAhAFAJ
OGpdAQAAEgMAABAAAAAAAAAAAAAAAAAA/igAAHdvcmQvaGVhZGVyMS54bWxQSwECLQAUAAYACAAA
ACEAk90DCGoBAACzAwAAEQAAAAAAAAAAAAAAAACJKgAAd29yZC9lbmRub3Rlcy54bWxQSwECLQAU
AAYACAAAACEAYiryO2oBAAC5AwAAEgAAAAAAAAAAAAAAAAAiLAAAd29yZC9mb290bm90ZXMueG1s
UEsBAi0AFAAGAAgAAAAhAFhgsxu6AAAAIgEAABsAAAAAAAAAAAAAAAAAvC0AAHdvcmQvX3JlbHMv
aGVhZGVyMi54bWwucmVsc1BLAQItABQABgAIAAAAIQBQCThqXQEAABIDAAAQAAAAAAAAAAAAAAAA
AK8uAAB3b3JkL2hlYWRlcjMueG1sUEsBAi0AFAAGAAgAAAAhANhDWGFdAQAAEgMAABAAAAAAAAAA
AAAAAAAAOjAAAHdvcmQvZm9vdGVyMS54bWxQSwECLQAUAAYACAAAACEAdCzDX58GAABRGwAAFQAA
AAAAAAAAAAAAAADFMQAAd29yZC90aGVtZS90aGVtZTEueG1sUEsBAi0ACgAAAAAAAAAhAMcZ/uon
jQAAJ40AABYAAAAAAAAAAAAAAAAAlzgAAHdvcmQvbWVkaWEvaW1hZ2UxLmpwZWdQSwECLQAUAAYA
CAAAACEAViimTPgJAAAuJAAAEQAAAAAAAAAAAAAAAADyxQAAd29yZC9zZXR0aW5ncy54bWxQSwEC
LQAUAAYACAAAACEAFcphHzYNAAAKTgAADwAAAAAAAAAAAAAAAAAZ0AAAd29yZC9zdHlsZXMueG1s
UEsBAi0AFAAGAAgAAAAhAIpsqWHgAAAAVQEAABgAAAAAAAAAAAAAAAAAfN0AAGN1c3RvbVhtbC9p
dGVtUHJvcHMxLnhtbFBLAQItABQABgAIAAAAIQBV44HkGw8AAKrDAAASAAAAAAAAAAAAAAAAALre
AAB3b3JkL251bWJlcmluZy54bWxQSwECLQAUAAYACAAAACEAdD85esIAAAAoAQAAHgAAAAAAAAAA
AAAAAAAF7gAAY3VzdG9tWG1sL19yZWxzL2l0ZW0xLnhtbC5yZWxzUEsBAi0AFAAGAAgAAAAhAKnI
XKqMAAAA2gAAABMAAAAAAAAAAAAAAAAAC/AAAGN1c3RvbVhtbC9pdGVtMS54bWxQSwECLQAUAAYA
CAAAACEAWI2HoZUBAADrAgAAEAAAAAAAAAAAAAAAAADw8AAAZG9jUHJvcHMvYXBwLnhtbFBLAQIt
ABQABgAIAAAAIQCeI6h5hwEAAPECAAARAAAAAAAAAAAAAAAAALvzAABkb2NQcm9wcy9jb3JlLnht
bFBLAQItABQABgAIAAAAIQC+Cgz2uQIAAJ4LAAASAAAAAAAAAAAAAAAAAHn2AAB3b3JkL2ZvbnRU
YWJsZS54bWxQSwECLQAUAAYACAAAACEAKIdxpc8AAAAfAQAAFAAAAAAAAAAAAAAAAABi+QAAd29y
ZC93ZWJTZXR0aW5ncy54bWxQSwECLQAUAAYACAAAACEA09m9HDECAACqAwAAEwAAAAAAAAAAAAAA
AABj+gAAZG9jUHJvcHMvY3VzdG9tLnhtbFBLBQYAAAAAGgAaAJUGAADN/QAAAAA=

--Boundary_(ID_kA3u8lsle6xzBQcNQN6vjA)--

From cdl@asgaard.org  Wed Jan 18 17:00:49 2012
Return-Path: <cdl@asgaard.org>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id CBA6011E80D7 for <dc@ietfa.amsl.com>; Wed, 18 Jan 2012 17:00:49 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -6.599
X-Spam-Level: 
X-Spam-Status: No, score=-6.599 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 72CJmHMZ6dHg for <dc@ietfa.amsl.com>; Wed, 18 Jan 2012 17:00:48 -0800 (PST)
Received: from asgaard.org (odin.asgaard.org [204.29.151.68]) by ietfa.amsl.com (Postfix) with ESMTP id D8C1011E80D1 for <dc@ietf.org>; Wed, 18 Jan 2012 17:00:48 -0800 (PST)
Received: from localhost (localhost [127.0.0.1]) by asgaard.org (Postfix) with ESMTP id 5AEB2AE6168; Thu, 19 Jan 2012 01:00:48 +0000 (UTC)
X-Virus-Scanned: amavisd-new at asgaard.org
Received: from asgaard.org ([127.0.0.1]) by localhost (odin.asgaard.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id Qq8bQ9eWiTHJ; Thu, 19 Jan 2012 01:00:47 +0000 (UTC)
Received: from fenrir.bigswitch.com (74-93-4-129-sfba.hfc.comcastbusiness.net [74.93.4.129]) by asgaard.org (Postfix) with ESMTPSA id 53711AE615A; Thu, 19 Jan 2012 01:00:47 +0000 (UTC)
Mime-Version: 1.0 (Apple Message framework v1251.1)
Content-Type: multipart/signed; boundary="Apple-Mail=_A059F4C4-DF02-46D5-A108-473D06831E15"; protocol="application/pgp-signature"; micalg=pgp-sha1
From: Christopher LILJENSTOLPE <cdl@asgaard.org>
In-Reply-To: <CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com>
Date: Wed, 18 Jan 2012 17:00:44 -0800
Message-Id: <F6F7A4AA-E0FA-4EF5-8BAF-2941F7F89C93@asgaard.org>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com> <CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com> <201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com> <CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com>
To: Bhumip Khasnabish <vumip1@gmail.com>
X-Mailer: Apple Mail (2.1251.1)
Cc: Thomas Narten <narten@us.ibm.com>, dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 19 Jan 2012 01:00:49 -0000

--Apple-Mail=_A059F4C4-DF02-46D5-A108-473D06831E15
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=utf-8

Greetings Bhumip,

On 17Jan2012, at 22.33, Bhumip Khasnabish wrote:

> Tom,
> 
> Thanks.
> 
> Yes, seamless migration of VM and VNE can be problematic in both intra- and
> inter-data-center environments, especially in the multi-hypervisor case.

Agreed, but it's more than just a network problem, is it not?
ESPECIALLY in a multi-hypervisor environment.  So, the big question is,
are you interested in just addressing the network portion, or the larger
problem (including containers, storage concurrency, etc)?  If the latter,
do we have the experience and remit to work on that space?

	Chris

> 
> It may be very helpful to bring one or more of these
> proprietary VM migration approaches to IETF for consideration
> for standardization, if that is appropriate.
> Sure, we'll update the draft to articulate these requirements.
> 
> Best.
> 
> Bhumip
> 
> 
> On Tue, Jan 17, 2012 at 10:40 AM, Thomas Narten <narten@us.ibm.com> wrote:
> 
>> Bhumip,
>> 
>> I skimmed this document and am having trouble figuring out what it is
>> intended to do.
>> 
>> The draft name itself has "problem" in it, but there is no single (or
>> small set of) succinct problems listed. It's all very high level and
>> hand wavy. I need help making the connection to an IETF action that
>> could come out of this document.
>> 
>> For example, it talks about VM migration.
>> 
>> Is VM Migration a "problem" today? There are proprietary approaches
>> that the market seems to like OK.
>> 
>> What is wrong with the current approaches? What is "broken" that needs
>> fixing? Why should the IETF get involved in this space? What value
>> would the IETF bring?
>> 
>> Do you want to be able to do VM migration from one vendor's hypervisor
>> to another vendor's?  If so, please just say so. Then we can see
>> whether others here think that is an area the IETF (or some other SDO)
>> should get involved in.
>> 
>> Thomas
>> 
>> 
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc

-- 
李柯睿
Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf


--Apple-Mail=_A059F4C4-DF02-46D5-A108-473D06831E15
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
	filename=signature.asc
Content-Type: application/pgp-signature;
	name=signature.asc
Content-Description: Message signed with OpenPGP using GPGMail

-----BEGIN PGP SIGNATURE-----
Version: GnuPG/MacGPG2 v2.0.17 (Darwin)

iQEcBAEBAgAGBQJPF2s9AAoJEGmx2Mt/+Iw/absH/joBvOyUdpZw08uDCB8k2rl0
MomAUMESOYYIVsBPosl+F51098X3PnoP6MQ4JMuM6bZYjqFThy4UgBNhUZKwQQTy
k8DIxd/huGyFgjj4fPyUK6Mk4VJSvvuXXQre17mHaNUGxSxXtKC3nmcKqMps16IA
4blDRfGpre8QJcTPcgK37Bl+WaB33d8OhD60TGR0qfZS80Nqz42DXOr5sk5VAGzb
PkWJew8fzXuC1imjDyTBfjB5KUIwrxt+VC5yxVicb12qwI9fSyI3tu9Czeo2Wdtw
A9K9IE5TcMHscu9L8iU0+NlEBLjEdnxmTK2nb+JMpu1vp3jQFZJRCfwWoe5WlLg=
=ajTg
-----END PGP SIGNATURE-----

--Apple-Mail=_A059F4C4-DF02-46D5-A108-473D06831E15--

From narten@us.ibm.com  Thu Jan 19 06:19:57 2012
Return-Path: <narten@us.ibm.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 1E55E21F862A for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 06:19:57 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -108.999
X-Spam-Level: 
X-Spam-Status: No, score=-108.999 tagged_above=-999 required=5 tests=[AWL=1.600, BAYES_00=-2.599, RCVD_IN_DNSWL_HI=-8, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id pRMwRaN6C66u for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 06:19:56 -0800 (PST)
Received: from e39.co.us.ibm.com (e39.co.us.ibm.com [32.97.110.160]) by ietfa.amsl.com (Postfix) with ESMTP id 8F06721F8629 for <dc@ietf.org>; Thu, 19 Jan 2012 06:19:56 -0800 (PST)
Received: from /spool/local by e39.co.us.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for <dc@ietf.org> from <narten@us.ibm.com>; Thu, 19 Jan 2012 07:19:54 -0700
Received: from d03dlp02.boulder.ibm.com (9.17.202.178) by e39.co.us.ibm.com (192.168.1.139) with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted;  Thu, 19 Jan 2012 07:19:53 -0700
Received: from d03relay02.boulder.ibm.com (d03relay02.boulder.ibm.com [9.17.195.227]) by d03dlp02.boulder.ibm.com (Postfix) with ESMTP id 313C23E4005A for <dc@ietf.org>; Thu, 19 Jan 2012 07:19:52 -0700 (MST)
Received: from d03av04.boulder.ibm.com (d03av04.boulder.ibm.com [9.17.195.170]) by d03relay02.boulder.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id q0JEJj5V151738 for <dc@ietf.org>; Thu, 19 Jan 2012 07:19:47 -0700
Received: from d03av04.boulder.ibm.com (loopback [127.0.0.1]) by d03av04.boulder.ibm.com (8.14.4/8.13.1/NCO v10.0 AVout) with ESMTP id q0JEJec0021179 for <dc@ietf.org>; Thu, 19 Jan 2012 07:19:40 -0700
Received: from cichlid.raleigh.ibm.com (sig-9-76-45-53.mts.ibm.com [9.76.45.53]) by d03av04.boulder.ibm.com (8.14.4/8.13.1/NCO v10.0 AVin) with ESMTP id q0JEJbJK020524 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Thu, 19 Jan 2012 07:19:39 -0700
Received: from cichlid.raleigh.ibm.com (localhost [127.0.0.1]) by cichlid.raleigh.ibm.com (8.14.5/8.12.5) with ESMTP id q0JEJTLF010649; Thu, 19 Jan 2012 09:19:31 -0500
Message-Id: <201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com>
To: Bhumip Khasnabish <vumip1@gmail.com>
In-reply-to: <CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com> <CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com> <201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com> <CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com>
Comments: In-reply-to Bhumip Khasnabish <vumip1@gmail.com> message dated "Wed, 18 Jan 2012 01:33:50 -0500."
Date: Thu, 19 Jan 2012 09:19:28 -0500
From: Thomas Narten <narten@us.ibm.com>
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 12011914-4242-0000-0000-0000008D5F51
Cc: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 19 Jan 2012 14:19:57 -0000

> It may be very helpful to bring one or more of these
> proprietary VM migration approaches to IETF for consideration
> for standardization, if that is appropriate.

Well, let's start with this as a premise. Sure, in the utopia of open
standards everywhere, having all this stuff standardized would be
great.

But let's be realistic. Are any of the vendors/implementors of these
systems coming to the IETF saying they want or need a standard? (If
so, I must have missed this.)

If the market heavyweights are not indicating that they will
participate and/or implement such a standard, would anything the IETF
does in this space be relevant? (Based on experience, I fear the
answer is a very clear "no".)

Finally, do we see operators who use the existing systems coming to
the IETF saying they want open standards here? So far, the silence has
been deafening...

> Sure, we'll update the draft to articulate these requirements.

Frankly, until and unless the questions above can be answered in a
more positive manner, IMO there is little point in spending any cycles
in this area, other than getting positive answers to the questions.

Or am I missing something?

Thomas


From sblake@extremenetworks.com  Thu Jan 19 08:07:58 2012
Return-Path: <sblake@extremenetworks.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id E7F7321F8613 for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 08:07:58 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.599
X-Spam-Level: 
X-Spam-Status: No, score=-2.599 tagged_above=-999 required=5 tests=[BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id Prngc-M6beJW for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 08:07:58 -0800 (PST)
Received: from ussc-casht-p1.extremenetworks.com (ussc-casht-p1.extremenetworks.com [207.179.9.62]) by ietfa.amsl.com (Postfix) with ESMTP id 2286A21F8611 for <dc@ietf.org>; Thu, 19 Jan 2012 08:07:58 -0800 (PST)
Received: from [10.5.2.53] (10.5.2.53) by ussc-casht-p1.corp.extremenetworks.com (10.0.4.73) with Microsoft SMTP Server id 8.3.83.0; Thu, 19 Jan 2012 08:07:57 -0800
From: Steven Blake <sblake@extremenetworks.com>
To: Thomas Narten <narten@us.ibm.com>
Date: Thu, 19 Jan 2012 11:07:56 -0500
In-Reply-To: <201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com> <CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com> <201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com> <CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com> <201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com>
Organization: Extreme Networks
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.0.3 (3.0.3-1.fc15) 
Content-Transfer-Encoding: 7bit
Message-ID: <1326989277.2513.4.camel@ecliptic.extremenetworks.com>
MIME-Version: 1.0
Cc: "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 19 Jan 2012 16:07:59 -0000

On Thu, 2012-01-19 at 06:19 -0800, Thomas Narten wrote:

> > It may be very helpful to bring one or more of these
> > proprietary VM migration approaches to IETF for consideration
> > for standardization, if that is appropriate.
> 
> Well, let's start with this as a premise. Sure, in the utopia of open
> standards everywhere, having all this stuff standardized would be
> great.
> 
> But let's be realistic. Are any of the vendors/implementors of these
> systems coming to the IETF saying they want or need a standard? (If
> so, I must have missed this.)
> 
> If the market heavyweights are not indicating that they will
> participate and/or implement such a standard, would anything the IETF
> does in this space be relevant? (Based on experience, I fear the
> answer is a very clear "no".)
> 
> Finally, do we see operators who use the existing systems coming to
> the IETF saying they want open standards here? So far, the silence has
> been deafening...
> 
> > Sure, we'll update the draft to articulate these requirements.
> 
> Frankly, until and unless the questions above can be answered in a
> more positive manner, IMO there is little point in spending any cycles
> in this area, other than getting positive answers to the questions.
> 
> Or am I missing something?

Several system vendors (myself included) stood up in Taipei and said
"one encapsulation, please".  If IETF can facilitate industry
convergence on a small set of NVO3 encapsulations (preferably one), that
would be a big win for Ethernet switch vendors.


Regards,

/////////////////////////////////////////////
Steven Blake       sblake@extremenetworks.com
Extreme Networks              +1 919-884-3211


From adalela@cisco.com  Thu Jan 19 08:23:21 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 793DD21F84DD for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 08:23:21 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.425
X-Spam-Level: 
X-Spam-Status: No, score=-2.425 tagged_above=-999 required=5 tests=[AWL=0.174,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id l4WKHcW3cJIZ for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 08:23:20 -0800 (PST)
Received: from bgl-iport-2.cisco.com (bgl-iport-2.cisco.com [72.163.197.26]) by ietfa.amsl.com (Postfix) with ESMTP id CBDF021F8619 for <dc@ietf.org>; Thu, 19 Jan 2012 08:23:19 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=2376; q=dns/txt; s=iport; t=1326990200; x=1328199800; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=Wt6FFeLRV+3oWo+iGWD9fE+83/UgYyIIaETvug436Xw=; b=LsrK6JEcp5PH55Tsr7us0sZfrfm7dqKDRdyZUEWfzLFfwpyYOgATfiFa n9bXblJKhZctl/gpQRYncVRhRFLvALxhj8BmqRdK6no6DfLhdiNgISqIo JJQ0pqp7NqUVn+fsoWfnQQnENsm5ikzmxLuomlWN84hSZvxflfWOW4CLC M=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AqAEAERDGE9Io8UY/2dsb2JhbABBA6xfggWBcgEBAQMBAQEBDwEdCjQLDAQCAQgOAwQBAQsGFwEGASYfCQgBAQQBCggIFgSHWgiaNAGeWASJNzYBBVABBAcBCwECAQEIAQEBAQJJCkqBaFcWAQEBAgmCPmMEiDmfMA
X-IronPort-AV: E=Sophos;i="4.71,537,1320624000";  d="scan'208";a="3767207"
Received: from vla196-nat.cisco.com (HELO bgl-core-4.cisco.com) ([72.163.197.24]) by bgl-iport-2.cisco.com with ESMTP; 19 Jan 2012 16:23:18 +0000
Received: from xbh-bgl-411.cisco.com (xbh-bgl-411.cisco.com [72.163.129.201]) by bgl-core-4.cisco.com (8.14.3/8.14.3) with ESMTP id q0JGNIXu003575; Thu, 19 Jan 2012 16:23:18 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-411.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Thu, 19 Jan 2012 21:53:18 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Thu, 19 Jan 2012 21:53:16 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102CB2291@XMB-BGL-416.cisco.com>
In-Reply-To: <1326989277.2513.4.camel@ecliptic.extremenetworks.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] draft-khasnabish-vmmi-problems-00.txt
Thread-Index: AczWxJOZ00cNuYvVSF6Xk7/uMykD9wAAL+Wg
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com><CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com><201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com><CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com><201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com> <1326989277.2513.4.camel@ecliptic.extremenetworks.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Steven Blake" <sblake@extremenetworks.com>, "Thomas Narten" <narten@us.ibm.com>
X-OriginalArrivalTime: 19 Jan 2012 16:23:18.0121 (UTC) FILETIME=[A709D590:01CCD6C6]
Cc: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 19 Jan 2012 16:23:21 -0000

>> "one encapsulation, please".

How do you propose we reconcile the present scenario:

- Hypervisor based encapsulations
- Network based encapsulations
- L2 in L2 encapsulation
- L2 in L3 encapsulation
- L3 in L3 encapsulation

Thanks, Ashish

-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
Steven Blake
Sent: Thursday, January 19, 2012 9:38 PM
To: Thomas Narten
Cc: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt

On Thu, 2012-01-19 at 06:19 -0800, Thomas Narten wrote:

> > It may be very helpful to bring one or more of these
> > proprietary VM migration approaches to IETF for consideration
> > for standardization, if that is appropriate.
> 
> Well, let's start with this as a premise. Sure, in the utopia of open
> standards everywhere, having all this stuff standardized would be
> great.
> 
> But let's be realistic. Are any of the vendors/implementors of these
> systems coming to the IETF saying they want or need a standard? (If
> so, I must have missed this.)
> 
> If the market heavyweights are not indicating that they will
> participate and/or implement such a standard, would anything the IETF
> does in this space be relevant? (Based on experience, I fear the
> answer is a very clear "no".)
> 
> Finally, do we see operators who use the existing systems coming to
> the IETF saying they want open standards here? So far, the silence has
> been deafening...
> 
> > Sure, we'll update the draft to articulate these requirements.
> 
> Frankly, until and unless the questions above can be answered in a
> more positive manner, IMO there is little point in spending any cycles
> in this area, other than getting positive answers to the questions.
> 
> Or am I missing something?

Several system vendors (myself included) stood up in Taipei and said
"one encapsulation, please".  If IETF can facilitate industry
convergence on a small set of NVO3 encapsulations (preferably one), that
would be a big win for Ethernet switch vendors.


Regards,

/////////////////////////////////////////////
Steven Blake       sblake@extremenetworks.com
Extreme Networks              +1 919-884-3211

_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From sblake@extremenetworks.com  Thu Jan 19 08:54:55 2012
Return-Path: <sblake@extremenetworks.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 8ADDC21F86A3 for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 08:54:55 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.599
X-Spam-Level: 
X-Spam-Status: No, score=-2.599 tagged_above=-999 required=5 tests=[BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id JxvXECPtyheY for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 08:54:55 -0800 (PST)
Received: from ussc-casht-p1.extremenetworks.com (ussc-casht-p2.extremenetworks.com [207.179.9.62]) by ietfa.amsl.com (Postfix) with ESMTP id 1062321F86A2 for <dc@ietf.org>; Thu, 19 Jan 2012 08:54:55 -0800 (PST)
Received: from [10.5.2.53] (10.5.2.53) by ussc-casht-p1.corp.extremenetworks.com (10.0.4.73) with Microsoft SMTP Server id 8.3.83.0; Thu, 19 Jan 2012 08:54:54 -0800
From: Steven Blake <sblake@extremenetworks.com>
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
Date: Thu, 19 Jan 2012 11:54:53 -0500
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102CB2291@XMB-BGL-416.cisco.com>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com> <CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com> <201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com> <CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com> <201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com> <1326989277.2513.4.camel@ecliptic.extremenetworks.com> <618BE8B40039924EB9AED233D4A09C5102CB2291@XMB-BGL-416.cisco.com>
Organization: Extreme Networks
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.0.3 (3.0.3-1.fc15) 
Content-Transfer-Encoding: 7bit
Message-ID: <1326992094.2513.10.camel@ecliptic.extremenetworks.com>
MIME-Version: 1.0
Cc: Thomas Narten <narten@us.ibm.com>, "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 19 Jan 2012 16:54:55 -0000

On Thu, 2012-01-19 at 08:23 -0800, Ashish Dalela (adalela) wrote:

> >> "one encapsulation, please".  
> 
> How do you propose we reconcile the present scenario:
> 
> - Hypervisor based encapsulations
> - Network based encapsulations

I don't see any reason why these need to differ.

> - L2 in L2 encapsulation

IEEE owns this space.

> - L2 in L3 encapsulation

This is NVO3.  I would prefer to have to support the minimum number of
NVO3 encapsulations in HW.

> - L3 in L3 encapsulation

There are already plenty to choose from.


Regards,

/////////////////////////////////////////////
Steven Blake       sblake@extremenetworks.com
Extreme Networks              +1 919-884-3211


From narten@us.ibm.com  Thu Jan 19 09:49:03 2012
Return-Path: <narten@us.ibm.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id C10FB21F854B for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 09:49:03 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -109.266
X-Spam-Level: 
X-Spam-Status: No, score=-109.266 tagged_above=-999 required=5 tests=[AWL=1.333, BAYES_00=-2.599, RCVD_IN_DNSWL_HI=-8, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id Cb0fxpc8wccU for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 09:49:03 -0800 (PST)
Received: from e35.co.us.ibm.com (e35.co.us.ibm.com [32.97.110.153]) by ietfa.amsl.com (Postfix) with ESMTP id 4A54C21F8548 for <dc@ietf.org>; Thu, 19 Jan 2012 09:48:58 -0800 (PST)
Received: from /spool/local by e35.co.us.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for <dc@ietf.org> from <narten@us.ibm.com>; Thu, 19 Jan 2012 10:48:57 -0700
Received: from d03dlp01.boulder.ibm.com (9.17.202.177) by e35.co.us.ibm.com (192.168.1.135) with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted;  Thu, 19 Jan 2012 10:48:21 -0700
Received: from d01relay04.pok.ibm.com (d01relay04.pok.ibm.com [9.56.227.236]) by d03dlp01.boulder.ibm.com (Postfix) with ESMTP id CE4A51FF004C for <dc@ietf.org>; Thu, 19 Jan 2012 10:48:18 -0700 (MST)
Received: from d03av04.boulder.ibm.com (d03av04.boulder.ibm.com [9.17.195.170]) by d01relay04.pok.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id q0JHmIZ5232262 for <dc@ietf.org>; Thu, 19 Jan 2012 12:48:19 -0500
Received: from d03av04.boulder.ibm.com (loopback [127.0.0.1]) by d03av04.boulder.ibm.com (8.14.4/8.13.1/NCO v10.0 AVout) with ESMTP id q0JHlXZe025595 for <dc@ietf.org>; Thu, 19 Jan 2012 10:47:33 -0700
Received: from cichlid.raleigh.ibm.com (sig-9-76-45-53.mts.ibm.com [9.76.45.53]) by d03av04.boulder.ibm.com (8.14.4/8.13.1/NCO v10.0 AVin) with ESMTP id q0JHlVAY025432 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Thu, 19 Jan 2012 10:47:32 -0700
Received: from cichlid.raleigh.ibm.com (localhost [127.0.0.1]) by cichlid.raleigh.ibm.com (8.14.5/8.12.5) with ESMTP id q0JHlS5J015128; Thu, 19 Jan 2012 12:47:28 -0500
Message-Id: <201201191747.q0JHlS5J015128@cichlid.raleigh.ibm.com>
To: Steven Blake <sblake@extremenetworks.com>
In-reply-to: <1326989277.2513.4.camel@ecliptic.extremenetworks.com>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com> <CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com> <201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com> <CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com> <201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com> <1326989277.2513.4.camel@ecliptic.extremenetworks.com>
Comments: In-reply-to Steven Blake <sblake@extremenetworks.com> message dated "Thu, 19 Jan 2012 11:07:56 -0500."
Date: Thu, 19 Jan 2012 12:47:27 -0500
From: Thomas Narten <narten@us.ibm.com>
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 12011917-6148-0000-0000-000002C21805
Cc: "dc@ietf.org" <dc@ietf.org>
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Thu, 19 Jan 2012 17:49:03 -0000

Steven,

> Several system vendors (myself included) stood up in Taipei and said
> "one encapsulation, please".  If IETF can facilitate industry
> convergence on a small set of NVO3 encapsulations (preferably one), that
> would be a big win for Ethernet switch vendors.

I agree completely.

But my questions were about the apparent lack of interest from
operators/implementers/market players regarding Bhumip's draft and the
apparent desire to have some sort of standards work related to the
general VM migration problem.

Is there such interest?

Thomas


From adalela@cisco.com  Thu Jan 19 17:07:40 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 9531C21F8594 for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 17:07:40 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.429
X-Spam-Level: 
X-Spam-Status: No, score=-2.429 tagged_above=-999 required=5 tests=[AWL=0.170,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id UnlCdaKDdOqo for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 17:07:39 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id 65FE221F858B for <dc@ietf.org>; Thu, 19 Jan 2012 17:07:39 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=2456; q=dns/txt; s=iport; t=1327021659; x=1328231259; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=bX5OJIbWHxLodZrcot+ytHsZpeLMkzkkzPnwOAw9wQ8=; b=kOiHJYVaV66Pxvy55smN2txUcjk6sllzxR1zt3M+ZV7EXa4Ihik8sbUk XuYvVapzwDS+NePbbG2ZFDgzIiG1J/bYusPY3Mda7N/8i4pDvbRlE/AsB 7cc8i5CBp4YTYU1e8AgZ8PWdQjdzoRB/qT4cRm+JoZGHO1jMEjl+es77X o=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AqIEANe9GE9Io8UY/2dsb2JhbABAA4UEp3cUgW+BcgEBAQMBEgEQDQRFBQcEAgEIDgMEAQEDAgYGFwECAgIBAUQJCAEBBAsICBMHh1qaWgGMYpFbgS+ICgEBMgEFUAEEBwELAQIBAQUDAQEBAQJJCkqBaDlCggszYwSIOZ8w
X-IronPort-AV: E=Sophos;i="4.71,539,1320624000";  d="scan'208";a="3789162"
Received: from vla196-nat.cisco.com (HELO bgl-core-2.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 20 Jan 2012 01:07:37 +0000
Received: from xbh-bgl-411.cisco.com (xbh-bgl-411.cisco.com [72.163.129.201]) by bgl-core-2.cisco.com (8.14.3/8.14.3) with ESMTP id q0K17b01009948; Fri, 20 Jan 2012 01:07:37 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-411.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Fri, 20 Jan 2012 06:37:37 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Date: Fri, 20 Jan 2012 06:37:33 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102CB22FF@XMB-BGL-416.cisco.com>
In-Reply-To: <1326992094.2513.10.camel@ecliptic.extremenetworks.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] draft-khasnabish-vmmi-problems-00.txt
Thread-Index: AczWyxk1kFC7h3sUQF+SIAm9U/X+jwAQ85EA
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com><CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com><201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com><CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com><201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com> <1326989277.2513.4.camel@ecliptic.extremenetworks.com> <618BE8B40039924EB9AED233D4A09C5102CB2291@XMB-BGL-416.cisco.com> <1326992094.2513.10.camel@ecliptic.extremenetworks.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Steven Blake" <sblake@extremenetworks.com>
X-OriginalArrivalTime: 20 Jan 2012 01:07:37.0707 (UTC) FILETIME=[E669B3B0:01CCD70F]
Cc: Thomas Narten <narten@us.ibm.com>, dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 20 Jan 2012 01:07:40 -0000

> - Hypervisor based encapsulations
> - Network based encapsulations

> I don't see any reason why these need to differ.

The fact is that they do differ. A hypervisor based solution will not
run a routing protocol (control plane), or we haven't seen that yet.
That means many types of information will be pulled through
configuration / mgmt control from a hypervisor controller rather than
the network control plane.

> - L2 in L2 encapsulation

> IEEE owns this space.

Then there is TRILL in IETF.

> - L2 in L3 encapsulation

> This is NVO3.  I would prefer to have to support the minimum number of
> NVO3 encapsulations in HW.

Minimum is not quantified :-) I thought you said you wanted only one.

> - L3 in L3 encapsulation

> There are already plenty to choose from.

We agree.

Thanks, Ashish

-----Original Message-----
From: Steven Blake [mailto:sblake@extremenetworks.com]
Sent: Thursday, January 19, 2012 10:25 PM
To: Ashish Dalela (adalela)
Cc: Thomas Narten; dc@ietf.org
Subject: RE: [dc] draft-khasnabish-vmmi-problems-00.txt

On Thu, 2012-01-19 at 08:23 -0800, Ashish Dalela (adalela) wrote:

> >> "one encapsulation, please".
>
> How do you propose we reconcile the present scenario:
>
> - Hypervisor based encapsulations
> - Network based encapsulations

I don't see any reason why these need to differ.

> - L2 in L2 encapsulation

IEEE owns this space.

> - L2 in L3 encapsulation

This is NVO3.  I would prefer to have to support the minimum number of
NVO3 encapsulations in HW.

> - L3 in L3 encapsulation

There are already plenty to choose from.


Regards,

/////////////////////////////////////////////
Steven Blake       sblake@extremenetworks.com
Extreme Networks              +1 919-884-3211

From adalela@cisco.com  Thu Jan 19 17:20:27 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 4991521F85E7 for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 17:20:27 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.433
X-Spam-Level: 
X-Spam-Status: No, score=-2.433 tagged_above=-999 required=5 tests=[AWL=0.167,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id HmeJ4CK5Cq7V for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 17:20:26 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id 1BBC821F85E4 for <dc@ietf.org>; Thu, 19 Jan 2012 17:20:25 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=2278; q=dns/txt; s=iport; t=1327022426; x=1328232026; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=1vPpnep2hsJjOI+cfVcUaMkJKaGbbZYBTGxu4UcRICg=; b=ghHCTcpJZkoUWGCMFVJelE/vyn4aEtJkbYY7SbznHra9SnC3vyLqKjKc 8HXGyytZsK80u2V9EGmQt3pdrKh0zX07yaJ8bVvUKhbUy5K3puhYiN+eH k6NZfP1l/eN7SQ6eFESLABCOl/6FpMg1z32+jjm7BEnTRlcbdFuLE4SrP o=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AqAEAIDAGE9Io8UY/2dsb2JhbABDrHuCA4FyAQEBAwEBAQEPAR0KNAsFBwQCAQgRBAEBCwYXAQYBJh8JCAEBBAEJAQgIFgSHWgiaTwGeOgSJNzYBBVABBAcBCwECAQEIAQEBAQJJCkqBaDkeFgEBAQKCR2MEiDmfMA
X-IronPort-AV: E=Sophos;i="4.71,539,1320624000";  d="scan'208";a="3789957"
Received: from vla196-nat.cisco.com (HELO bgl-core-2.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 20 Jan 2012 01:20:24 +0000
Received: from xbh-bgl-412.cisco.com (xbh-bgl-412.cisco.com [72.163.129.202]) by bgl-core-2.cisco.com (8.14.3/8.14.3) with ESMTP id q0K1KOm3011509; Fri, 20 Jan 2012 01:20:24 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-412.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Fri, 20 Jan 2012 06:50:24 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Fri, 20 Jan 2012 06:50:22 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102CB2304@XMB-BGL-416.cisco.com>
In-Reply-To: <201201191747.q0JHlS5J015128@cichlid.raleigh.ibm.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] draft-khasnabish-vmmi-problems-00.txt
Thread-Index: AczW0qaLHjfaIrIARQ+n0SUnPfoMjQAPVYSw
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com><CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com><201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com><CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com><201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com><1326989277.2513.4.camel@ecliptic.extremenetworks.com> <201201191747.q0JHlS5J015128@cichlid.raleigh.ibm.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Thomas Narten" <narten@us.ibm.com>, "Steven Blake" <sblake@extremenetworks.com>
X-OriginalArrivalTime: 20 Jan 2012 01:20:24.0613 (UTC) FILETIME=[AF864550:01CCD711]
Cc: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 20 Jan 2012 01:20:27 -0000

I think it is fair to say that there is a difference between mobility
and portability. Mobility is live migration, while portability means
specifying a VM's properties, deleting it in one location, and
recreating it in another. The new location can be another hypervisor. In
many cases, you don't need mobility, just portability. E.g., in a
disaster recovery situation, you aren't going to get mobility anyway.

DMTF has specified a standard called OVF (Open Virtualization Format)
that addresses the "description" of the VM. This format is supported by
various hypervisor vendors. So, some level of VM migration
standardization has already happened (albeit portability and not
mobility).

The questions are:

- Do we need a "control plane" to transfer VM state from point A to B -
the mobility problem?
- Do we need a "control plane" to transfer OVF specification from point
A to B - the portability problem?

The problem is relevant in the inter-datacenter, public-private, or
inter-cloud spaces, where there will be more than one hypervisor
controller by definition. Are we hitting the live migration issue today?
Maybe not. Is it conceivable that we will hit this issue? I think so.

However, the question has to be asked of the providers/operators, not
the vendors.

Thanks, Ashish


-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
Thomas Narten
Sent: Thursday, January 19, 2012 11:17 PM
To: Steven Blake
Cc: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt

Steven,

> Several system vendors (myself included) stood up in Taipei and said
> "one encapsulation, please".  If IETF can facilitate industry
> convergence on a small set of NVO3 encapsulations (preferably one), that
> would be a big win for Ethernet switch vendors.

I agree completely.

But my questions were about the apparent lack of interest from
operators/implementers/market players regarding Bhumip's draft and the
apparent desire to have some sort of standards work related to the
general VM migration problem.

Is there such interest?

Thomas

_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From cdl@asgaard.org  Thu Jan 19 18:11:31 2012
Return-Path: <cdl@asgaard.org>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 50A3E21F84FD for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 18:11:31 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -6.599
X-Spam-Level: 
X-Spam-Status: No, score=-6.599 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id JkZWPC3qKVek for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 18:11:30 -0800 (PST)
Received: from asgaard.org (odin.asgaard.org [204.29.151.68]) by ietfa.amsl.com (Postfix) with ESMTP id 8256021F84D5 for <dc@ietf.org>; Thu, 19 Jan 2012 18:11:30 -0800 (PST)
Received: from localhost (localhost [127.0.0.1]) by asgaard.org (Postfix) with ESMTP id A4E9AAF023F; Fri, 20 Jan 2012 02:11:29 +0000 (UTC)
X-Virus-Scanned: amavisd-new at asgaard.org
Received: from asgaard.org ([127.0.0.1]) by localhost (odin.asgaard.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id NzeaGvXejOok; Fri, 20 Jan 2012 02:11:27 +0000 (UTC)
Received: from fenrir.bigswitch.com (74-93-4-129-sfba.hfc.comcastbusiness.net [74.93.4.129]) by asgaard.org (Postfix) with ESMTPSA id 9D622AF0231; Fri, 20 Jan 2012 02:11:27 +0000 (UTC)
Mime-Version: 1.0 (Apple Message framework v1251.1)
Content-Type: multipart/signed; boundary="Apple-Mail=_79712714-ACE8-40AA-8B0F-D69B386036CB"; protocol="application/pgp-signature"; micalg=pgp-sha1
From: Christopher LILJENSTOLPE <cdl@asgaard.org>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102CB2291@XMB-BGL-416.cisco.com>
Date: Thu, 19 Jan 2012 18:11:20 -0800
Message-Id: <406B8B5D-E1E5-4DF4-8DE2-D7D2A699430A@asgaard.org>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com><CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com><201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com><CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com><201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com> <1326989277.2513.4.camel@ecliptic.extremenetworks.com> <618BE8B40039924EB9AED233D4A09C5102CB2291@XMB-BGL-416.cisco.com>
To: Ashish Dalela (adalela) <adalela@cisco.com>
X-Mailer: Apple Mail (2.1251.1)
Cc: Thomas Narten <narten@us.ibm.com>, Steven Blake <sblake@extremenetworks.com>, dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 20 Jan 2012 02:11:31 -0000

--Apple-Mail=_79712714-ACE8-40AA-8B0F-D69B386036CB
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=utf-8

Greetings,

	I agree with some subset of L2-in-L2, L2-in-L3, and L3-in-L3
encaps.  I'm not sure what we mean by network vs hypervisor based
encaps.  Do we really want to have a different encap depending on where
it's originated or terminated?  Anyone see problems with this if one end
of the connection is an appliance (and connected to, say, a switch) and
one end that's a hypervisor guest?  When folks were asking for a small
subset (where small is 1<=n<=3 maybe) I didn't think that they were
saying that this was a set of two, one for hypervisors, and one for
"network elements", whatever that might mean (I would tend to view a
hypervisor switch as an element in the network).

	Chris

On 19Jan2012, at 08.23, Ashish Dalela (adalela) wrote:

>
>>> "one encapsulation, please".
>
> How do you propose we reconcile the present scenario:
>
> - Hypervisor based encapsulations
> - Network based encapsulations
> - L2 in L2 encapsulation
> - L2 in L3 encapsulation
> - L3 in L3 encapsulation
>
> Thanks, Ashish
>
> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> Steven Blake
> Sent: Thursday, January 19, 2012 9:38 PM
> To: Thomas Narten
> Cc: dc@ietf.org
> Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
>
> On Thu, 2012-01-19 at 06:19 -0800, Thomas Narten wrote:
>
>>> It may be very helpful to bring one or more of these
>>> proprietary VM migration approaches to IETF for consideration
>>> for standardization, if that is appropriate.
>>
>> Well, let's start with this as a premise. Sure, in the utopia of open
>> standards everywhere, having all this stuff standardized would be
>> great.
>>
>> But let's be realistic. Are any of the vendors/implementors of these
>> systems coming to the IETF saying they want or need a standard? (If
>> so, I must have missed this.)
>>
>> If the market heavyweights are not indicating that they will
>> participate and/or implement such a standard, would anything the IETF
>> does in this space be relevant? (Based on experience, I fear the
>> answer is a very clear "no".)
>>
>> Finally, do we see operators who use the existing systems coming to
>> the IETF saying they want open standards here? So far, the silence
>> has been deafening...
>>
>>> Sure, we'll update the draft to articulate these requirements.
>>
>> Frankly, until and unless the questions above can be answered in a
>> more positive manner, IMO there is little point in spending any
>> cycles in this area, other than getting positive answers to the
>> questions.
>>
>> Or am I missing something?
>
> Several system vendors (myself included) stood up in Taipei and said
> "one encapsulation, please".  If IETF can facilitate industry
> convergence on a small set of NVO3 encapsulations (preferably one), that
> would be a big win for Ethernet switch vendors.
>
>
> Regards,
>
> /////////////////////////////////////////////
> Steven Blake       sblake@extremenetworks.com
> Extreme Networks              +1 919-884-3211
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc

-- 
李柯睿
Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf


--Apple-Mail=_79712714-ACE8-40AA-8B0F-D69B386036CB
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
	filename=signature.asc
Content-Type: application/pgp-signature;
	name=signature.asc
Content-Description: Message signed with OpenPGP using GPGMail

-----BEGIN PGP SIGNATURE-----
Version: GnuPG/MacGPG2 v2.0.17 (Darwin)

iQEcBAEBAgAGBQJPGM1OAAoJEGmx2Mt/+Iw/k4sH/jyorwes6b4QxFzf3LMfGwao
N7DzXtAYf+w9DdAjVPSOSsElQnV8SEJHz/w8rSR70sp9Oj/YQpkbsItZZdDcMuma
gm57YEEF/FDcGDulvzPuNZ/aF0QNNTMNCG3n96SEvMpbEpixFgYUNz7+IusPsetd
FCrbJ/ItsD/MBEqLDgdPSXG6Mekw85NOkNVyP5ICizaKoxw4XPJdYv4dB0sZBVwc
VnGMbJlkKwNNFNve2sw/3t/xC0GPUlEJai3vVKeJQc+5xiT05REVOPiVogSBpSFt
9urW0yjwLW8eIu4C7H4nW3k+gLeFqPd7KmBqODRPIolATyZlRqXIGSWt3Osa9wo=
=1h//
-----END PGP SIGNATURE-----

--Apple-Mail=_79712714-ACE8-40AA-8B0F-D69B386036CB--

From melinda.shore@gmail.com  Thu Jan 19 18:15:49 2012
Return-Path: <melinda.shore@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 8B68721F85DB for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 18:15:49 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.044
X-Spam-Level: 
X-Spam-Status: No, score=-3.044 tagged_above=-999 required=5 tests=[AWL=0.555,  BAYES_00=-2.599, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id VPxzWyk7p9tN for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 18:15:49 -0800 (PST)
Received: from mail-gy0-f172.google.com (mail-gy0-f172.google.com [209.85.160.172]) by ietfa.amsl.com (Postfix) with ESMTP id 07A4121F85E3 for <dc@ietf.org>; Thu, 19 Jan 2012 18:15:48 -0800 (PST)
Received: by ghrr16 with SMTP id r16so48422ghr.31 for <dc@ietf.org>; Thu, 19 Jan 2012 18:15:48 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=message-id:date:from:user-agent:mime-version:to:subject:references :in-reply-to:content-type:content-transfer-encoding; bh=+BQvLAEgh2u7wtLtdJ/PatO8DGdvN6FFWxRWFRjcMew=; b=gwfTSrjX1xDKlu15SjyFMJ7lgG9TkKHgcFxClx6TeaQ642DfjUZVTrDBJ7pIlbyLKE duDAF7KxGiPoKQYvIsglX/Ep6qyzNEPBZNVNOsO/5PgiBzE/qXB/QALtAToGH3QWrikZ 4HzR/CQWz/piDAdjFONOo8oKV5J/8Iov+toXs=
Received: by 10.236.187.10 with SMTP id x10mr43188918yhm.4.1327025748663; Thu, 19 Jan 2012 18:15:48 -0800 (PST)
Received: from [137.229.12.236] (drake.swits.alaska.edu. [137.229.12.236]) by mx.google.com with ESMTPS id q29sm3790830anh.1.2012.01.19.18.15.47 (version=SSLv3 cipher=OTHER); Thu, 19 Jan 2012 18:15:48 -0800 (PST)
Message-ID: <4F18CE61.6030002@gmail.com>
Date: Thu, 19 Jan 2012 17:16:01 -0900
From: Melinda Shore <melinda.shore@gmail.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.17) Gecko/20110424 Thunderbird/3.1.10
MIME-Version: 1.0
To: dc@ietf.org
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com><CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com><201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com><CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com><201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com>	<1326989277.2513.4.camel@ecliptic.extremenetworks.com>	<618BE8B40039924EB9AED233D4A09C5102CB2291@XMB-BGL-416.cisco.com> <406B8B5D-E1E5-4DF4-8DE2-D7D2A699430A@asgaard.org>
In-Reply-To: <406B8B5D-E1E5-4DF4-8DE2-D7D2A699430A@asgaard.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 20 Jan 2012 02:15:49 -0000

On 01/19/2012 05:11 PM, Christopher LILJENSTOLPE wrote:
> 	I agree with some subset of L2-in-L2, L2-in-L3, and L3-in-L3 encaps.
 > I'm not sure what we mean by network vs hypervisor based encaps.  Do
 > we really want to have a different encap depending on where it's
 > originated or terminated?  Anyone see problems with this if one end
 > of the connection is an appliance (and connected to, say a switch)
 > and one end that's a hypervisor guest?  When folks were asking for a
 > small subset (where small is 1<=n<=3 maybe) I didn't think that they
 > were saying that this was a set of two, one for hypervisors, and one
 > for "network elements" whatever that might mean (I would tend to
 > view a hypervisor switch as an element in the network)

I'm similarly unclear on what these distinctions actually are.  It
seems to me that in either case (network vs. "hypervisor") we're really
talking about tunnel endpoints.  Can someone state clearly what
the difference is in practice?

Thanks,

Melinda

From david.black@emc.com  Thu Jan 19 18:48:44 2012
Return-Path: <david.black@emc.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id F41A321F85A5 for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 18:48:43 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -108.912
X-Spam-Level: 
X-Spam-Status: No, score=-108.912 tagged_above=-999 required=5 tests=[AWL=1.687, BAYES_00=-2.599, RCVD_IN_DNSWL_HI=-8, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id HUlY2kRboEWs for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 18:48:43 -0800 (PST)
Received: from mexforward.lss.emc.com (mexforward.lss.emc.com [128.222.32.20]) by ietfa.amsl.com (Postfix) with ESMTP id F02BE21F85A4 for <dc@ietf.org>; Thu, 19 Jan 2012 18:48:42 -0800 (PST)
Received: from hop04-l1d11-si01.isus.emc.com (HOP04-L1D11-SI01.isus.emc.com [10.254.111.54]) by mexforward.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q0K2me6D024491 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Thu, 19 Jan 2012 21:48:41 -0500
Received: from mailhub.lss.emc.com (mailhub.lss.emc.com [10.254.221.253]) by hop04-l1d11-si01.isus.emc.com (RSA Interceptor); Thu, 19 Jan 2012 21:48:30 -0500
Received: from mxhub16.corp.emc.com (mxhub16.corp.emc.com [128.222.70.237]) by mailhub.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q0K2mT4w014953; Thu, 19 Jan 2012 21:48:29 -0500
Received: from mx14a.corp.emc.com ([169.254.1.99]) by mxhub16.corp.emc.com ([128.222.70.237]) with mapi; Thu, 19 Jan 2012 21:48:29 -0500
From: <david.black@emc.com>
To: <adalela@cisco.com>
Date: Thu, 19 Jan 2012 21:48:28 -0500
Thread-Topic: [dc] draft-khasnabish-vmmi-problems-00.txt
Thread-Index: AczW0qaLHjfaIrIARQ+n0SUnPfoMjQAPVYSwAAM0Bd8=
Message-ID: <7C4DFCE962635144B8FAE8CA11D0BF1E05A7BB90E7@MX14A.corp.emc.com>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com><CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com><201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com><CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com><201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com><1326989277.2513.4.camel@ecliptic.extremenetworks.com> <201201191747.q0JHlS5J015128@cichlid.raleigh.ibm.com>, <618BE8B40039924EB9AED233D4A09C5102CB2304@XMB-BGL-416.cisco.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102CB2304@XMB-BGL-416.cisco.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-EMM-MHVC: 1
Cc: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 20 Jan 2012 02:48:44 -0000

> - Do we need a "control plane" to transfer OVF specification from point
> A to B - the portability problem?

Was that supposed to be a serious question?

If it was, I suggest FTP or NFS, both of which are already used to move VM
images in practice, and are already specified in RFCs ;-).  OVF is
fundamentally a VM image format.

Thanks,
--David
----------------------------------------------------
David L. Black, Distinguished Engineer
EMC Corporation, 176 South St., Hopkinton, MA  01748
+1 (508) 293-7953             FAX: +1 (508) 293-7786
david.black@emc.com        Mobile: +1 (978) 394-7754
----------------------------------------------------
________________________________________
From: dc-bounces@ietf.org [dc-bounces@ietf.org] On Behalf Of Ashish Dalela (adalela) [adalela@cisco.com]
Sent: Thursday, January 19, 2012 8:20 PM
To: Thomas Narten; Steven Blake
Cc: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt

I think it is fair to say that there is a difference between mobility
and portability. Mobility is live migration, while portability means
specifying a VM's properties, deleting it in one location, and
recreating it in another. The new location can be another hypervisor. In
many cases, you don't need mobility, just portability. E.g., in a
disaster recovery situation, you aren't going to get mobility anyway.

DMTF has specified a standard called OVF (Open Virtualization Format)
that addresses the "description" of the VM. This format is supported by
various hypervisor vendors. So, some level of VM migration
standardization has already happened (albeit portability and not
mobility).

The questions are:

- Do we need a "control plane" to transfer VM state from point A to B -
the mobility problem?
- Do we need a "control plane" to transfer OVF specification from point
A to B - the portability problem?

The problem is relevant in the inter-datacenter, public-private, or
inter-cloud spaces, where there will be more than one hypervisor
controller by definition. Are we hitting the live migration issue today?
Maybe not. Is it conceivable that we will hit this issue? I think so.

However, the question has to be asked of the providers/operators, not
the vendors.

Thanks, Ashish


-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
Thomas Narten
Sent: Thursday, January 19, 2012 11:17 PM
To: Steven Blake
Cc: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt

Steven,

> Several system vendors (myself included) stood up in Taipei and said
> "one encapsulation, please".  If IETF can facilitate industry
> convergence on a small set of NVO3 encapsulations (preferably one), that
> would be a big win for Ethernet switch vendors.

I agree completely.

But my questions were asking about the apparent lack of  interest from
operators/implementers/market players regarding Bhumip's draft and the
apparent desire to have some sort of standards work related to the
general VM migration problem.

Is there such interest?

Thomas

_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc
_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From adalela@cisco.com  Thu Jan 19 20:10:54 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 3713B21F859F for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 20:10:54 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.436
X-Spam-Level: 
X-Spam-Status: No, score=-2.436 tagged_above=-999 required=5 tests=[AWL=0.163,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id epV2C5pkXNIp for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 20:10:53 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id 4FD7921F859B for <dc@ietf.org>; Thu, 19 Jan 2012 20:10:22 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=4244; q=dns/txt; s=iport; t=1327032649; x=1328242249; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=u42yLQufwXDD4TRwPF9q/p3IfpKJIemi6yxpejCI9b8=; b=TqcJODc82F0sOg8Flxr6iNHXmCIXMCIWVUgRnnqoBkoAX3HwQdT6Ejoj W1uobhF6whgx5RFkeKucpI01pEE6TUg8XutRhA7waI0UTqoc31uCx+5/7 RKvPWkcJlMcrJvpp/IKL3ikDxAAMCWmfskIzZCwAFCn+5srvGWP5mHVvF s=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AqAEABroGE9Io8UY/2dsb2JhbABAA6x7ggOBcgEBAQMBAQEBDwEdCjQLBQcEAgEIEQQBAQsGFwEGASYfCQgBAQQKAQgIFgSHWgiaQwGePQSJNzYBBVABBAcBCwECAQEIAQEBA0kKSoFoVxYBAQECCYI+YwSIOZ8w
X-IronPort-AV: E=Sophos;i="4.71,539,1320624000";  d="scan'208";a="3800840"
Received: from vla196-nat.cisco.com (HELO bgl-core-4.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 20 Jan 2012 04:10:14 +0000
Received: from xbh-bgl-412.cisco.com (xbh-bgl-412.cisco.com [72.163.129.202]) by bgl-core-4.cisco.com (8.14.3/8.14.3) with ESMTP id q0K4AEKO020289; Fri, 20 Jan 2012 04:10:14 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-412.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Fri, 20 Jan 2012 09:40:14 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Fri, 20 Jan 2012 09:40:08 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102CB2326@XMB-BGL-416.cisco.com>
In-Reply-To: <7C4DFCE962635144B8FAE8CA11D0BF1E05A7BB90E7@MX14A.corp.emc.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] draft-khasnabish-vmmi-problems-00.txt
Thread-Index: AczW0qaLHjfaIrIARQ+n0SUnPfoMjQAPVYSwAAM0Bd8AAwrgkA==
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com><CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com><201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com><CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com><201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com><1326989277.2513.4.camel@ecliptic.extremenetworks.com><201201191747.q0JHlS5J015128@cichlid.raleigh.ibm.com>, <618BE8B40039924EB9AED233D4A09C5102CB2304@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A7BB90E7@MX14A.corp.emc.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: <david.black@emc.com>
X-OriginalArrivalTime: 20 Jan 2012 04:10:14.0392 (UTC) FILETIME=[691B2780:01CCD729]
Cc: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 20 Jan 2012 04:10:54 -0000

>> Was that supposed to be a serious question?

Yes, it is a serious question, because VM mobility goes beyond the VM.

>> If it was, I suggest FTP or NFS, both of which are already used to
>> move VM images in practice, and are already specified in RFCs ;-).
>> OVF is fundamentally a VM image format.

That's one approach. Another approach is to use SOAP/REST APIs. Yet
another is to define a cloud control plane that does more than just
move VMs. E.g., when you move a VM, you also have to move the firewall
rules, the VLAN association, the bandwidth reservation, the VRF
configuration, the GRE tunnel configuration, etc.
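The state that must travel with a VM can be sketched as a single portability record that a hypothetical control plane would carry alongside the OVF reference; every field name below is an illustrative assumption, not part of any standard schema:

```python
# Hedged sketch: a portability "control plane" message would carry more
# than the OVF file -- the VM's network state has to travel with it.
# All field names are illustrative assumptions, not a standard schema.
import json

def portability_record(vm_id, ovf_url, fw_rules, vlan, bw_mbps):
    """Bundle a VM descriptor reference with the network policy that
    must be re-created at the destination."""
    return json.dumps({
        "vm": vm_id,
        "ovf": ovf_url,             # where the image/descriptor lives
        "firewall": fw_rules,       # rules that must move with the VM
        "vlan": vlan,               # L2 association at the new site
        "bandwidth_mbps": bw_mbps,  # QoS reservation to re-create
    }, sort_keys=True)
```

The design question in the thread is whether such a record is moved by an ad-hoc file copy, a SOAP/REST API, or a purpose-built control plane.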

Thanks, Ashish


-----Original Message-----
From: david.black@emc.com [mailto:david.black@emc.com]
Sent: Friday, January 20, 2012 8:18 AM
To: Ashish Dalela (adalela)
Cc: dc@ietf.org
Subject: RE: [dc] draft-khasnabish-vmmi-problems-00.txt

> - Do we need a "control plane" to transfer OVF specification from point
> A to B - the portability problem?

Was that supposed to be a serious question?

If it was, I suggest FTP or NFS, both of which are already used to move VM
images in practice, and are already specified in RFCs ;-).  OVF is
fundamentally a VM image format.

Thanks,
--David
----------------------------------------------------
David L. Black, Distinguished Engineer
EMC Corporation, 176 South St., Hopkinton, MA  01748
+1 (508) 293-7953             FAX: +1 (508) 293-7786
david.black@emc.com        Mobile: +1 (978) 394-7754
----------------------------------------------------
________________________________________
From: dc-bounces@ietf.org [dc-bounces@ietf.org] On Behalf Of Ashish
Dalela (adalela) [adalela@cisco.com]
Sent: Thursday, January 19, 2012 8:20 PM
To: Thomas Narten; Steven Blake
Cc: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt

I think it is fair to say that there is a difference between mobility
and portability. Mobility is live migration; portability is specifying a
VM's properties, deleting it in one location, and recreating it in
another. The new location can be another hypervisor. In many cases, you
don't need mobility, just portability. E.g., in a disaster recovery
situation, you aren't going to get mobility anyway.

DMTF has specified a standard called OVF (Open Virtualization Format)
that addresses the "description" of the VM. This format is supported by
various hypervisor vendors. So, some level of VM migration
standardization has already happened (albeit portability and not
mobility).

The questions are:

- Do we need a "control plane" to transfer VM state from point A to B -
the mobility problem?
- Do we need a "control plane" to transfer OVF specification from point
A to B - the portability problem?

The problem is relevant in the inter-datacenter, public-private, or
inter-cloud spaces, where there will be more than one hypervisor
controller by definition. Are we hitting the live migration issue today?
Maybe not. Is it conceivable that we will hit this issue? I think so.

However, the question has to be asked of the providers/operators, not
the vendors.

Thanks, Ashish


-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
Thomas Narten
Sent: Thursday, January 19, 2012 11:17 PM
To: Steven Blake
Cc: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt

Steven,

> Several system vendors (myself included) stood up in Taipei and said
> "one encapsulation, please".  If IETF can facilitate industry
> convergence on a small set of NVO3 encapsulations (preferably one), that
> would be a big win for Ethernet switch vendors.

I agree completely.

But my questions were asking about the apparent lack of  interest from
operators/implementers/market players regarding Bhumip's draft and the
apparent desire to have some sort of standards work related to the
general VM migration problem.

Is there such interest?

Thomas

_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc
_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From adalela@cisco.com  Thu Jan 19 20:26:20 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id A2A2921F8525 for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 20:26:20 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.439
X-Spam-Level: 
X-Spam-Status: No, score=-2.439 tagged_above=-999 required=5 tests=[AWL=0.160,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id B0giJ70GlUEI for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 20:26:19 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id C8D7D21F853F for <dc@ietf.org>; Thu, 19 Jan 2012 20:26:18 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=3735; q=dns/txt; s=iport; t=1327033579; x=1328243179; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to; bh=oCVwqNztC7C9oRHvMA83C7p1LU0xx7pYRvR8GZAFJ1E=; b=TSE0izY1KKA2LifiBOSm80uPvhfhR0WU7SUXlybqpDWFZm2ikpls60DL KZHey6Iiasujlv87ZRwdnFvrEZNH9sJpLba8UbCt1E5o+gamm1ehKhy1O zOFY34izqNKzoKHrXGfVUiwsANnQTqpGRcoC4Wi7jHS4aIRjqFKII7ShT M=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AqAEAIfrGE9Io8UY/2dsb2JhbABDrHuCA4FyAQEBAwEBAQEPAR0KNBcEAgEIEQQBAQEKBhcBBgEmHwkIAQEEAQoICBqHWgiaQQGePgSJNzYBBVABBAcBCwECAQEIAQEBA0kKSoFoVxYBAQECgkdjBIg5nzA
X-IronPort-AV: E=Sophos;i="4.71,539,1320624000";  d="scan'208";a="3802289"
Received: from vla196-nat.cisco.com (HELO bgl-core-2.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 20 Jan 2012 04:26:17 +0000
Received: from xbh-bgl-411.cisco.com (xbh-bgl-411.cisco.com [72.163.129.201]) by bgl-core-2.cisco.com (8.14.3/8.14.3) with ESMTP id q0K4QHbs014268; Fri, 20 Jan 2012 04:26:17 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-411.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Fri, 20 Jan 2012 09:56:17 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Fri, 20 Jan 2012 09:56:15 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102CB2330@XMB-BGL-416.cisco.com>
In-Reply-To: <4F18CE61.6030002@gmail.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] draft-khasnabish-vmmi-problems-00.txt
Thread-Index: AczXGXuf66KXNblyQxmY8Rn5XCnJLwAD+9yQ
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com><CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com><201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com><CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com><201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com>	<1326989277.2513.4.camel@ecliptic.extremenetworks.com>	<618BE8B40039924EB9AED233D4A09C5102CB2291@XMB-BGL-416.cisco.com><406B8B5D-E1E5-4DF4-8DE2-D7D2A699430A@asgaard.org> <4F18CE61.6030002@gmail.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Melinda Shore" <melinda.shore@gmail.com>, <dc@ietf.org>
X-OriginalArrivalTime: 20 Jan 2012 04:26:17.0344 (UTC) FILETIME=[A711F800:01CCD72B]
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 20 Jan 2012 04:26:20 -0000

>> I'm similarly unclear on what these distinctions actually are.  It
>> seems to me that in either case (network vs. "hypervisor") we're really
>> talking about tunnel endpoints.  Can someone state clearly what
>> the difference is in practice?

Tunnel endpoint creation needs a control plane (you don't want to be
configuring tunnel endpoints manually).

What kind of control plane would you run in the hypervisor? A network
type of control plane with link-state or distance-vector behavior? Or
something that interacts with a management controller? People will
prefer the management type of interaction, because you can't have every
hypervisor participate in the network control plane; that would inject
too many devices into it and break control-plane scaling.

As an example, an L2-over-L3 scheme needs to map a VLAN to a multicast
group. How do you know which multicast group is available? If multicast
groups are being created across many hypervisors, how do those
hypervisors coordinate the publishing of group assignments between them?
Do they have to run PIM?
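One way deployments sidestep that coordination question is a deterministic mapping from segment ID to group, so every hypervisor computes the same answer without talking to its peers. A hedged sketch; the 239.1.0.0/16 base block is an arbitrary assumption for illustration:

```python
# Hedged sketch: derive the multicast group for an L2-over-L3 segment
# deterministically from the segment ID, so no inter-hypervisor
# coordination of the assignment is needed. The 239.1.0.0/16
# administratively scoped block is an arbitrary choice.
def segment_to_mcast_group(segment_id: int) -> str:
    """Map a segment/VNI to a group address in 239.1.0.0/16."""
    low16 = segment_id & 0xFFFF  # fold the ID into 16 bits
    return f"239.1.{low16 >> 8}.{low16 & 0xFF}"
```

This fixes the assignment problem but not group membership: the underlay still has to learn which endpoints joined each group, which is where PIM or an equivalent comes back in.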

And what do you do when a VM talks to a physical server (one without a
hypervisor)? Terminate the encapsulation at the network access layer? If
yes, how does the access layer know which encapsulations to terminate,
and which ones not to? That needs an additional control plane between
the hypervisor and the network that says: "I'm a hypervisor, I have
decided to do XYZ type of encapsulation within myself, but host targets
PQR are unaware of this scheme, and I need you to do the needful on my
behalf". This additional control plane then has to propagate to the
other endpoint.

Then there are issues around QoS, bandwidth, security, etc. E.g., VM-One
and VM-Two belong to different tenants but sit on the same hypervisor.
VM-One and VM-Two have different QoS and bandwidth needs, yet they share
the same tunnel. How do I distinguish between them based on the tunnel?
In fact, if tenant isolation lives in the hypervisor, then the
underlying network has no clue which tenant needs what policy.

So, in short, yes they are both tunnel endpoints. But, when you start to
worry about the control plane needed to automate these tunnels, there
are many differences between network and hypervisor approaches.

Thanks, Ashish


-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
Melinda Shore
Sent: Friday, January 20, 2012 7:46 AM
To: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt

On 01/19/2012 05:11 PM, Christopher LILJENSTOLPE wrote:
> 	I agree with some subset of L2-in-L2, L2-in-L3, and L3-in-L3 encaps.
> I'm not sure what we mean by network vs hypervisor based encaps.  Do
> we really want to have a different encap depending on where it's
> originated or terminated?  Anyone see problems with this if one end
> of the connection is an appliance (and connected to, say a switch)
> and one end that's a hypervisor guest?  When folks were asking for a
> small subset (where small is 1<=n<=3 maybe) I didn't think that they
> were saying that this was a set of two, one for hypervisors, and one
> for "network elements" what ever that might mean (I would tend to
> view a hypervisor switch as an element in the network)

I'm similarly unclear on what these distinctions actually are.  It
seems to me that in either case (network vs. "hypervisor") we're really
talking about tunnel endpoints.  Can someone state clearly what
the difference is in practice?

Thanks,

Melinda
_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From melinda.shore@gmail.com  Thu Jan 19 20:36:31 2012
Return-Path: <melinda.shore@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id BB51921F8497 for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 20:36:31 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.599
X-Spam-Level: 
X-Spam-Status: No, score=-3.599 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 3qaya4i8hVau for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 20:36:30 -0800 (PST)
Received: from mail-yx0-f172.google.com (mail-yx0-f172.google.com [209.85.213.172]) by ietfa.amsl.com (Postfix) with ESMTP id C0F5621F8495 for <dc@ietf.org>; Thu, 19 Jan 2012 20:36:30 -0800 (PST)
Received: by yenm3 with SMTP id m3so91947yen.31 for <dc@ietf.org>; Thu, 19 Jan 2012 20:36:30 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:content-type:content-transfer-encoding; bh=WQxoEDwY3RWKcfql8M7aXxPomoxcCjv8/z6L5DMnGzE=; b=VPut1IOr/5VNgrUZy7BG/u5VH5NXWZrClzm5yV/6ywHEKU1xmwze5ir10aBd79GN6a p1m8Wn7JdA0NVH7rjm8WfTSFwL3WsJKpUzu2TPJ+hH/+LJlQz/hdZdsZuEXWtcX4BVER 31Zc8J8qFWIzjTygqtmuKBzekSiyg+IWrppKw=
Received: by 10.236.131.12 with SMTP id l12mr41607692yhi.111.1327034190444; Thu, 19 Jan 2012 20:36:30 -0800 (PST)
Received: from polypro.local (66-230-87-211-rb1.fai.dsl.dynamic.acsalaska.net. [66.230.87.211]) by mx.google.com with ESMTPS id w28sm3108709yhi.21.2012.01.19.20.36.28 (version=TLSv1/SSLv3 cipher=OTHER); Thu, 19 Jan 2012 20:36:29 -0800 (PST)
Message-ID: <4F18EF4A.3060308@gmail.com>
Date: Thu, 19 Jan 2012 19:36:26 -0900
From: Melinda Shore <melinda.shore@gmail.com>
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.25) Gecko/20111213 Lightning/1.0b2 Thunderbird/3.1.17
MIME-Version: 1.0
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com><CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com><201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com><CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com><201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com>	<1326989277.2513.4.camel@ecliptic.extremenetworks.com>	<618BE8B40039924EB9AED233D4A09C5102CB2291@XMB-BGL-416.cisco.com><406B8B5D-E1E5-4DF4-8DE2-D7D2A699430A@asgaard.org> <4F18CE61.6030002@gmail.com> <618BE8B40039924EB9AED233D4A09C5102CB2330@XMB-BGL-416.cisco.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102CB2330@XMB-BGL-416.cisco.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 20 Jan 2012 04:36:31 -0000

On 1/19/12 7:26 PM, Ashish Dalela (adalela) wrote:
> Bandwidth needs, but they have the
> same tunnel. How do I distinguish between them based on the tunnel? In
> fact, if the tenant isolation is in the hypervisor, then the underlying
> network has no clue which tenant needs what policy.

Well, that's not true.  In the case of IPSec we've got SPIs, and
there are similar demultiplexing mechanisms in other technologies.
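The SPI point generalizes: any per-packet demultiplexer field (an IPsec SPI, a VXLAN VNI, a GRE key) lets a shared tunnel carry several tenants while the receiver still applies per-tenant policy. A toy sketch, with all field names as illustrative assumptions rather than any real header layout:

```python
# Hedged sketch: two tenants share one tunnel, but a demux field in each
# decapsulated packet (standing in for an SPI/VNI/GRE key) recovers the
# tenant so per-tenant policy can still be applied. Field names are
# illustrative, not any real encapsulation's header layout.
def apply_tenant_policy(packets, policy_by_id):
    """Pair each packet's payload with its tenant's policy."""
    return [(pkt["payload"], policy_by_id[pkt["demux_id"]])
            for pkt in packets]
```

So the underlay can be told which policy goes with which demux value, even if it never learns tenant identities directly.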

But frankly, if you're going to distinguish between tunnel endpoints in
the hypervisor and tunnel endpoints in other sorts of network devices, I
think you're going to be somewhat hard-pressed to make the case for
working on the former in the IETF.

Melinda

From adalela@cisco.com  Thu Jan 19 21:07:32 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id B760921F858A for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 21:07:32 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.442
X-Spam-Level: 
X-Spam-Status: No, score=-2.442 tagged_above=-999 required=5 tests=[AWL=0.157,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 3kF6wmO0XqXR for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 21:07:32 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id E358D21F8582 for <dc@ietf.org>; Thu, 19 Jan 2012 21:07:20 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=1482; q=dns/txt; s=iport; t=1327036041; x=1328245641; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=dlwy2SgeoHLJsb5PVC6BEPwZmgAOr+1URGsMTr63vKc=; b=dSOj1twL4g81LSzaXm+Imxov0HMPEpajROeItcxKRXZBDEV20D4gUttn wnoQiKHWGuUbWpybze6oP2rNzEcZZ2ifWmPNUwtKJmbe0GEiBNC4HyWmu fkcjpYnztz7Kj6qDgwCdVE7uoFRR87biZAiVfjrT66Bhqv4WmrWxkAHQl Y=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: Ap4EANL1GE9Io8UY/2dsb2JhbABDrwCBcgEBAQQSAR0KPwwEAgEIEQQBAQEKBhcBBgEgJQkIAQEECwgIGqIcAZ5Di0NjBIg6l1+HUg
X-IronPort-AV: E=Sophos;i="4.71,540,1320624000";  d="scan'208";a="3805385"
Received: from vla196-nat.cisco.com (HELO bgl-core-3.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 20 Jan 2012 05:07:19 +0000
Received: from xbh-bgl-411.cisco.com (xbh-bgl-411.cisco.com [72.163.129.201]) by bgl-core-3.cisco.com (8.14.3/8.14.3) with ESMTP id q0K57Jnk026403; Fri, 20 Jan 2012 05:07:19 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-411.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Fri, 20 Jan 2012 10:37:19 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Fri, 20 Jan 2012 10:37:17 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102CB234C@XMB-BGL-416.cisco.com>
In-Reply-To: <4F18EF4A.3060308@gmail.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] draft-khasnabish-vmmi-problems-00.txt
Thread-Index: AczXLRbl4KfPfAt9StuylXXiAk98ygAAdawA
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com><CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com><201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com><CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com><201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com>	<1326989277.2513.4.camel@ecliptic.extremenetworks.com>	<618BE8B40039924EB9AED233D4A09C5102CB2291@XMB-BGL-416.cisco.com><406B8B5D-E1E5-4DF4-8DE2-D7D2A699430A@asgaard.org> <4F18CE61.6030002@gmail.com> <618BE8B40039924EB9AED233D4A09C5102CB2330@XMB-BGL-416.cisco.com> <4F18EF4A.3060308@gmail.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Melinda Shore" <melinda.shore@gmail.com>
X-OriginalArrivalTime: 20 Jan 2012 05:07:19.0297 (UTC) FILETIME=[62822F10:01CCD731]
Cc: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 20 Jan 2012 05:07:32 -0000

I would not jump to the conclusion that hypervisor work should or
should not be done in the IETF. That's a separate question. VXLAN and
NVGRE are hypervisor-based approaches. But they don't have control
planes (yet). My point is that finding a common map-encap scheme isn't
that hard. The harder part is making the hypervisor-based and
network-based map-encap *control planes* work the same way.

If they don't work the same way, then L2-in-L2, L2-in-L3, and L3-in-L3
each have a network flavor and a hypervisor flavor.

Thanks, Ashish


-----Original Message-----
From: Melinda Shore [mailto:melinda.shore@gmail.com]
Sent: Friday, January 20, 2012 10:06 AM
To: Ashish Dalela (adalela)
Cc: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt

On 1/19/12 7:26 PM, Ashish Dalela (adalela) wrote:
> Bandwidth needs, but they have the
> same tunnel. How do I distinguish between them based on the tunnel? In
> fact, if the tenant isolation is in the hypervisor, then the underlying
> network has no clue which tenant needs what policy.

Well, that's not true.  In the case of IPSec we've got SPIs, and
there are similar demultiplexing mechanisms in other technologies.

But frankly I think that if you're going to distinguish between
tunnel endpoints in the hypervisor and tunnel endpoints in other
sorts of network devices I think you're going to be somewhat
hard-pressed to make the case for working on the former in
the IETF.

Melinda

From jmh@joelhalpern.com  Thu Jan 19 21:28:24 2012
Return-Path: <jmh@joelhalpern.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 29F4F21F8573 for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 21:28:24 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -102.265
X-Spam-Level: 
X-Spam-Status: No, score=-102.265 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, IP_NOT_FRIENDLY=0.334, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id zBiBWWKlA+dC for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 21:28:23 -0800 (PST)
Received: from morbo.mail.tigertech.net (morbo.mail.tigertech.net [67.131.251.54]) by ietfa.amsl.com (Postfix) with ESMTP id 72C2621F8572 for <dc@ietf.org>; Thu, 19 Jan 2012 21:28:23 -0800 (PST)
Received: from mailc2.tigertech.net (mailc2.tigertech.net [208.80.4.156]) by morbo.tigertech.net (Postfix) with ESMTP id 47C3A103A1E for <dc@ietf.org>; Thu, 19 Jan 2012 21:28:23 -0800 (PST)
Received: from localhost (localhost [127.0.0.1]) by mailc2.tigertech.net (Postfix) with ESMTP id 766A22C244E; Thu, 19 Jan 2012 21:28:22 -0800 (PST)
X-Virus-Scanned: Debian amavisd-new at c2.tigertech.net
Received: from [172.17.114.244] (207.47.24.2.static.nextweb.net [207.47.24.2]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mailc2.tigertech.net (Postfix) with ESMTPSA id 1978F2C244B; Thu, 19 Jan 2012 21:28:22 -0800 (PST)
Message-ID: <4F18FB72.2090900@joelhalpern.com>
Date: Fri, 20 Jan 2012 00:28:18 -0500
From: "Joel M. Halpern" <jmh@joelhalpern.com>
User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:9.0) Gecko/20111222 Thunderbird/9.0.1
MIME-Version: 1.0
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com><CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com><201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com><CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com><201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com>	<1326989277.2513.4.camel@ecliptic.extremenetworks.com>	<618BE8B40039924EB9AED233D4A09C5102CB2291@XMB-BGL-416.cisco.com><406B8B5D-E1E5-4DF4-8DE2-D7D2A699430A@asgaard.org> <4F18CE61.6030002@gmail.com> <618BE8B40039924EB9AED233D4A09C5102CB2330@XMB-BGL-416.cisco.com> <4F18EF4A.3060308@gmail.com> <618BE8B40039924EB9AED233D4A09C5102CB234C@XMB-BGL-416.cisco.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102CB234C@XMB-BGL-416.cisco.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 20 Jan 2012 05:28:24 -0000

While one can construct strawman hypotheses in which there are reasons
to have different tunnel control protocols depending upon end-point
location, one can equally construct reasonable hypotheses in which the
same protocol mechanisms work whether the end-point is at the VM, the
hypervisor, the ToR switch, or an aggregation switch.


Yours,
Joel

On 1/20/2012 12:07 AM, Ashish Dalela (adalela) wrote:
>
> I would not arrive at the conclusion that hypervisor work should or
> should not be done in IETF. That's a separate question. VXLAN and NVGRE
> are hypervisor based approaches. But, they don't have control planes
> (yet). My point is that finding a common map-encap scheme isn't that
> hard. The harder part is how to make the hypervisor and network based
> map-encap *control planes* work the same way.
>
> If they don't work the same way, then L2-in-L2, L2-in-L3, L3-in-L3 has a
> network flavor and a hypervisor flavor.
>
> Thanks, Ashish
>
>
> -----Original Message-----
> From: Melinda Shore [mailto:melinda.shore@gmail.com]
> Sent: Friday, January 20, 2012 10:06 AM
> To: Ashish Dalela (adalela)
> Cc: dc@ietf.org
> Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
>
> On 1/19/12 7:26 PM, Ashish Dalela (adalela) wrote:
>> Bandwidth needs, but they have the
>> same tunnel. How do I distinguish between them based on the tunnel? In
>> fact, if the tenant isolation is in the hypervisor, then the underlying
>> network has no clue which tenant needs what policy.
>
> Well, that's not true.  In the case of IPSec we've got SPIs, and
> there are similar demultiplexing mechanisms in other technologies.
>
> But frankly I think that if you're going to distinguish between
> tunnel endpoints in the hypervisor and tunnel endpoints in other
> sorts of network devices I think you're going to be somewhat
> hard-pressed to make the case for working on the former in
> the IETF.
>
> Melinda
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc
>

From adalela@cisco.com  Thu Jan 19 21:53:00 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 1C43721F857A for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 21:53:00 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.445
X-Spam-Level: 
X-Spam-Status: No, score=-2.445 tagged_above=-999 required=5 tests=[AWL=0.154,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id i0ThWZodki-n for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 21:52:59 -0800 (PST)
Received: from bgl-iport-1.cisco.com (bgl-iport-1.cisco.com [72.163.197.25]) by ietfa.amsl.com (Postfix) with ESMTP id C01BA21F8543 for <dc@ietf.org>; Thu, 19 Jan 2012 21:52:58 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=2969; q=dns/txt; s=iport; t=1327038779; x=1328248379; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=DVvprlpJ/uukeXOLeSoP9cjl7i1QyvWK3STOGSYZSrI=; b=QTVPZLvC/B9b8giSJ7tpIlrn2LzsU/Q/lxSW5E3Zum42yMQfAZez4gKt RMhbEtXoX5FCKVHsAs9KQHeJaRgKnHKXxaokbqyJJX36rT2N8dQ/kfNu4 q455QpX6fiHJVuiV977JtyqqTOvtKmCGWUHcdAdObaX60g39Pd5HkBWaF E=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AokTACEAGU9Io8UY/2dsb2JhbABDpyACh16BcgEBAQQBAQEPAR0KNAsMBAIBCBEEAQEBCgYXAQYBIAYfCQgBAQQLCAgah2KaNQGePwSLQ2MEiDqXX4dS
X-IronPort-AV: E=Sophos;i="4.71,540,1320624000";  d="scan'208";a="3809769"
Received: from vla196-nat.cisco.com (HELO bgl-core-4.cisco.com) ([72.163.197.24]) by bgl-iport-1.cisco.com with ESMTP; 20 Jan 2012 05:52:57 +0000
Received: from xbh-bgl-412.cisco.com (xbh-bgl-412.cisco.com [72.163.129.202]) by bgl-core-4.cisco.com (8.14.3/8.14.3) with ESMTP id q0K5qvlQ008073; Fri, 20 Jan 2012 05:52:57 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-412.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Fri, 20 Jan 2012 11:22:57 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Fri, 20 Jan 2012 11:22:55 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102CB2380@XMB-BGL-416.cisco.com>
In-Reply-To: <4F18FB72.2090900@joelhalpern.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] draft-khasnabish-vmmi-problems-00.txt
Thread-Index: AczXNFZ1NSiDPlw8TeSK9mx6Vbf2BAAAUVWg
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com><CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com><201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com><CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com><201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com>	<1326989277.2513.4.camel@ecliptic.extremenetworks.com>	<618BE8B40039924EB9AED233D4A09C5102CB2291@XMB-BGL-416.cisco.com><406B8B5D-E1E5-4DF4-8DE2-D7D2A699430A@asgaard.org> <4F18CE61.6030002@gmail.com> <618BE8B40039924EB9AED233D4A09C5102CB2330@XMB-BGL-416.cisco.com> <4F18EF4A.3060308@gmail.com> <618BE8B40039924EB9AED233D4A09C5102CB234C@XMB-BGL-416.cisco.com> <4F18FB72.2090900@joelhalpern.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Joel M. Halpern" <jmh@joelhalpern.com>
X-OriginalArrivalTime: 20 Jan 2012 05:52:57.0542 (UTC) FILETIME=[C2A13660:01CCD737]
Cc: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 20 Jan 2012 05:53:00 -0000

Joel,

Hypervisor control is in the hypervisor manager. Switch control is in
the network control plane. These are parallel silos that don't
interact.

Either the hypervisor manager defers control to the network control
plane, or the switches defer control to the manager. Or, some new third
entity emerges to control both, and both hypervisor and switch defer
control to that entity.

We can't have separate control models and expect this to work in the
same way.

Which of these (or other) models do you think presents a reasonable
approach to reconcile hypervisor control and network control?

Thanks, Ashish

-----Original Message-----
From: Joel M. Halpern [mailto:jmh@joelhalpern.com]
Sent: Friday, January 20, 2012 10:58 AM
To: Ashish Dalela (adalela)
Cc: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt

While one can construct strawman hypotheses in which there are reasons
to have different tunnel control protocols depending upon end-point
location, equally one can construct reasonable hypotheses in which the
same protocol mechanisms work whether the end-point is at the VM, the
Hypervisor, the ToR switch, or an aggregation switch.


Yours,
Joel

On 1/20/2012 12:07 AM, Ashish Dalela (adalela) wrote:
>
> I would not arrive at the conclusion that hypervisor work should or
> should not be done in IETF. That's a separate question. VXLAN and NVGRE
> are hypervisor based approaches. But, they don't have control planes
> (yet). My point is that finding a common map-encap scheme isn't that
> hard. The harder part is how to make the hypervisor and network based
> map-encap *control planes* work the same way.
>
> If they don't work the same way, then L2-in-L2, L2-in-L3, L3-in-L3 has a
> network flavor and a hypervisor flavor.
>
> Thanks, Ashish
>
>
> -----Original Message-----
> From: Melinda Shore [mailto:melinda.shore@gmail.com]
> Sent: Friday, January 20, 2012 10:06 AM
> To: Ashish Dalela (adalela)
> Cc: dc@ietf.org
> Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
>
> On 1/19/12 7:26 PM, Ashish Dalela (adalela) wrote:
>> Bandwidth needs, but they have the
>> same tunnel. How do I distinguish between them based on the tunnel? In
>> fact, if the tenant isolation is in the hypervisor, then the underlying
>> network has no clue which tenant needs what policy.
>
> Well, that's not true.  In the case of IPSec we've got SPIs, and
> there are similar demultiplexing mechanisms in other technologies.
>
> But frankly I think that if you're going to distinguish between
> tunnel endpoints in the hypervisor and tunnel endpoints in other
> sorts of network devices, you're going to be somewhat
> hard-pressed to make the case for working on the former in
> the IETF.
>
> Melinda
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc
>
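
[Editor's note: a hypothetical sketch illustrating Joel's point above that the same protocol mechanisms can work wherever the tunnel endpoint sits. The `MapEncapEndpoint` class and its method names are invented for illustration; they are not from any draft.]

```python
class MapEncapEndpoint:
    """Location-independent map-encap endpoint: the control protocol only
    needs register/lookup semantics, not knowledge of where the endpoint
    is implemented."""

    def __init__(self, location):
        self.location = location   # "vm" | "hypervisor" | "tor" | "agg"
        self.mappings = {}         # inner address -> outer locator

    def register(self, inner_addr, outer_locator):
        self.mappings[inner_addr] = outer_locator

    def lookup(self, inner_addr):
        return self.mappings.get(inner_addr)

# The identical control exchange works regardless of endpoint location:
for loc in ("vm", "hypervisor", "tor", "agg"):
    ep = MapEncapEndpoint(loc)
    ep.register("192.0.2.10", "203.0.113.7")
    assert ep.lookup("192.0.2.10") == "203.0.113.7"
```

The design choice the sketch makes explicit: if the control-plane interface is reduced to register/lookup, the endpoint's location becomes an implementation detail rather than a protocol fork.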

From melinda.shore@gmail.com  Thu Jan 19 22:01:54 2012
Return-Path: <melinda.shore@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 8416521F84FD for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 22:01:54 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.599
X-Spam-Level: 
X-Spam-Status: No, score=-3.599 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id YdiOmGuT2ge0 for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 22:01:54 -0800 (PST)
Received: from mail-gx0-f172.google.com (mail-gx0-f172.google.com [209.85.161.172]) by ietfa.amsl.com (Postfix) with ESMTP id C744E21F84F6 for <dc@ietf.org>; Thu, 19 Jan 2012 22:01:53 -0800 (PST)
Received: by ggnr5 with SMTP id r5so112067ggn.31 for <dc@ietf.org>; Thu, 19 Jan 2012 22:01:53 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=message-id:date:from:user-agent:mime-version:to:subject:references :in-reply-to:content-type:content-transfer-encoding; bh=jvJ4Yw+OmP/l/tsd4WAwtT8oN8otUGG5E/QuZOmL7pk=; b=HdqljFbbwv5cE8vn8i1cag/+fvNI6OWcomYORu2+8YNuHF9eK46kTf3ZMlIodTEos7 eAPTIL7jyMMYJL2ZerIKIbGEWW+FnGXApcRbX7ecug1wQKkYpx4v4juUzMae4Snp555+ S4rp5NK3RgBVOHrsvIXWpa8mm/iTpFJrfxwbU=
Received: by 10.236.189.105 with SMTP id b69mr10957118yhn.90.1327039313427; Thu, 19 Jan 2012 22:01:53 -0800 (PST)
Received: from polypro.local (66-230-87-211-rb1.fai.dsl.dynamic.acsalaska.net. [66.230.87.211]) by mx.google.com with ESMTPS id j11sm5378644anl.8.2012.01.19.22.01.52 (version=TLSv1/SSLv3 cipher=OTHER); Thu, 19 Jan 2012 22:01:52 -0800 (PST)
Message-ID: <4F19034E.1070802@gmail.com>
Date: Thu, 19 Jan 2012 21:01:50 -0900
From: Melinda Shore <melinda.shore@gmail.com>
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.25) Gecko/20111213 Lightning/1.0b2 Thunderbird/3.1.17
MIME-Version: 1.0
To: dc@ietf.org
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com><CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com><201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com><CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com><201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com>	<1326989277.2513.4.camel@ecliptic.extremenetworks.com>	<618BE8B40039924EB9AED233D4A09C5102CB2291@XMB-BGL-416.cisco.com><406B8B5D-E1E5-4DF4-8DE2-D7D2A699430A@asgaard.org>	<4F18CE61.6030002@gmail.com>	<618BE8B40039924EB9AED233D4A09C5102CB2330@XMB-BGL-416.cisco.com>	<4F18EF4A.3060308@gmail.com>	<618BE8B40039924EB9AED233D4A09C5102CB234C@XMB-BGL-416.cisco.com>	<4F18FB72.2090900@joelhalpern.com> <618BE8B40039924EB9AED233D4A09C5102CB2380@XMB-BGL-416.cisco.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102CB2380@XMB-BGL-416.cisco.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 20 Jan 2012 06:01:54 -0000

On 1/19/12 8:52 PM, Ashish Dalela (adalela) wrote:
> Hypervisor control is in the hypervisor manager. Switch control is in
> the network control plane. These are parallel silos that don't
> interact.

It seems to me that you're implying that we can't put protocols on
the hypervisor - is that correct?

I'm still having a hard time understanding your argument.  Can you
talk about specific information elements that you'd need for a
hypervisor but not a VPN concentrator (or vice versa), or protocol
elements ditto?  I'd find it helpful if you could illustrate what
the issue is.

Melinda

From jmh@joelhalpern.com  Thu Jan 19 22:05:04 2012
Return-Path: <jmh@joelhalpern.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 41C1E21F85F6 for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 22:05:04 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -102.265
X-Spam-Level: 
X-Spam-Status: No, score=-102.265 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, IP_NOT_FRIENDLY=0.334, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id RV2scqX4VhEB for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 22:05:03 -0800 (PST)
Received: from morbo.mail.tigertech.net (morbo.mail.tigertech.net [67.131.251.54]) by ietfa.amsl.com (Postfix) with ESMTP id 94E5321F85CE for <dc@ietf.org>; Thu, 19 Jan 2012 22:05:03 -0800 (PST)
Received: from mailc2.tigertech.net (mailc2.tigertech.net [208.80.4.156]) by morbo.tigertech.net (Postfix) with ESMTP id 37723A5E17 for <dc@ietf.org>; Thu, 19 Jan 2012 22:05:03 -0800 (PST)
Received: from localhost (localhost [127.0.0.1]) by mailc2.tigertech.net (Postfix) with ESMTP id 00E4D2C25E7; Thu, 19 Jan 2012 22:05:03 -0800 (PST)
X-Virus-Scanned: Debian amavisd-new at c2.tigertech.net
Received: from [172.17.114.244] (207.47.24.2.static.nextweb.net [207.47.24.2]) (using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) by mailc2.tigertech.net (Postfix) with ESMTPSA id D1A182C25E6; Thu, 19 Jan 2012 22:05:02 -0800 (PST)
Message-ID: <4F19040B.7000901@joelhalpern.com>
Date: Fri, 20 Jan 2012 01:04:59 -0500
From: "Joel M. Halpern" <jmh@joelhalpern.com>
User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:9.0) Gecko/20111222 Thunderbird/9.0.1
MIME-Version: 1.0
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com><CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com><201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com><CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com><201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com>	<1326989277.2513.4.camel@ecliptic.extremenetworks.com>	<618BE8B40039924EB9AED233D4A09C5102CB2291@XMB-BGL-416.cisco.com><406B8B5D-E1E5-4DF4-8DE2-D7D2A699430A@asgaard.org> <4F18CE61.6030002@gmail.com> <618BE8B40039924EB9AED233D4A09C5102CB2330@XMB-BGL-416.cisco.com> <4F18EF4A.3060308@gmail.com> <618BE8B40039924EB9AED233D4A09C5102CB234C@XMB-BGL-416.cisco.com> <4F18FB72.2090900@joelhalpern.com> <618BE8B40039924EB9AED233D4A09C5102CB2380@XMB-BGL-416.cisco.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102CB2380@XMB-BGL-416.cisco.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 20 Jan 2012 06:05:04 -0000

Several of the proposals use mechanisms that are equally applicable.  For 
a general description of one class of such approaches, look at the dcop 
draft Warren Kumari and I wrote.  I am sure that there are other classes 
as well.

Yours,
Joel

On 1/20/2012 12:52 AM, Ashish Dalela (adalela) wrote:
> Joel,
>
> Hypervisor control is in the hypervisor manager. Switch control is in
> the network control plane. These are parallel silos that don't
> interact.
>
> Either the hypervisor manager defers control to the network control
> plane, or the switches defer control to the manager. Or, some new third
> entity emerges to control both, and both hypervisor and switch defer
> control to that entity.
>
> We can't have separate control models and expect this to work in the
> same way.
>
> Which of these (or other) models do you think presents a reasonable
> approach to reconcile hypervisor control and network control?
>
> Thanks, Ashish
>
> -----Original Message-----
> From: Joel M. Halpern [mailto:jmh@joelhalpern.com]
> Sent: Friday, January 20, 2012 10:58 AM
> To: Ashish Dalela (adalela)
> Cc: dc@ietf.org
> Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
>
> While one can construct strawman hypotheses in which there are reasons
> to have different tunnel control protocols depending upon end-point
> location, equally one can construct reasonable hypotheses in which the
> same protocol mechanisms work whether the end-point is at the VM, the
> Hypervisor, the ToR switch, or an aggregation switch.
>
>
> Yours,
> Joel
>
> On 1/20/2012 12:07 AM, Ashish Dalela (adalela) wrote:
>>
>> I would not arrive at the conclusion that hypervisor work should or
>> should not be done in IETF. That's a separate question. VXLAN and NVGRE
>> are hypervisor based approaches. But, they don't have control planes
>> (yet). My point is that finding a common map-encap scheme isn't that
>> hard. The harder part is how to make the hypervisor and network based
>> map-encap *control planes* work the same way.
>>
>> If they don't work the same way, then L2-in-L2, L2-in-L3, L3-in-L3 has a
>> network flavor and a hypervisor flavor.
>>
>> Thanks, Ashish
>>
>>
>> -----Original Message-----
>> From: Melinda Shore [mailto:melinda.shore@gmail.com]
>> Sent: Friday, January 20, 2012 10:06 AM
>> To: Ashish Dalela (adalela)
>> Cc: dc@ietf.org
>> Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
>>
>> On 1/19/12 7:26 PM, Ashish Dalela (adalela) wrote:
>>> Bandwidth needs, but they have the
>>> same tunnel. How do I distinguish between them based on the tunnel? In
>>> fact, if the tenant isolation is in the hypervisor, then the underlying
>>> network has no clue which tenant needs what policy.
>>
>> Well, that's not true.  In the case of IPSec we've got SPIs, and
>> there are similar demultiplexing mechanisms in other technologies.
>>
>> But frankly I think that if you're going to distinguish between
>> tunnel endpoints in the hypervisor and tunnel endpoints in other
>> sorts of network devices, you're going to be somewhat
>> hard-pressed to make the case for working on the former in
>> the IETF.
>>
>> Melinda
>> _______________________________________________
>> dc mailing list
>> dc@ietf.org
>> https://www.ietf.org/mailman/listinfo/dc
>>
>

From vishwas.ietf@gmail.com  Thu Jan 19 22:20:17 2012
Return-Path: <vishwas.ietf@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id A615621F8607 for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 22:20:17 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: 0.339
X-Spam-Level: 
X-Spam-Status: No, score=0.339 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, HTML_MESSAGE=0.001, HTML_MIME_NO_HTML_TAG=0.097, MIME_HTML_ONLY=1.457, MIME_HTML_ONLY_MULTI=0.001, MPART_ALT_DIFF=0.739, RCVD_IN_DNSWL_LOW=-1, RCVD_IN_NJABL_PROXY=1.643]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id qAwhpjTE10gu for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 22:20:16 -0800 (PST)
Received: from mail-tul01m020-f172.google.com (mail-tul01m020-f172.google.com [209.85.214.172]) by ietfa.amsl.com (Postfix) with ESMTP id 9B67021F85F2 for <dc@ietf.org>; Thu, 19 Jan 2012 22:20:16 -0800 (PST)
Received: by obbwc12 with SMTP id wc12so330285obb.31 for <dc@ietf.org>; Thu, 19 Jan 2012 22:20:16 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=message-id:date:from:to:cc:subject:in-reply-to:x-mailer :mime-version:content-type; bh=UzHQn7m+in1GgdycXwa6pxNPUucawGIn8ghZOQhuqzQ=; b=Ehce9rn8tlz9Y3Ms/XylWLdQ3TFXDOD4Y7HU2pCLz87UjG0aRgdhAh6AxEwdUnHqV+ Mhcll5/uba9F8qaTq+wQcjxL7N5TAOJiA0fkmAAL91Ip78ZIm7VjKBNSrghuWbzzGwPm MVOr8usCFmpolkDuDqgs0lCPfOnLx5K1/l35s=
Received: by 10.50.89.197 with SMTP id bq5mr31326494igb.24.1327040414755; Thu, 19 Jan 2012 22:20:14 -0800 (PST)
Received: from www.palm.com (c-67-161-8-98.hsd1.ca.comcast.net. [67.161.8.98]) by mx.google.com with ESMTPS id f8sm6372087ibl.6.2012.01.19.22.20.12 (version=SSLv3 cipher=OTHER); Thu, 19 Jan 2012 22:20:13 -0800 (PST)
Message-ID: <4f19079d.8853e70a.5b6d.ffffc0fd@mx.google.com>
Date: Thu, 19 Jan 2012 22:20:13 -0800
From: <vishwas.ietf@gmail.com>
To: "Joel M. Halpern" <jmh@joelhalpern.com>, "Ashish Dalela (adalela)" <adalela@cisco.com>
In-Reply-To: <4F19040B.7000901@joelhalpern.com>
X-Mailer: Palm webOS
MIME-Version: 1.0
Content-Type: multipart/alternative; boundary="Alternative_=_Boundary_=_1327040412"
Cc: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 20 Jan 2012 06:20:17 -0000

--Alternative_=_Boundary_=_1327040412
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

Hi Ashish,

If you look at technologies like EVB you will see the interaction
between the hypervisor and the network control plane.

_Vishwas

-- Sent from my HP TouchPad

On Jan 19, 2012 10:05 PM, Joel M. Halpern <jmh@joelhalpern.com> wrote:
Several of the proposals use mechanisms that are equally applicable.  For
a general description of one class of such approaches, look at the dcop
draft Warren Kumari and I wrote.  I am sure that there are other classes
as well.

Yours,
Joel

On 1/20/2012 12:52 AM, Ashish Dalela (adalela) wrote:
> Joel,
>
> Hypervisor control is in the hypervisor manager. Switch control is in
> the network control plane. These are parallel silos that don't
> interact.
>
> Either the hypervisor manager defers control to the network control
> plane, or the switches defer control to the manager. Or, some new third
> entity emerges to control both, and both hypervisor and switch defer
> control to that entity.
>
> We can't have separate control models and expect this to work in the
> same way.
>
> Which of these (or other) models do you think presents a reasonable
> approach to reconcile hypervisor control and network control?
>
> Thanks, Ashish
>
> -----Original Message-----
> From: Joel M. Halpern [mailto:jmh@joelhalpern.com]
> Sent: Friday, January 20, 2012 10:58 AM
> To: Ashish Dalela (adalela)
> Cc: dc@ietf.org
> Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
>
> While one can construct strawman hypotheses in which there are reasons
> to have different tunnel control protocols depending upon end-point
> location, equally one can construct reasonable hypotheses in which the
> same protocol mechanisms work whether the end-point is at the VM, the
> Hypervisor, the ToR switch, or an aggregation switch.
>
>
> Yours,
> Joel
>
> On 1/20/2012 12:07 AM, Ashish Dalela (adalela) wrote:
>>
>> I would not arrive at the conclusion that hypervisor work should or
>> should not be done in IETF. That's a separate question. VXLAN and NVGRE
>> are hypervisor based approaches. But, they don't have control planes
>> (yet). My point is that finding a common map-encap scheme isn't that
>> hard. The harder part is how to make the hypervisor and network based
>> map-encap *control planes* work the same way.
>>
>> If they don't work the same way, then L2-in-L2, L2-in-L3, L3-in-L3 has a
>> network flavor and a hypervisor flavor.
>>
>> Thanks, Ashish
>>
>>
>> -----Original Message-----
>> From: Melinda Shore [mailto:melinda.shore@gmail.com]
>> Sent: Friday, January 20, 2012 10:06 AM
>> To: Ashish Dalela (adalela)
>> Cc: dc@ietf.org
>> Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
>>
>> On 1/19/12 7:26 PM, Ashish Dalela (adalela) wrote:
>>> Bandwidth needs, but they have the
>>> same tunnel. How do I distinguish between them based on the tunnel? In
>>> fact, if the tenant isolation is in the hypervisor, then the underlying
>>> network has no clue which tenant needs what policy.
>>
>> Well, that's not true.  In the case of IPSec we've got SPIs, and
>> there are similar demultiplexing mechanisms in other technologies.
>>
>> But frankly I think that if you're going to distinguish between
>> tunnel endpoints in the hypervisor and tunnel endpoints in other
>> sorts of network devices, you're going to be somewhat
>> hard-pressed to make the case for working on the former in
>> the IETF.
>>
>> Melinda
>> _______________________________________________
>> dc mailing list
>> dc@ietf.org
>> https://www.ietf.org/mailman/listinfo/dc
>>
>
_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc
--Alternative_=_Boundary_=_1327040412--

From adalela@cisco.com  Thu Jan 19 22:45:18 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 5338F21F85D4 for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 22:45:18 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.448
X-Spam-Level: 
X-Spam-Status: No, score=-2.448 tagged_above=-999 required=5 tests=[AWL=0.151,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id bGonS+YyX62Z for <dc@ietfa.amsl.com>; Thu, 19 Jan 2012 22:45:17 -0800 (PST)
Received: from bgl-iport-2.cisco.com (bgl-iport-2.cisco.com [72.163.197.26]) by ietfa.amsl.com (Postfix) with ESMTP id 5966521F8591 for <dc@ietf.org>; Thu, 19 Jan 2012 22:45:16 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=4137; q=dns/txt; s=iport; t=1327041916; x=1328251516; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to:cc; bh=JDm4hl4QnWg2qH780vUFIdPdzmOIs4m+zdX7u2VtlSA=; b=h+U8FNYZAKKyBSDDIlrxxgqTkoG5hKpCjegSg8Hqd/Ue1mKKx9DeY2oN cHguz0qKdjqSU3ILNPZc+brHspxHZXPCEkGNS7grjRjx37i4lsYFuV0BB jSnWbuWbKuP29HcolMRcUNz4AWpp3Fv0hyFM1/kWHMXEg756l4UNKCp5V I=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AooTAMoMGU9Io8UY/2dsb2JhbABDpyACh16BcgEBAQMBAQEBDwEdCjQLDAQCAQgRBAEBAQoGFwEGASAGHwkIAQEECwgIGodaCJosAZ45BItDYwSIOpdfh1I
X-IronPort-AV: E=Sophos;i="4.71,540,1320624000";  d="scan'208";a="3809849"
Received: from vla196-nat.cisco.com (HELO bgl-core-2.cisco.com) ([72.163.197.24]) by bgl-iport-2.cisco.com with ESMTP; 20 Jan 2012 06:45:14 +0000
Received: from xbh-bgl-411.cisco.com (xbh-bgl-411.cisco.com [72.163.129.201]) by bgl-core-2.cisco.com (8.14.3/8.14.3) with ESMTP id q0K6jElZ009187; Fri, 20 Jan 2012 06:45:14 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-411.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Fri, 20 Jan 2012 12:15:14 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Fri, 20 Jan 2012 12:15:12 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102CB23B9@XMB-BGL-416.cisco.com>
In-Reply-To: <4F19040B.7000901@joelhalpern.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] draft-khasnabish-vmmi-problems-00.txt
Thread-Index: AczXOXc6Ez9veCFwR5SBP5nLA5CKewABNoew
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com><CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com><201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com><CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com><201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com>	<1326989277.2513.4.camel@ecliptic.extremenetworks.com>	<618BE8B40039924EB9AED233D4A09C5102CB2291@XMB-BGL-416.cisco.com><406B8B5D-E1E5-4DF4-8DE2-D7D2A699430A@asgaard.org> <4F18CE61.6030002@gmail.com> <618BE8B40039924EB9AED233D4A09C5102CB2330@XMB-BGL-416.cisco.com> <4F18EF4A.3060308@gmail.com> <618BE8B40039924EB9AED233D4A09C5102CB234C@XMB-BGL-416.cisco.com> <4F18FB72.2090900@joelhalpern.com> <618BE8B40039924EB9AED233D4A09C5102CB2380@XMB-BGL-416.cisco.com> <4F19040B.7000901@joelhalpern.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Joel M. Halpern" <jmh@joelhalpern.com>
X-OriginalArrivalTime: 20 Jan 2012 06:45:14.0310 (UTC) FILETIME=[104A0660:01CCD73F]
Cc: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 20 Jan 2012 06:45:18 -0000

>> Several of the proposals use mechanisms that are equally applicable.

The question is - are hypervisor and network based encap-decap control
planes identical in all respects? The question is not if some mechanisms
can be common. The question is - can all the mechanisms be common?

As a point of note, the draft you mention doesn't address multicast and
broadcast. It defines a database to lookup unicast addresses through a
new protocol. That mechanism won't work for multicast and broadcast.

Thanks, Ashish
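
[Editor's note: a hypothetical sketch of the objection above. A mapping database that returns one locator per unicast inner address cannot, unchanged, answer for a multicast group, which maps to a set of tunnel destinations (a replication list). All names here are invented for illustration and are not from the dcop draft.]

```python
# Unicast mapping: one inner address resolves to exactly one outer locator.
unicast_map = {"192.0.2.10": "203.0.113.7"}

# A group address instead needs a list of tunnel destinations, e.g. for
# head-end replication at the encapsulating endpoint.
group_map = {"239.1.1.1": ["203.0.113.7", "203.0.113.8", "203.0.113.9"]}

def resolve(addr):
    """Return the list of outer destinations for an inner address."""
    if addr in unicast_map:
        return [unicast_map[addr]]          # degenerate one-element list
    return group_map.get(addr, [])          # replication list, or nothing

resolved_unicast = resolve("192.0.2.10")
resolved_group = resolve("239.1.1.1")
```

The sketch shows why the lookup interface, not just the database contents, differs: the unicast scheme's "address -> locator" contract has to become "address -> set of locators" (or a pointer to an underlay multicast group) before broadcast and multicast fit.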


-----Original Message-----
From: Joel M. Halpern [mailto:jmh@joelhalpern.com]
Sent: Friday, January 20, 2012 11:35 AM
To: Ashish Dalela (adalela)
Cc: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt

Several of the proposals use mechanisms that are equally applicable.  For
a general description of one class of such approaches, look at the dcop
draft Warren Kumari and I wrote.  I am sure that there are other classes
as well.

Yours,
Joel

On 1/20/2012 12:52 AM, Ashish Dalela (adalela) wrote:
> Joel,
>
> Hypervisor control is in the hypervisor manager. Switch control is in
> the network control plane. These are parallel silos that don't
> interact.
>
> Either the hypervisor manager defers control to the network control
> plane, or the switches defer control to the manager. Or, some new third
> entity emerges to control both, and both hypervisor and switch defer
> control to that entity.
>
> We can't have separate control models and expect this to work in the
> same way.
>
> Which of these (or other) models do you think presents a reasonable
> approach to reconcile hypervisor control and network control?
>
> Thanks, Ashish
>
> -----Original Message-----
> From: Joel M. Halpern [mailto:jmh@joelhalpern.com]
> Sent: Friday, January 20, 2012 10:58 AM
> To: Ashish Dalela (adalela)
> Cc: dc@ietf.org
> Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
>
> While one can construct strawman hypotheses in which there are reasons
> to have different tunnel control protocols depending upon end-point
> location, equally one can construct reasonable hypotheses in which the
> same protocol mechanisms work whether the end-point is at the VM, the
> Hypervisor, the ToR switch, or an aggregation switch.
>
>
> Yours,
> Joel
>
> On 1/20/2012 12:07 AM, Ashish Dalela (adalela) wrote:
>>
>> I would not arrive at the conclusion that hypervisor work should or
>> should not be done in IETF. That's a separate question. VXLAN and NVGRE
>> are hypervisor based approaches. But, they don't have control planes
>> (yet). My point is that finding a common map-encap scheme isn't that
>> hard. The harder part is how to make the hypervisor and network based
>> map-encap *control planes* work the same way.
>>
>> If they don't work the same way, then L2-in-L2, L2-in-L3, L3-in-L3 has
>> a network flavor and a hypervisor flavor.
>>
>> Thanks, Ashish
>>
>>
>> -----Original Message-----
>> From: Melinda Shore [mailto:melinda.shore@gmail.com]
>> Sent: Friday, January 20, 2012 10:06 AM
>> To: Ashish Dalela (adalela)
>> Cc: dc@ietf.org
>> Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
>>
>> On 1/19/12 7:26 PM, Ashish Dalela (adalela) wrote:
>>> Bandwidth needs, but they have the
>>> same tunnel. How do I distinguish between them based on the tunnel?
> In
>>> fact, if the tenant isolation is in the hypervisor, then the
>> underlying
>>> network has no clue which tenant needs what policy.
>>
>> Well, that's not true.  In the case of IPSec we've got SPIs, and
>> there are similar demultiplexing mechanisms in other technologies.
>>
>> But frankly I think that if you're going to distinguish between
>> tunnel endpoints in the hypervisor and tunnel endpoints in other
>> sorts of network devices I think you're going to be somewhat
>> hard-pressed to make the case for working on the former in
>> the IETF.
>>
>> Melinda
>> _______________________________________________
>> dc mailing list
>> dc@ietf.org
>> https://www.ietf.org/mailman/listinfo/dc
>>
>
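
[Editor's sketch] Melinda's point above (that a shared tunnel can still carry
per-tenant policy when packets carry a demultiplexing identifier such as an
IPsec SPI or a VXLAN VNI) can be sketched roughly as follows. This is
illustrative only: the TenantDemux class, the key values, and the policy
fields are invented for illustration and appear in none of the drafts
discussed in this thread.

```python
# Hypothetical sketch of demultiplexing tenants that share one tunnel
# endpoint by a per-flow identifier (e.g. an IPsec SPI or a VXLAN VNI).
# All names and numbers are illustrative, not taken from any draft.

class TenantDemux:
    def __init__(self):
        # demux key (SPI/VNI) -> per-tenant policy record
        self._policies = {}

    def register(self, key, tenant, bandwidth_mbps):
        """Bind a demux key to a tenant and its bandwidth policy."""
        self._policies[key] = {"tenant": tenant,
                               "bandwidth_mbps": bandwidth_mbps}

    def classify(self, key):
        """Look up the policy for a packet's demux key.

        A real datapath would consult this per packet; unknown keys
        fall back to a default 'unknown' policy here.
        """
        return self._policies.get(key, {"tenant": "unknown",
                                        "bandwidth_mbps": 0})

demux = TenantDemux()
demux.register(0x1001, "tenant-a", 100)   # e.g. SPI 0x1001
demux.register(0x2002, "tenant-b", 500)   # e.g. VNI 0x2002

print(demux.classify(0x1001)["tenant"])   # prints: tenant-a
```

The point of the sketch: the underlay need not understand tenants at all;
whichever element holds this table (hypervisor, ToR, or gateway) can apply
per-tenant policy from the single demux key carried in the encapsulation.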

From lizhong.jin@zte.com.cn  Fri Jan 20 00:03:12 2012
Return-Path: <lizhong.jin@zte.com.cn>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 5831021F857D for <dc@ietfa.amsl.com>; Fri, 20 Jan 2012 00:03:12 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -101.315
X-Spam-Level: 
X-Spam-Status: No, score=-101.315 tagged_above=-999 required=5 tests=[AWL=0.523, BAYES_00=-2.599, HTML_MESSAGE=0.001, RCVD_DOUBLE_IP_LOOSE=0.76, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id pAMA82dXrTmr for <dc@ietfa.amsl.com>; Fri, 20 Jan 2012 00:03:07 -0800 (PST)
Received: from mx5.zte.com.cn (mx6.zte.com.cn [95.130.199.165]) by ietfa.amsl.com (Postfix) with ESMTP id C935221F8532 for <dc@ietf.org>; Fri, 20 Jan 2012 00:03:06 -0800 (PST)
Received: from [10.30.17.100] by mx5.zte.com.cn with surfront esmtp id 56690122734555; Fri, 20 Jan 2012 15:39:48 +0800 (CST)
Received: from [10.30.3.20] by [192.168.168.16] with StormMail ESMTP id 67184.4038907279; Fri, 20 Jan 2012 16:02:38 +0800 (CST)
Received: from notes_smtp.zte.com.cn ([10.30.1.239]) by mse01.zte.com.cn with ESMTP id q0K82oBf048171; Fri, 20 Jan 2012 16:02:50 +0800 (GMT-8) (envelope-from lizhong.jin@zte.com.cn)
To: vishwas.ietf@gmail.com
MIME-Version: 1.0
X-Mailer: Lotus Notes Release 6.5.4 March 27, 2005
Message-ID: <OF4387C31B.7EB661BF-ON4825798B.002973A7-4825798B.002C35F9@zte.com.cn>
From: Lizhong Jin<lizhong.jin@zte.com.cn>
Date: Fri, 20 Jan 2012 16:02:31 +0800
X-MIMETrack: Serialize by Router on notes_smtp/zte_ltd(Release 8.5.1FP4|July 25, 2010) at 2012-01-20 16:02:52, Serialize complete at 2012-01-20 16:02:52
Content-Type: multipart/alternative; boundary="=_alternative 002C35F64825798B_="
X-MAIL: mse01.zte.com.cn q0K82oBf048171
Cc: jmh@joelhalpern.com, adalela@cisco.com, dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 20 Jan 2012 08:03:12 -0000

This is a multipart message in MIME format.
--=_alternative 002C35F64825798B_=
Content-Type: text/plain; charset="US-ASCII"

Hi Vishwas and all,
EVB-like technologies currently focus only on the interaction between
the hypervisor and the ToR. The network control plane may not be located
only on the ToR. What's more, the interaction may happen not between the
hypervisor and its directly connected ToR, but between the hypervisor
and a ToR at another site.

I agree that hypervisor-based and network-based encapsulations are quite
different. But we could consider a design in which the ToR has the same
control plane as the hypervisor. Then communication between a VM and a
physical server would use the same control plane.

Regards
Lizhong



--------------------------------------------------------------------------------

From: <vishwas.ietf at gmail.com> 
To: "Joel M. Halpern" <jmh at joelhalpern.com>, "Ashish Dalela (adalela)" 
<adalela at cisco.com> 
Cc: dc at ietf.org 
Date: Thu, 19 Jan 2012 22:20:13 -0800 
In-reply-to: <4F19040B.7000901 at joelhalpern.com> 
List-id: IETF Data Center Mailing List <dc.ietf.org> 

--------------------------------------------------------------------------------
Hi Ashish,

If you look at technologies like EVB you will see the interaction between 
the hypervisor and the network control plane.

_Vishwas




-- Sent from my HP TouchPad

--------------------------------------------------------------------------------
On Jan 19, 2012 10:05 PM, Joel M. Halpern <jmh at joelhalpern.com> wrote: 
Several of the proposals use mechanisms that are equally applicable. For 
a general description of one class of such approaches, look at the dcop 
draft Warren Kumari and I wrote. I am sure that there are other classes 
as well. 

Yours, 
Joel 

On 1/20/2012 12:52 AM, Ashish Dalela (adalela) wrote: 
> Joel, 
> 
> Hypervisor control is in the hypervisor manager. Switch control is in 
> the network control plane. These are parallel silos, that don't 
> interact. 
> 
> Either the hypervisor manager defers control to the network control 
> plane, or the switches defer control to the manager. Or, some new third 
> entity emerges to control both, and both hypervisor and switch defer 
> control to that entity. 
> 
> We can't have separate control models and expect this to work in the 
> same way. 
> 
> Which of these (or other) models do you think presents a reasonable 
> approach to reconcile hypervisor control and network control? 
> 
> Thanks, Ashish 
> 
> -----Original Message----- 
> From: Joel M. Halpern [mailto:jmh at joelhalpern.com] 
> Sent: Friday, January 20, 2012 10:58 AM 
> To: Ashish Dalela (adalela) 
> Cc: dc at ietf.org 
> Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt 
> 
> While one can construct strawman hypotheses in which there are reasons 
> to have different tunnel control protocols depending upon end-point 
> location, equally one can construct reasonable hypotheses in which the 
> same protocol mechanisms work whether the end-point is at the VM, the 
> Hypervisor, the ToR switch, or an aggregation switch. 
> 
> 
> Yours, 
> Joel 
> 
> On 1/20/2012 12:07 AM, Ashish Dalela (adalela) wrote: 
>> 
>> I would not arrive at the conclusion that hypervisor work should or 
>> should not be done in IETF. That's a separate question. VXLAN and NVGRE 
>> are hypervisor based approaches. But, they don't have control planes 
>> (yet). My point is that finding a common map-encap scheme isn't that 
>> hard. The harder part is how to make the hypervisor and network based 
>> map-encap *control planes* work the same way. 
>> 
>> If they don't work the same way, then L2-in-L2, L2-in-L3, L3-in-L3 has 
>> a network flavor and a hypervisor flavor. 
>> 
>> Thanks, Ashish 
>> 
>> 
>> -----Original Message----- 
>> From: Melinda Shore [mailto:melinda.shore at gmail.com] 
>> Sent: Friday, January 20, 2012 10:06 AM 
>> To: Ashish Dalela (adalela) 
>> Cc: dc at ietf.org 
>> Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt 
>> 
>> On 1/19/12 7:26 PM, Ashish Dalela (adalela) wrote: 
>>> Bandwidth needs, but they have the 
>>> same tunnel. How do I distinguish between them based on the tunnel? In 
>>> fact, if the tenant isolation is in the hypervisor, then the underlying 
>>> network has no clue which tenant needs what policy. 
>> 
>> Well, that's not true. In the case of IPSec we've got SPIs, and 
>> there are similar demultiplexing mechanisms in other technologies. 
>> 
>> But frankly I think that if you're going to distinguish between 
>> tunnel endpoints in the hypervisor and tunnel endpoints in other 
>> sorts of network devices I think you're going to be somewhat 
>> hard-pressed to make the case for working on the former in 
>> the IETF. 
>> 
>> Melinda 
>> _______________________________________________ 
>> dc mailing list 
>> dc at ietf.org 
>> https://www.ietf.org/mailman/listinfo/dc 
>> 
> 
_______________________________________________ 
dc mailing list 
dc at ietf.org 
https://www.ietf.org/mailman/listinfo/dc 

--------------------------------------------------------
ZTE Information Security Notice: The information contained in this mail is solely property of the sender's organization. This mail communication is confidential. Recipients named above are obligated to maintain secrecy and are not permitted to disclose the contents of this communication to others.
This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the originator of the message. Any views expressed in this message are those of the individual sender.
This message has been scanned for viruses and Spam by ZTE Anti-Spam system.

--=_alternative 002C35F64825798B_=--


From vishwas.ietf@gmail.com  Fri Jan 20 06:31:11 2012
Return-Path: <vishwas.ietf@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id EE89921F859F for <dc@ietfa.amsl.com>; Fri, 20 Jan 2012 06:31:10 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.599
X-Spam-Level: 
X-Spam-Status: No, score=-3.599 tagged_above=-999 required=5 tests=[AWL=0.000,  BAYES_00=-2.599, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id hUM9fcEkMJ3x for <dc@ietfa.amsl.com>; Fri, 20 Jan 2012 06:31:10 -0800 (PST)
Received: from mail-tul01m020-f172.google.com (mail-tul01m020-f172.google.com [209.85.214.172]) by ietfa.amsl.com (Postfix) with ESMTP id E484721F8596 for <dc@ietf.org>; Fri, 20 Jan 2012 06:31:09 -0800 (PST)
Received: by obbwc12 with SMTP id wc12so947570obb.31 for <dc@ietf.org>; Fri, 20 Jan 2012 06:31:08 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=z1kDO7NjTPtvQn8XFc+o+4DtKpTYWY+XlyYcOi2wvLI=; b=w0XhX6BboGjwrg681zyRpsJsaz/toRehGcaIg7YotD/2pbdOHFImhRlYzDG/bNs7IJ Rv1OO8NNTAdtynnvmTcRh0FkaIIYckfihTJvZM4tka1LCZAdgwX5X1/GCGstEvwowcer q5R+hiLCAwM15vBJK2hNcU9/6WMTcLsNg+PSM=
MIME-Version: 1.0
Received: by 10.182.212.105 with SMTP id nj9mr26799434obc.62.1327069868526; Fri, 20 Jan 2012 06:31:08 -0800 (PST)
Received: by 10.182.28.196 with HTTP; Fri, 20 Jan 2012 06:31:08 -0800 (PST)
In-Reply-To: <OF4387C31B.7EB661BF-ON4825798B.002973A7-4825798B.002C35F9@zte.com.cn>
References: <OF4387C31B.7EB661BF-ON4825798B.002973A7-4825798B.002C35F9@zte.com.cn>
Date: Fri, 20 Jan 2012 06:31:08 -0800
Message-ID: <CAOyVPHTVFT4+r9B1e66u=M5cYiKFZWxn4k0yminqpFAr_DmQCQ@mail.gmail.com>
From: Vishwas Manral <vishwas.ietf@gmail.com>
To: Lizhong Jin <lizhong.jin@zte.com.cn>
Content-Type: text/plain; charset=ISO-8859-1
Cc: jmh@joelhalpern.com, adalela@cisco.com, dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 20 Jan 2012 14:31:11 -0000

Hi Lizhong,

For most networking technologies I know of, we interact with the neighboring
routers/edge router, which in turn can propagate information through
the remaining routers, if required.

Thanks,
Vishwas

On 1/20/12, Lizhong Jin <lizhong.jin@zte.com.cn> wrote:
> Hi Vishwas and all,
> EVB-like technologies currently focus only on the interaction between
> the hypervisor and the ToR. The network control plane may not be located
> only on the ToR. What's more, the interaction may happen not between the
> hypervisor and its directly connected ToR, but between the hypervisor
> and a ToR at another site.
>
> I agree that hypervisor-based and network-based encapsulations are quite
> different. But we could consider a design in which the ToR has the same
> control plane as the hypervisor. Then communication between a VM and a
> physical server would use the same control plane.
>
> Regards
> Lizhong
>
>
>
> --------------------------------------------------------------------------------
>
> From: <vishwas.ietf at gmail.com>
> To: "Joel M. Halpern" <jmh at joelhalpern.com>, "Ashish Dalela (adalela)"
> <adalela at cisco.com>
> Cc: dc at ietf.org
> Date: Thu, 19 Jan 2012 22:20:13 -0800
> In-reply-to: <4F19040B.7000901 at joelhalpern.com>
> List-id: IETF Data Center Mailing List <dc.ietf.org>
>
> --------------------------------------------------------------------------------
> Hi Ashish,
>
> If you look at technologies like EVB you will see the interaction between
> the hypervisor and the network control plane.
>
> _Vishwas
>
>
>
>
> -- Sent from my HP TouchPad
>
> --------------------------------------------------------------------------------
> On Jan 19, 2012 10:05 PM, Joel M. Halpern <jmh at joelhalpern.com> wrote:
> Several of the proposals use mechanisms that are equally applicable. For
> a general description of one class of such approaches, look at the dcop
> draft Warren Kumari and I wrote. I am sure that there are other classes
> as well.
>
> Yours,
> Joel
>
> On 1/20/2012 12:52 AM, Ashish Dalela (adalela) wrote:
>> Joel,
>>
>> Hypervisor control is in the hypervisor manager. Switch control is in
>> the network control plane. These are parallel silos, that don't
>> interact.
>>
>> Either the hypervisor manager defers control to the network control
>> plane, or the switches defer control to the manager. Or, some new third
>> entity emerges to control both, and both hypervisor and switch defer
>> control to that entity.
>>
>> We can't have separate control models and expect this to work in the
>> same way.
>>
>> Which of these (or other) models do you think presents a reasonable
>> approach to reconcile hypervisor control and network control?
>>
>> Thanks, Ashish
>>
>> -----Original Message-----
>> From: Joel M. Halpern [mailto:jmh at joelhalpern.com]
>> Sent: Friday, January 20, 2012 10:58 AM
>> To: Ashish Dalela (adalela)
>> Cc: dc at ietf.org
>> Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
>>
>> While one can construct strawman hypotheses in which there are reasons
>> to have different tunnel control protocols depending upon end-point
>> location, equally one can construct reasonable hypotheses in which the
>> same protocol mechanisms work whether the end-point is at the VM, the
>> Hypervisor, the ToR switch, or an aggregation switch.
>>
>>
>> Yours,
>> Joel
>>
>> On 1/20/2012 12:07 AM, Ashish Dalela (adalela) wrote:
>>>
>>> I would not arrive at the conclusion that hypervisor work should or
>>> should not be done in IETF. That's a separate question. VXLAN and NVGRE
>>> are hypervisor based approaches. But, they don't have control planes
>>> (yet). My point is that finding a common map-encap scheme isn't that
>>> hard. The harder part is how to make the hypervisor and network based
>>> map-encap *control planes* work the same way.
>>>
>>> If they don't work the same way, then L2-in-L2, L2-in-L3, L3-in-L3 has
>>> a network flavor and a hypervisor flavor.
>>>
>>> Thanks, Ashish
>>>
>>>
>>> -----Original Message-----
>>> From: Melinda Shore [mailto:melinda.shore at gmail.com]
>>> Sent: Friday, January 20, 2012 10:06 AM
>>> To: Ashish Dalela (adalela)
>>> Cc: dc at ietf.org
>>> Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
>>>
>>> On 1/19/12 7:26 PM, Ashish Dalela (adalela) wrote:
>>>> Bandwidth needs, but they have the
>>>> same tunnel. How do I distinguish between them based on the tunnel? In
>>>> fact, if the tenant isolation is in the hypervisor, then the underlying
>>>> network has no clue which tenant needs what policy.
>>>
>>> Well, that's not true. In the case of IPSec we've got SPIs, and
>>> there are similar demultiplexing mechanisms in other technologies.
>>>
>>> But frankly I think that if you're going to distinguish between
>>> tunnel endpoints in the hypervisor and tunnel endpoints in other
>>> sorts of network devices I think you're going to be somewhat
>>> hard-pressed to make the case for working on the former in
>>> the IETF.
>>>
>>> Melinda
>>> _______________________________________________
>>>

From adalela@cisco.com  Fri Jan 20 08:32:24 2012
Return-Path: <adalela@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 8344E21F85C0 for <dc@ietfa.amsl.com>; Fri, 20 Jan 2012 08:32:24 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.451
X-Spam-Level: 
X-Spam-Status: No, score=-2.451 tagged_above=-999 required=5 tests=[AWL=0.148,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id K03U6gKv9NWm for <dc@ietfa.amsl.com>; Fri, 20 Jan 2012 08:32:23 -0800 (PST)
Received: from bgl-iport-2.cisco.com (bgl-iport-2.cisco.com [72.163.197.26]) by ietfa.amsl.com (Postfix) with ESMTP id 024C521F85B8 for <dc@ietf.org>; Fri, 20 Jan 2012 08:32:22 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=adalela@cisco.com; l=1251; q=dns/txt; s=iport; t=1327077143; x=1328286743; h=mime-version:content-transfer-encoding:subject:date: message-id:in-reply-to:references:from:to; bh=rzumigU1Er+YkVV7TwW7grM4ZIzow2UwrhRTdssgyFk=; b=Rmlp2U5SpCyFh/VNfLzrH8ExC7rpfokgcWK22UH8E4dglcC+QPdK1ky4 1gaxTrXjB2PnZTLqPxV0O+YC075qnQC3j7iyDNPbXi2FIL7nyp1s6RcJT DT5qEH+YHc9v3idSu1EAofKtu88gNdlxsHa3+VjbEC8rI95N4CyKUddO6 o=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: Ap8EAOWVGU9Io8UY/2dsb2JhbABDrw2BcgEBAQQBAQEPAR0KNBcEAgEIEQQBAQEKBhcBBgEmHwkIAQEEAQoICBMHh2KaDQGePQSLQ2MEiDqfMg
X-IronPort-AV: E=Sophos;i="4.71,543,1320624000";  d="scan'208";a="3846353"
Received: from vla196-nat.cisco.com (HELO bgl-core-4.cisco.com) ([72.163.197.24]) by bgl-iport-2.cisco.com with ESMTP; 20 Jan 2012 16:32:21 +0000
Received: from xbh-bgl-411.cisco.com (xbh-bgl-411.cisco.com [72.163.129.201]) by bgl-core-4.cisco.com (8.14.3/8.14.3) with ESMTP id q0KGWLpl022824; Fri, 20 Jan 2012 16:32:21 GMT
Received: from xmb-bgl-416.cisco.com ([72.163.129.212]) by xbh-bgl-411.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Fri, 20 Jan 2012 22:02:20 +0530
X-MimeOLE: Produced By Microsoft Exchange V6.5
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: quoted-printable
Date: Fri, 20 Jan 2012 22:02:19 +0530
Message-ID: <618BE8B40039924EB9AED233D4A09C5102CB2524@XMB-BGL-416.cisco.com>
In-Reply-To: <4F19034E.1070802@gmail.com>
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [dc] draft-khasnabish-vmmi-problems-00.txt
Thread-Index: AczXOQwOm5R89m7gR8qEThF3S4Z0DgAVjY1Q
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com><CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com><201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com><CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com><201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com>	<1326989277.2513.4.camel@ecliptic.extremenetworks.com>	<618BE8B40039924EB9AED233D4A09C5102CB2291@XMB-BGL-416.cisco.com><406B8B5D-E1E5-4DF4-8DE2-D7D2A699430A@asgaard.org>	<4F18CE61.6030002@gmail.com>	<618BE8B40039924EB9AED233D4A09C5102CB2330@XMB-BGL-416.cisco.com>	<4F18EF4A.3060308@gmail.com>	<618BE8B40039924EB9AED233D4A09C5102CB234C@XMB-BGL-416.cisco.com>	<4F18FB72.2090900@joelhalpern.com><618BE8B40039924EB9AED233D4A09C5102CB2380@XMB-BGL-416.cisco.com> <4F19034E.1070802@gmail.com>
From: "Ashish Dalela (adalela)" <adalela@cisco.com>
To: "Melinda Shore" <melinda.shore@gmail.com>, <dc@ietf.org>
X-OriginalArrivalTime: 20 Jan 2012 16:32:20.0838 (UTC) FILETIME=[14EFA060:01CCD791]
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 20 Jan 2012 16:32:24 -0000

Let's assume we are building protocols for hypervisors. Can you suggest
what type of protocols will allow a hypervisor-based solution to interact
with a non-hypervisor-based server? I.e., encap/decap done in the
hypervisor is not understood by the non-virtualized server.

Thanks, Ashish


-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
Melinda Shore
Sent: Friday, January 20, 2012 11:32 AM
To: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt

On 1/19/12 8:52 PM, Ashish Dalela (adalela) wrote:
> Hypervisor control is in the hypervisor manager. Switch control is in
> the network control plane. These are parallel silos, that don't
> interact.

It seems to me that you're implying that we can't put protocols on
the hypervisor - is that correct?

I'm still having a hard time understanding your argument.  Can you
talk about specific information elements that you'd need for a
hypervisor but not a VPN concentrator (or vice versa), or protocol
elements ditto?  I'd find it helpful if you could illustrate what
the issue is.

Melinda
_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From melinda.shore@gmail.com  Fri Jan 20 11:28:24 2012
Return-Path: <melinda.shore@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 9375621F8685 for <dc@ietfa.amsl.com>; Fri, 20 Jan 2012 11:28:24 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.229
X-Spam-Level: 
X-Spam-Status: No, score=-3.229 tagged_above=-999 required=5 tests=[AWL=0.370,  BAYES_00=-2.599, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id CiK7uqjcvrjD for <dc@ietfa.amsl.com>; Fri, 20 Jan 2012 11:28:24 -0800 (PST)
Received: from mail-gy0-f182.google.com (mail-gy0-f182.google.com [209.85.160.182]) by ietfa.amsl.com (Postfix) with ESMTP id 55D9C21F8650 for <dc@ietf.org>; Fri, 20 Jan 2012 11:28:22 -0800 (PST)
Received: by ghy10 with SMTP id 10so568391ghy.27 for <dc@ietf.org>; Fri, 20 Jan 2012 11:28:21 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:content-type:content-transfer-encoding; bh=7eS0BDfuHeTAF4tWE0ZRMiCB8MOsF7jqSI1Kue+5AoA=; b=HC8s3KNu0CHPUYZqO5rDeHa3zrK2S+SCjV3snICIJyL2bLj+Sj60Sz7AXX+3IkI23S PZVylX1NSHIbFMlLyb7ZxW34cq8IuC51K+8qc5vGGikCTUtFn6Zoh1OWSfsP9VkuYa3u Rk3E7aNUllO2HGwq32iHTUeM2KxNi/7GI5RrA=
Received: by 10.101.18.12 with SMTP id v12mr3617806ani.59.1327087701899; Fri, 20 Jan 2012 11:28:21 -0800 (PST)
Received: from [137.229.12.236] (drake.swits.alaska.edu. [137.229.12.236]) by mx.google.com with ESMTPS id y14sm10617207anm.10.2012.01.20.11.28.20 (version=SSLv3 cipher=OTHER); Fri, 20 Jan 2012 11:28:21 -0800 (PST)
Message-ID: <4F19C061.4060609@gmail.com>
Date: Fri, 20 Jan 2012 10:28:33 -0900
From: Melinda Shore <melinda.shore@gmail.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.17) Gecko/20110424 Thunderbird/3.1.10
MIME-Version: 1.0
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com><CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com><201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com><CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com><201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com>	<1326989277.2513.4.camel@ecliptic.extremenetworks.com>	<618BE8B40039924EB9AED233D4A09C5102CB2291@XMB-BGL-416.cisco.com><406B8B5D-E1E5-4DF4-8DE2-D7D2A699430A@asgaard.org>	<4F18CE61.6030002@gmail.com>	<618BE8B40039924EB9AED233D4A09C5102CB2330@XMB-BGL-416.cisco.com>	<4F18EF4A.3060308@gmail.com>	<618BE8B40039924EB9AED233D4A09C5102CB234C@XMB-BGL-416.cisco.com>	<4F18FB72.2090900@joelhalpern.com><618BE8B40039924EB9AED233D4A09C5102CB2380@XMB-BGL-416.cisco.com> <4F19034E.1070802@gmail.com> <618BE8B40039924EB9AED233D4A09C5102CB2524@XMB-BGL-416.cisco.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102CB2524@XMB-BGL-416.cisco.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 20 Jan 2012 19:28:24 -0000

On 01/20/2012 07:32 AM, Ashish Dalela (adalela) wrote:
> Let's assume we are building protocols for hypervisors.

This may be the source of some of the confusion.  This is
a very vertical framing of the question, when more typically
we're dealing with horizontal communication.   Network
layering is not as clean as we'd necessarily like but what
we're usually dealing with is protocol peers talking
to each other.  That is to say, we're not building protocols
for *hypervisors*, we're building protocols for some functional
component of the network.  From what I've seen a bunch of us
are unclear on why you think that a hypervisor cannot be
where that component lives.

Melinda

From narten@us.ibm.com  Fri Jan 20 13:38:08 2012
Return-Path: <narten@us.ibm.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 0711421F864F for <dc@ietfa.amsl.com>; Fri, 20 Jan 2012 13:38:08 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -108.467
X-Spam-Level: 
X-Spam-Status: No, score=-108.467 tagged_above=-999 required=5 tests=[AWL=2.132, BAYES_00=-2.599, RCVD_IN_DNSWL_HI=-8, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id O76R5vwiJjV7 for <dc@ietfa.amsl.com>; Fri, 20 Jan 2012 13:38:07 -0800 (PST)
Received: from e2.ny.us.ibm.com (e2.ny.us.ibm.com [32.97.182.142]) by ietfa.amsl.com (Postfix) with ESMTP id 4FCDC21F864D for <dc@ietf.org>; Fri, 20 Jan 2012 13:38:07 -0800 (PST)
Received: from /spool/local by e2.ny.us.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for <dc@ietf.org> from <narten@us.ibm.com>; Fri, 20 Jan 2012 16:38:05 -0500
Received: from d01dlp02.pok.ibm.com (9.56.224.85) by e2.ny.us.ibm.com (192.168.1.102) with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted;  Fri, 20 Jan 2012 16:37:49 -0500
Received: from d01relay03.pok.ibm.com (d01relay03.pok.ibm.com [9.56.227.235]) by d01dlp02.pok.ibm.com (Postfix) with ESMTP id 095646E805E for <dc@ietf.org>; Fri, 20 Jan 2012 16:37:46 -0500 (EST)
Received: from d01av03.pok.ibm.com (d01av03.pok.ibm.com [9.56.224.217]) by d01relay03.pok.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id q0KLaTOe270310 for <dc@ietf.org>; Fri, 20 Jan 2012 16:36:29 -0500
Received: from d01av03.pok.ibm.com (loopback [127.0.0.1]) by d01av03.pok.ibm.com (8.14.4/8.13.1/NCO v10.0 AVout) with ESMTP id q0KLaSF8023193 for <dc@ietf.org>; Fri, 20 Jan 2012 19:36:29 -0200
Received: from cichlid.raleigh.ibm.com (sig-9-65-224-224.mts.ibm.com [9.65.224.224]) by d01av03.pok.ibm.com (8.14.4/8.13.1/NCO v10.0 AVin) with ESMTP id q0KLaRQM023134 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Fri, 20 Jan 2012 19:36:28 -0200
Received: from cichlid.raleigh.ibm.com (localhost [127.0.0.1]) by cichlid.raleigh.ibm.com (8.14.5/8.12.5) with ESMTP id q0KLaQlt007105; Fri, 20 Jan 2012 16:36:26 -0500
Message-Id: <201201202136.q0KLaQlt007105@cichlid.raleigh.ibm.com>
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
In-reply-to: <618BE8B40039924EB9AED233D4A09C5102CB2524@XMB-BGL-416.cisco.com>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com><CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com><201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com><CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com><201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com> <1326989277.2513.4.camel@ecliptic.extremenetworks.com> <618BE8B40039924EB9AED233D4A09C5102CB2291@XMB-BGL-416.cisco.com><406B8B5D-E1E5-4DF4-8DE2-D7D2A699430A@asgaard.org> <4F18CE61.6030002@gmail.com> <618BE8B40039924EB9AED233D4A09C5102CB2330@XMB-BGL-416.cisco.com> <4F18EF4A.3060308@gmail.com> <618BE8B40039924EB9AED233D4A09C5102CB234C@XMB-BGL-416.cisco.com> <4F18FB72.2090900@joelhalpern.com><618BE8B40039924EB9AED233D4A09C5102CB2380@XMB-BGL-416.cisco.com> <4F19034E.1070802@gmail.com> <618BE8B40039924EB9AED233D4A09C5102CB2524@XMB-BGL-416.cisco.com>
Comments: In-reply-to "Ashish Dalela (adalela)" <adalela@cisco.com> message dated "Fri, 20 Jan 2012 22:02:19 +0530."
Date: Fri, 20 Jan 2012 16:36:26 -0500
From: Thomas Narten <narten@us.ibm.com>
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 12012021-5112-0000-0000-0000043FBAE3
Cc: Melinda Shore <melinda.shore@gmail.com>, dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 20 Jan 2012 21:38:08 -0000

Ashish,

> Let's assume we are building protocols for hypervisors. Can you suggest
> what type of protocols will allow a hypervisor-based solution to interact
> with a non-hypervisor-based server? I.e., encap/decap done in the
> hypervisor is not understood by the non-virtualized server.

In which case, in something like NVO3/VXLAN/NVGRE, the switch that the
physical server connects to can do the encap/decap.
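To make the encap/decap being discussed concrete, here is a minimal sketch of the VXLAN header handling (per RFC 7348) that such a first-hop switch or hypervisor would perform. The function names and structure are illustrative only, not taken from any draft or implementation:

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP destination port for VXLAN

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header (RFC 7348) to an inner L2 frame.

    Header layout: flags byte (0x08 = VNI present), 3 reserved bytes,
    24-bit VNI, 1 reserved byte.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # Pack flags, skip 3 reserved bytes, then VNI shifted past the
    # trailing reserved byte.
    header = struct.pack("!B3xI", 0x08, vni << 8)
    return header + inner_frame

def vxlan_decap(packet: bytes) -> tuple[int, bytes]:
    """Strip the VXLAN header, returning (vni, inner_frame)."""
    flags, word = struct.unpack("!B3xI", packet[:8])
    if flags != 0x08:
        raise ValueError("I flag not set; no valid VNI")
    return word >> 8, packet[8:]
```

Whether this runs on a hypervisor vswitch or on the top-of-rack switch in front of a physical server, the wire format is the same, which is the point: the non-virtualized server never needs to understand it.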

But the kind of question you are asking above about protocols
completely misses the point.

We are not building "protocols for hypervisors". Hopefully, at some
point we talk about building "solutions" to "real concrete
problems". One of the requirements for such a solution might be that
hypervisors can participate in the protocol, and that non-virtualized
servers are not excluded (and indeed that is what the NVO3 problem
statement talks about). If so, those sorts of requirements get factored
in. But the key thing is not "hypervisors" but how well a proposed
solution addresses the problem and all of its associated requirements.

<broken-record-mode>

So, first, we articulate a problem or pain point that we need a fix
for. Then we start talking about requirements a workable solution must
address, then mechanisms that address the problem.

Talking generically about a "hypervisor based solution" needing to talk
to "non-hypervisor based servers" seems like a pointless exercise. At
least not without being able to answer the "solution to what?" part of
the question.

</broken-record-mode>

Thomas


From vishwas.ietf@gmail.com  Fri Jan 20 15:18:18 2012
Return-Path: <vishwas.ietf@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id D609B21F85F0 for <dc@ietfa.amsl.com>; Fri, 20 Jan 2012 15:18:18 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.599
X-Spam-Level: 
X-Spam-Status: No, score=-3.599 tagged_above=-999 required=5 tests=[AWL=0.000,  BAYES_00=-2.599, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id o7zKyPz1SlTS for <dc@ietfa.amsl.com>; Fri, 20 Jan 2012 15:18:18 -0800 (PST)
Received: from mail-tul01m020-f172.google.com (mail-tul01m020-f172.google.com [209.85.214.172]) by ietfa.amsl.com (Postfix) with ESMTP id 8113121F85EF for <dc@ietf.org>; Fri, 20 Jan 2012 15:18:15 -0800 (PST)
Received: by obbwc12 with SMTP id wc12so1593405obb.31 for <dc@ietf.org>; Fri, 20 Jan 2012 15:18:15 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=k6HqxlRYLZWgiBJhmioo2xGG7j4G+e6oxs39TSnjdD8=; b=IEGqjexQ1ARHbGbqLOMh7K1J+D2MD8lphyWly2KVrqxx2Sk2DS7QNmWG8xlOpzVkeT KpANjQ8HW0Smg7d/S/VEGI1oZ6n7GMz2gwc0D17t9m1y1/7vadjn+04Dn4dyXwSyiW90 U7IgWocDAOrudcPm5QfU49p3L6blTJ8csvza8=
MIME-Version: 1.0
Received: by 10.182.75.65 with SMTP id a1mr28329766obw.32.1327101495223; Fri, 20 Jan 2012 15:18:15 -0800 (PST)
Received: by 10.182.28.196 with HTTP; Fri, 20 Jan 2012 15:18:15 -0800 (PST)
In-Reply-To: <4F19034E.1070802@gmail.com>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com> <CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com> <201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com> <CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com> <201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com> <1326989277.2513.4.camel@ecliptic.extremenetworks.com> <618BE8B40039924EB9AED233D4A09C5102CB2291@XMB-BGL-416.cisco.com> <406B8B5D-E1E5-4DF4-8DE2-D7D2A699430A@asgaard.org> <4F18CE61.6030002@gmail.com> <618BE8B40039924EB9AED233D4A09C5102CB2330@XMB-BGL-416.cisco.com> <4F18EF4A.3060308@gmail.com> <618BE8B40039924EB9AED233D4A09C5102CB234C@XMB-BGL-416.cisco.com> <4F18FB72.2090900@joelhalpern.com> <618BE8B40039924EB9AED233D4A09C5102CB2380@XMB-BGL-416.cisco.com> <4F19034E.1070802@gmail.com>
Date: Fri, 20 Jan 2012 15:18:15 -0800
Message-ID: <CAOyVPHTbxB=QYC3Qw0ybL=5RN7VefSENV4iiBBOpXbCn58oi=Q@mail.gmail.com>
From: Vishwas Manral <vishwas.ietf@gmail.com>
To: Melinda Shore <melinda.shore@gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Cc: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 20 Jan 2012 23:18:19 -0000

On 1/19/12, Melinda Shore <melinda.shore@gmail.com> wrote:
> On 1/19/12 8:52 PM, Ashish Dalela (adalela) wrote:
>> Hypervisor control is in the hypervisor manager. Switch control is in
>> the network control plane. These are parallel silos, that don't
>> interact.
>
> It seems to me that you're implying that we can't put protocols on
> the hypervisor - is that correct?
Hi Melinda,

An interesting thing to note is that the more the functionality you
put in the hypervisor, the more you stress the single point of failure
in the virtualized system.

Thanks,
Vishwas

From melinda.shore@gmail.com  Fri Jan 20 15:30:53 2012
Return-Path: <melinda.shore@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id C2C3821F8548 for <dc@ietfa.amsl.com>; Fri, 20 Jan 2012 15:30:53 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.322
X-Spam-Level: 
X-Spam-Status: No, score=-3.322 tagged_above=-999 required=5 tests=[AWL=0.277,  BAYES_00=-2.599, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id D5VqxTrqYUpa for <dc@ietfa.amsl.com>; Fri, 20 Jan 2012 15:30:53 -0800 (PST)
Received: from mail-gx0-f172.google.com (mail-gx0-f172.google.com [209.85.161.172]) by ietfa.amsl.com (Postfix) with ESMTP id 0898021F852C for <dc@ietf.org>; Fri, 20 Jan 2012 15:30:52 -0800 (PST)
Received: by ggnq4 with SMTP id q4so31334ggn.31 for <dc@ietf.org>; Fri, 20 Jan 2012 15:30:52 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=message-id:date:from:user-agent:mime-version:to:cc:subject :references:in-reply-to:content-type:content-transfer-encoding; bh=ZG2xxX4KWwpk1xRTLQxdFzX4XT+XMdwbTP1knfiqQzc=; b=K1t/K7wqfGftfSS2YLImhQzNtjYClx4CZMXV6zOtUA/V0mn+U9WecpIu01vFLrfXHS /OwLBnVKchBCPVHHQ1KvBSW5b2teIewTrK/jGrYfjyrtUCEj4TvvDFd+FA+y53jClhf3 B+1lJXacgMlSC4nuPmyDqgdWL8m/NKRQ/m+gQ=
Received: by 10.101.141.24 with SMTP id t24mr14614985ann.52.1327102252648; Fri, 20 Jan 2012 15:30:52 -0800 (PST)
Received: from [137.229.12.236] (drake.swits.alaska.edu. [137.229.12.236]) by mx.google.com with ESMTPS id g32sm11699531ann.19.2012.01.20.15.30.51 (version=SSLv3 cipher=OTHER); Fri, 20 Jan 2012 15:30:52 -0800 (PST)
Message-ID: <4F19F939.2020804@gmail.com>
Date: Fri, 20 Jan 2012 14:31:05 -0900
From: Melinda Shore <melinda.shore@gmail.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.17) Gecko/20110424 Thunderbird/3.1.10
MIME-Version: 1.0
To: Vishwas Manral <vishwas.ietf@gmail.com>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com>	<CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com>	<201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com>	<CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com>	<201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com>	<1326989277.2513.4.camel@ecliptic.extremenetworks.com>	<618BE8B40039924EB9AED233D4A09C5102CB2291@XMB-BGL-416.cisco.com>	<406B8B5D-E1E5-4DF4-8DE2-D7D2A699430A@asgaard.org>	<4F18CE61.6030002@gmail.com>	<618BE8B40039924EB9AED233D4A09C5102CB2330@XMB-BGL-416.cisco.com>	<4F18EF4A.3060308@gmail.com>	<618BE8B40039924EB9AED233D4A09C5102CB234C@XMB-BGL-416.cisco.com>	<4F18FB72.2090900@joelhalpern.com>	<618BE8B40039924EB9AED233D4A09C5102CB2380@XMB-BGL-416.cisco.com>	<4F19034E.1070802@gmail.com> <CAOyVPHTbxB=QYC3Qw0ybL=5RN7VefSENV4iiBBOpXbCn58oi=Q@mail.gmail.com>
In-Reply-To: <CAOyVPHTbxB=QYC3Qw0ybL=5RN7VefSENV4iiBBOpXbCn58oi=Q@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 20 Jan 2012 23:30:53 -0000

On 01/20/2012 02:18 PM, Vishwas Manral wrote:
> An interesting thing to note is that the more the functionality you
> put in the hypervisor, the more you stress the single point of failure
> in the virtualized system.

Well, there are a few ways to look at it.  For example,
the fewer components you've got the larger the mean time
between failures.  But aside from that it's been a
general rule of thumb that you want to minimize the
impact of failed components on non-failed components
(the fate sharing principle).

At any rate the hypervisor (at least the ones with which
I'm familiar) basically *are* network devices - they
function as a switch, or even a NAT.  If you're going to
suggest that they can't be used to terminate control plane
sessions I hope there's a more compelling reason for it
than what has been offered so far.

Melinda

From cdl@asgaard.org  Fri Jan 20 15:41:49 2012
Return-Path: <cdl@asgaard.org>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 7F64C21F8532 for <dc@ietfa.amsl.com>; Fri, 20 Jan 2012 15:41:49 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -6.492
X-Spam-Level: 
X-Spam-Status: No, score=-6.492 tagged_above=-999 required=5 tests=[AWL=0.107,  BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id A1KpSE+GEjg6 for <dc@ietfa.amsl.com>; Fri, 20 Jan 2012 15:41:48 -0800 (PST)
Received: from asgaard.org (odin.asgaard.org [204.29.151.68]) by ietfa.amsl.com (Postfix) with ESMTP id A8E7E21F8533 for <dc@ietf.org>; Fri, 20 Jan 2012 15:41:48 -0800 (PST)
Received: from localhost (localhost [127.0.0.1]) by asgaard.org (Postfix) with ESMTP id BB12EAF898B; Fri, 20 Jan 2012 23:41:47 +0000 (UTC)
X-Virus-Scanned: amavisd-new at asgaard.org
Received: from asgaard.org ([127.0.0.1]) by localhost (odin.asgaard.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id f6Hocwvpkwum; Fri, 20 Jan 2012 23:41:44 +0000 (UTC)
Received: from fenrir.asgaard.org (50-76-34-185-ip-static.hfc.comcastbusiness.net [50.76.34.185]) by asgaard.org (Postfix) with ESMTPSA id C8042AF897D; Fri, 20 Jan 2012 23:41:44 +0000 (UTC)
Mime-Version: 1.0 (Apple Message framework v1251.1)
Content-Type: multipart/signed; boundary="Apple-Mail=_A3593B48-E551-475F-A824-5AF0FA0A3781"; protocol="application/pgp-signature"; micalg=pgp-sha1
From: Christopher LILJENSTOLPE <cdl@asgaard.org>
In-Reply-To: <4F19F939.2020804@gmail.com>
Date: Fri, 20 Jan 2012 15:41:40 -0800
Message-Id: <DF0D6664-9FD5-4EF0-A03F-86C1921D9D01@asgaard.org>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com>	<CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com>	<201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com>	<CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com>	<201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com>	<1326989277.2513.4.camel@ecliptic.extremenetworks.com>	<618BE8B40039924EB9AED233D4A09C5102CB2291@XMB-BGL-416.cisco.com>	<406B8B5D-E1E5-4DF4-8DE2-D7D2A699430A@asgaard.org>	<4F18CE61.6030002@gmail.com>	<618BE8B40039924EB9AED233D4A09C5102CB2330@XMB-BGL-416.cisco.com>	<4F18EF4A.3060308@gmail.com>	<618BE8B40039924EB9AED233D4A09C5102CB234C@XMB-BGL-416.cisco.com>	<4F18FB72.2090900@joelhalpern.com>	<618BE8B40039924EB9AED233D4A09C5102CB2380@XMB-BGL-416.cisco.com>	<4F19034E.1070802@gmail.com> <CAOyVPHTbxB=QYC3Qw0ybL=5RN7VefSENV4iiBBOpXbCn58oi=Q@mail.gmail.com> <4F19F939.2020804@gmail.com>
To: Melinda Shore <melinda.shore@gmail.com>
X-Mailer: Apple Mail (2.1251.1)
Cc: Vishwas Manral <vishwas.ietf@gmail.com>, dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 20 Jan 2012 23:41:49 -0000

--Apple-Mail=_A3593B48-E551-475F-A824-5AF0FA0A3781
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=utf-8

Greetings,

	Another way of looking at it is a hypervisor is really an
operating system distribution.  Many of the hypervisors out there have
the network "switch" as a separate process (actually I believe all of
them do).  So, if we are saying that networking intel doesn't belong in
an OS distribution, that is a departure from current thinking :)

	Chris

On 20Jan2012, at 15.31, Melinda Shore wrote:

> On 01/20/2012 02:18 PM, Vishwas Manral wrote:
>> An interesting thing to note is that the more the functionality you
>> put in the hypervisor, the more you stress the single point of failure
>> in the virtualized system.
>
> Well, there are a few ways to look at it.  For example,
> the fewer components you've got the larger the mean time
> between failures.  But aside from that it's been a
> general rule of thumb that you want to minimize the
> impact of failed components on non-failed components
> (the fate sharing principle).
>
> At any rate the hypervisor (at least the ones with which
> I'm familiar) basically *are* network devices - they
> function as a switch, or even a NAT.  If you're going to
> suggest that they can't be used to terminate control plane
> sessions I hope there's a more compelling reason for it
> than what has been offered so far.
>
> Melinda
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc

--
李柯睿
Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf


--Apple-Mail=_A3593B48-E551-475F-A824-5AF0FA0A3781
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
	filename=signature.asc
Content-Type: application/pgp-signature;
	name=signature.asc
Content-Description: Message signed with OpenPGP using GPGMail

-----BEGIN PGP SIGNATURE-----
Version: GnuPG/MacGPG2 v2.0.17 (Darwin)

iQEcBAEBAgAGBQJPGfu3AAoJEGmx2Mt/+Iw/ZmwH/3ll1G5gPmld2r6YYAF5Qi26
d2onuupi0mkyblGGutO2qPZnig0VZorV9IwmXlsC4z6kwI0DwIPwVG0nJ9ij+wOG
T+XnFFOH81m3UuHtKHqRTx1gX9cyXRL7eBZ51G2a2EoozUWbwbOo8xG2EslJKVFz
5jczv7D5zF6k9pjT1O9oXuVLDR61y3QBzhUlRQ/QurZRp+NFOrW2bRjgY0esCWCi
87rCX/q7BbNPPcw4bHv33y8Gv5R/g3tPbKZSr8olBaG95gQXBzPRL6bWH5s62Nfa
Ywvy2c7VUkaF36RToh47zGxEiwf+6r5yMfv/furjhm/MspUEyI8HDrZtiMB5xVM=
=rLo9
-----END PGP SIGNATURE-----

--Apple-Mail=_A3593B48-E551-475F-A824-5AF0FA0A3781--

From vishwas.ietf@gmail.com  Fri Jan 20 15:47:19 2012
Return-Path: <vishwas.ietf@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 3C3CF21F8566 for <dc@ietfa.amsl.com>; Fri, 20 Jan 2012 15:47:19 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.599
X-Spam-Level: 
X-Spam-Status: No, score=-3.599 tagged_above=-999 required=5 tests=[AWL=0.000,  BAYES_00=-2.599, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id TEZWBw6ZabQG for <dc@ietfa.amsl.com>; Fri, 20 Jan 2012 15:47:18 -0800 (PST)
Received: from mail-tul01m020-f172.google.com (mail-tul01m020-f172.google.com [209.85.214.172]) by ietfa.amsl.com (Postfix) with ESMTP id 6625921F855A for <dc@ietf.org>; Fri, 20 Jan 2012 15:47:18 -0800 (PST)
Received: by obbwc12 with SMTP id wc12so1616961obb.31 for <dc@ietf.org>; Fri, 20 Jan 2012 15:47:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; bh=VoHXtSCrKQAGYSsMeLXMqml4hXtwX4IoPRzPzv26buA=; b=UhKXQJwsz1GRgtprqMdcO8k9XBDaMzpyVAqXtH8iyF6FgP5pBdBzu0u4Hw3RrrBmvY 6mVzMrrL3AWmRCM3DkRQpBBxUF89vv4wS/0BfNAERiKlMRnV/63VqvARTduHJE69wW+h TqWTD5t2SRYbzRYjVgr0XhkgY/h4VktrV8S7I=
MIME-Version: 1.0
Received: by 10.182.222.102 with SMTP id ql6mr28482170obc.2.1327103230722; Fri, 20 Jan 2012 15:47:10 -0800 (PST)
Received: by 10.182.28.196 with HTTP; Fri, 20 Jan 2012 15:47:10 -0800 (PST)
In-Reply-To: <DF0D6664-9FD5-4EF0-A03F-86C1921D9D01@asgaard.org>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com> <CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com> <201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com> <CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com> <201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com> <1326989277.2513.4.camel@ecliptic.extremenetworks.com> <618BE8B40039924EB9AED233D4A09C5102CB2291@XMB-BGL-416.cisco.com> <406B8B5D-E1E5-4DF4-8DE2-D7D2A699430A@asgaard.org> <4F18CE61.6030002@gmail.com> <618BE8B40039924EB9AED233D4A09C5102CB2330@XMB-BGL-416.cisco.com> <4F18EF4A.3060308@gmail.com> <618BE8B40039924EB9AED233D4A09C5102CB234C@XMB-BGL-416.cisco.com> <4F18FB72.2090900@joelhalpern.com> <618BE8B40039924EB9AED233D4A09C5102CB2380@XMB-BGL-416.cisco.com> <4F19034E.1070802@gmail.com> <CAOyVPHTbxB=QYC3Qw0ybL=5RN7VefSENV4iiBBOpXbCn58oi=Q@mail.gmail.com> <4F19F939.2020804@gmail.com> <DF0D6664-9FD5-4EF0-A03F-86C1921D9D01@asgaard.org>
Date: Fri, 20 Jan 2012 15:47:10 -0800
Message-ID: <CAOyVPHQh2yb5iP9-bH6NOzamW6FaK0cYwpfqfqns7TZVTpmY5g@mail.gmail.com>
From: Vishwas Manral <vishwas.ietf@gmail.com>
To: Christopher LILJENSTOLPE <cdl@asgaard.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
Cc: Melinda Shore <melinda.shore@gmail.com>, dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 20 Jan 2012 23:47:19 -0000

Hi Christopher,

I totally agree.

The point I was making was that the hypervisor is the single point of
failure which can cause all the guest OSes to fail. The more complex you
make the functionality, the higher the chances of failure. So we should
work with that thought in mind.

Thanks,
Vishwas

On 1/20/12, Christopher LILJENSTOLPE <cdl@asgaard.org> wrote:
> Greetings,
>
> 	Another way of looking at it is a hypervisor is really an operating system
> distribution.  Many of the hypervisors out there have the network "switch"
> as a separate process (actually I believe all of them do).  So, if we are
> saying that networking intel doesn't belong in an OS distribution, that is a
> departure from current thinking :)
>
> 	Chris
>
> On 20Jan2012, at 15.31, Melinda Shore wrote:
>
>> On 01/20/2012 02:18 PM, Vishwas Manral wrote:
>>> An interesting thing to note is that the more the functionality you
>>> put in the hypervisor, the more you stress the single point of failure
>>> in the virtualized system.
>>
>> Well, there are a few ways to look at it.  For example,
>> the fewer components you've got the larger the mean time
>> between failures.  But aside from that it's been a
>> general rule of thumb that you want to minimize the
>> impact of failed components on non-failed components
>> (the fate sharing principle).
>>
>> At any rate the hypervisor (at least the ones with which
>> I'm familiar) basically *are* network devices - they
>> function as a switch, or even a NAT.  If you're going to
>> suggest that they can't be used to terminate control plane
>> sessions I hope there's a more compelling reason for it
>> than what has been offered so far.
>>
>> Melinda
>> _______________________________________________
>> dc mailing list
>> dc@ietf.org
>> https://www.ietf.org/mailman/listinfo/dc
>
> --
> 李柯睿
> Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
> Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf
>
>

From cdl@asgaard.org  Fri Jan 20 16:06:37 2012
Return-Path: <cdl@asgaard.org>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 3F52E21F8513 for <dc@ietfa.amsl.com>; Fri, 20 Jan 2012 16:06:37 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -6.504
X-Spam-Level: 
X-Spam-Status: No, score=-6.504 tagged_above=-999 required=5 tests=[AWL=0.095,  BAYES_00=-2.599, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id HsnFEMOjeinp for <dc@ietfa.amsl.com>; Fri, 20 Jan 2012 16:06:36 -0800 (PST)
Received: from asgaard.org (odin.asgaard.org [204.29.151.68]) by ietfa.amsl.com (Postfix) with ESMTP id 50FCE21F850F for <dc@ietf.org>; Fri, 20 Jan 2012 16:06:36 -0800 (PST)
Received: from localhost (localhost [127.0.0.1]) by asgaard.org (Postfix) with ESMTP id 3555BAF8C81; Sat, 21 Jan 2012 00:06:36 +0000 (UTC)
X-Virus-Scanned: amavisd-new at asgaard.org
Received: from asgaard.org ([127.0.0.1]) by localhost (odin.asgaard.org [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id ku8xSHkklZiY; Sat, 21 Jan 2012 00:06:34 +0000 (UTC)
Received: from fenrir.asgaard.org (50-76-34-185-ip-static.hfc.comcastbusiness.net [50.76.34.185]) by asgaard.org (Postfix) with ESMTPSA id 811B7AF8C73; Sat, 21 Jan 2012 00:06:34 +0000 (UTC)
Mime-Version: 1.0 (Apple Message framework v1251.1)
Content-Type: multipart/signed; boundary="Apple-Mail=_F8AD102F-C4B6-4846-AA0B-EE1330F8F1C7"; protocol="application/pgp-signature"; micalg=pgp-sha1
From: Christopher LILJENSTOLPE <cdl@asgaard.org>
In-Reply-To: <CAOyVPHQh2yb5iP9-bH6NOzamW6FaK0cYwpfqfqns7TZVTpmY5g@mail.gmail.com>
Date: Fri, 20 Jan 2012 16:06:19 -0800
Message-Id: <FF8EC204-C4B0-4690-B692-905F672D60D3@asgaard.org>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com> <CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com> <201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com> <CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com> <201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com> <1326989277.2513.4.camel@ecliptic.extremenetworks.com> <618BE8B40039924EB9AED233D4A09C5102CB2291@XMB-BGL-416.cisco.com> <406B8B5D-E1E5-4DF4-8DE2-D7D2A699430A@asgaard.org> <4F18CE61.6030002@gmail.com> <618BE8B40039924EB9AED233D4A09C5102CB2330@XMB-BGL-416.cisco.com> <4F18EF4A.3060308@gmail.com> <618BE8B40039924EB9AED233D4A09C5102CB234C@XMB-BGL-416.cisco.com> <4F18FB72.2090900@joelhalpern.com> <618BE8B40039924EB9AED233D4A09C5102CB2380@XMB-BGL-416.cisco.com> <4F19034E.1070802@gmail.com> <CAOyVPHTbxB=QYC3Qw0ybL=5RN7VefSENV4iiBBOpXbCn58oi=Q@mail.gmail.com> <4F19F939.2020804@gmail.com> <DF0D6664-9FD5-4EF0-A03F-86C1921D9D01@asgaard.org> <CAOyVPHQh2yb5iP9-bH6NOzam W6FaK0cYwpfqfqns7TZVTpmY5g@mail.gmail.com>
To: Vishwas Manral <vishwas.ietf@gmail.com>
X-Mailer: Apple Mail (2.1251.1)
Cc: Melinda Shore <melinda.shore@gmail.com>, dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 21 Jan 2012 00:06:37 -0000

--Apple-Mail=_F8AD102F-C4B6-4846-AA0B-EE1330F8F1C7
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=utf-8

Greetings Vishwas,

	And I guess I am saying that I'm not sure I agree.  If a single
process running can kill all the other processes on a given system
(especially if it is in "user space" - which I would assume for control
plane functions), I would say that there are more substantial issues
with the architecture of that particular VM distribution.  Similar
answer if someone actually put a control-plane engine (think routing
protocols) in kernel space.  We, as a standards organization, can't save
the world from less-than-intelligent developers....


	Chris

On 20Jan2012, at 15.47, Vishwas Manral wrote:

> Hi Christopher,
>=20
> I totally agree.
>=20
> The point I was making was that the hypervisor is the single point of
> failure which can cause all the guest OSes to fail. The more complex you
> make the functionality, the higher the chances of failure. So we should
> work with that thought in mind.
>=20
> Thanks,
> Vishwas
>=20
> On 1/20/12, Christopher LILJENSTOLPE <cdl@asgaard.org> wrote:
>> Greetings,
>>=20
>> 	Another way of looking at it is a hypervisor is really an operating system
>> distribution.  Many of the hypervisors out there have the network "switch"
>> as a separate process (actually I believe all of them do).  So, if we are
>> saying that networking intel doesn't belong in an OS distribution, that is a
>> departure from current thinking :)
>>=20
>> 	Chris
>>=20
>> On 20Jan2012, at 15.31, Melinda Shore wrote:
>>=20
>>> On 01/20/2012 02:18 PM, Vishwas Manral wrote:
>>>> An interesting thing to note is that the more the functionality you
>>>> put in the hypervisor, the more you stress the single point of =
failure
>>>> in the virtualized system.
>>>=20
>>> Well, there are a few ways to look at it.  For example,
>>> the fewer components you've got the larger the mean time
>>> between failures.  But aside from that it's been a
>>> general rule of thumb that you want to minimize the
>>> impact of failed components on non-failed components
>>> (the fate sharing principle).
>>>=20
>>> At any rate the hypervisor (at least the ones with which
>>> I'm familiar) basically *are* network devices - they
>>> function as a switch, or even a NAT.  If you're going to
>>> suggest that they can't be used to terminate control plane
>>> sessions I hope there's a more compelling reason for it
>>> than what has been offered so far.
>>>=20
>>> Melinda
>>> _______________________________________________
>>> dc mailing list
>>> dc@ietf.org
>>> https://www.ietf.org/mailman/listinfo/dc
>>=20
>> --
>> 李柯睿
>> Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
>> Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf
>>=20
>>=20

--
李柯睿
Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf


--Apple-Mail=_F8AD102F-C4B6-4846-AA0B-EE1330F8F1C7
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
	filename=signature.asc
Content-Type: application/pgp-signature;
	name=signature.asc
Content-Description: Message signed with OpenPGP using GPGMail

-----BEGIN PGP SIGNATURE-----
Version: GnuPG/MacGPG2 v2.0.17 (Darwin)

iQEcBAEBAgAGBQJPGgGAAAoJEGmx2Mt/+Iw/NOQH/29frT5TLL4AuYsCcEE91xwn
cD0W4d22U6FD0mnIOUrTUyT3+YKDNyqCP+wCcsbhi/ROHnoyx/aNsFEFfvXqm6Ru
kLQ0dyN2eYXiiF3SYKaJSggC+e4Gx9lh8jNWSzMzBMzqtdaL0kdMhuiRnh0fXNs1
gKbPj1Zn9Nwao1rdxL2MEg/MkC5ysOnsQ7DMLzS0rOu3n8dWoLL+YWDbrluy0dfi
2YZcFahm0p7+Es1KjZSqs5ezBceTSefASgS+P+LYUjWLYNvE2NK/0owEeIp7APg4
93mgbz4yZi3ZAsCpFPhqauRsFv2i1zKZD9v30uR8IWBHZzGAxbEdH0MLO6NDoP4=
=ut2j
-----END PGP SIGNATURE-----

--Apple-Mail=_F8AD102F-C4B6-4846-AA0B-EE1330F8F1C7--

From david.black@emc.com  Fri Jan 20 17:22:54 2012
Return-Path: <david.black@emc.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 560D321F86C4 for <dc@ietfa.amsl.com>; Fri, 20 Jan 2012 17:22:54 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -109.05
X-Spam-Level: 
X-Spam-Status: No, score=-109.05 tagged_above=-999 required=5 tests=[AWL=1.549, BAYES_00=-2.599, RCVD_IN_DNSWL_HI=-8, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id T3-1zyu1dWL5 for <dc@ietfa.amsl.com>; Fri, 20 Jan 2012 17:22:53 -0800 (PST)
Received: from mexforward.lss.emc.com (mexforward.lss.emc.com [128.222.32.20]) by ietfa.amsl.com (Postfix) with ESMTP id 7AC3521F86A4 for <dc@ietf.org>; Fri, 20 Jan 2012 17:22:53 -0800 (PST)
Received: from hop04-l1d11-si01.isus.emc.com (HOP04-L1D11-SI01.isus.emc.com [10.254.111.54]) by mexforward.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q0L1MpFu003423 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Fri, 20 Jan 2012 20:22:51 -0500
Received: from mailhub.lss.emc.com (mailhub.lss.emc.com [10.254.221.253]) by hop04-l1d11-si01.isus.emc.com (RSA Interceptor); Fri, 20 Jan 2012 20:22:40 -0500
Received: from mxhub04.corp.emc.com (mxhub04.corp.emc.com [10.254.141.106]) by mailhub.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q0L1MdYx031761; Fri, 20 Jan 2012 20:22:39 -0500
Received: from mx14a.corp.emc.com ([169.254.1.99]) by mxhub04.corp.emc.com ([10.254.141.106]) with mapi; Fri, 20 Jan 2012 20:22:39 -0500
From: <david.black@emc.com>
To: <adalela@cisco.com>
Date: Fri, 20 Jan 2012 20:22:38 -0500
Thread-Topic: OVF "control plane" - Not a good idea
Thread-Index: AczW0qaLHjfaIrIARQ+n0SUnPfoMjQAPVYSwAAM0Bd8AAwrgkAAsav5Q
Message-ID: <7C4DFCE962635144B8FAE8CA11D0BF1E05A7CF1290@MX14A.corp.emc.com>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com><CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com><201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com><CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com><201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com><1326989277.2513.4.camel@ecliptic.extremenetworks.com><201201191747.q0JHlS5J015128@cichlid.raleigh.ibm.com>, <618BE8B40039924EB9AED233D4A09C5102CB2304@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A7BB90E7@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102CB2326@XMB-BGL-416.cisco.com>
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102CB2326@XMB-BGL-416.cisco.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-EMM-MHVC: 1
Cc: dc@ietf.org
Subject: [dc] OVF "control plane" - Not a good idea
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 21 Jan 2012 01:22:54 -0000

Ashish,

Unfortunately, this is digging in the "wrong place" because it recreates the
problem that OVF was designed to solve.  OVF is intended to be a self-contained
packaging and distribution format that contains everything needed to instantiate
one or more VMs.  As such, OVF can be moved by all of the protocols noted below,
plus a variety of other means, such as sneaker-net.

If OVF is insufficient for the portability use case, then I suggest going to DMTF
to work on adding what's missing instead of inventing a "control plane" that is
at odds with OVF's design intent.

Thanks,
--David
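
[Editor's note: David's point, that OVF is a self-contained, file-based package
rather than a protocol, can be sketched in code. The descriptor below is a
hypothetical, heavily stripped-down example (real OVF envelopes also carry disk
references, networks, and virtual hardware sections), and `vm_ids` is an
illustrative helper, not part of any standard API.]

```python
# Sketch: an OVF descriptor is just XML in a package, so "moving" it is file
# transfer; nothing resembling a control plane is needed to read it.
import xml.etree.ElementTree as ET

OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"

# Hypothetical, minimal descriptor for illustration only.
SAMPLE_OVF = """<?xml version="1.0"?>
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <VirtualSystem ovf:id="web-vm-01">
    <Name>web-vm-01</Name>
  </VirtualSystem>
  <VirtualSystem ovf:id="db-vm-01">
    <Name>db-vm-01</Name>
  </VirtualSystem>
</Envelope>
"""

def vm_ids(ovf_xml):
    """Return the ovf:id of each VirtualSystem in the descriptor."""
    root = ET.fromstring(ovf_xml)
    return [vs.get("{%s}id" % OVF_NS)
            for vs in root.findall("{%s}VirtualSystem" % OVF_NS)]

print(vm_ids(SAMPLE_OVF))  # -> ['web-vm-01', 'db-vm-01']
```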

> -----Original Message-----
> From: Ashish Dalela (adalela) [mailto:adalela@cisco.com]
> Sent: Thursday, January 19, 2012 11:10 PM
> To: Black, David
> Cc: dc@ietf.org
> Subject: RE: [dc] draft-khasnabish-vmmi-problems-00.txt
>=20
>=20
> >> Was that supposed to be a serious question?
>=20
> Yes, it is a serious question, because VM mobility goes beyond the VM.
>=20
> >> If it was, I suggest FTP or NFS, both of which are already used to
> move VM
> >> images in practice, and are already specified in RFCs ;-).  OVF is
> fundamentally
> >> a VM image format.
>=20
> That's one approach. Another approach is to use SOAP/REST APIs. Yet
> another one is to define a cloud control plane, that does more than just
> move VMs. E.g. when you move a VM, you have to move the firewall rules,
> the VLAN association, the bandwidth, VRF configuration, GRE tunnel
> configuration, etc.
>=20
> Thanks, Ashish
>=20
>=20
> -----Original Message-----
> From: david.black@emc.com [mailto:david.black@emc.com]
> Sent: Friday, January 20, 2012 8:18 AM
> To: Ashish Dalela (adalela)
> Cc: dc@ietf.org
> Subject: RE: [dc] draft-khasnabish-vmmi-problems-00.txt
>=20
> > - Do we need a "control plane" to transfer OVF specification from
> point
> > A to B - the portability problem?
>=20
> Was that supposed to be a serious question?
>=20
> If it was, I suggest FTP or NFS, both of which are already used to move
> VM
> images in practice, and are already specified in RFCs ;-).  OVF is
> fundamentally
> a VM image format.
>=20
> Thanks,
> --David
> ----------------------------------------------------
> David L. Black, Distinguished Engineer
> EMC Corporation, 176 South St., Hopkinton, MA  01748
> +1 (508) 293-7953             FAX: +1 (508) 293-7786
> david.black@emc.com        Mobile: +1 (978) 394-7754
> ----------------------------------------------------
> ________________________________________
> From: dc-bounces@ietf.org [dc-bounces@ietf.org] On Behalf Of Ashish
> Dalela (adalela) [adalela@cisco.com]
> Sent: Thursday, January 19, 2012 8:20 PM
> To: Thomas Narten; Steven Blake
> Cc: dc@ietf.org
> Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
>=20
> I think it is fair to say that there is a difference between mobility
> and portability. Mobility is live migration; portability is specifying
> a VM's properties, deleting it in one location and creating it in
> another. The new location can be another hypervisor. In many cases, you
> don't need mobility, just portability. E.g. if you have a disaster
> recovery situation, then you aren't going to get mobility anyway.
>=20
> DMTF has specified a standard called OVF (Open Virtualization Format)
> that addresses the "description" of the VM. This format is supported by
> various hypervisor vendors. So, some level of VM migration
> standardization has already happened (albeit portability and not
> mobility).
>=20
> The questions are:
>=20
> - Do we need a "control plane" to transfer VM state from point A to B -
> the mobility problem?
> - Do we need a "control plane" to transfer OVF specification from point
> A to B - the portability problem?
>=20
> The problem is relevant in the inter-datacenter, public-private, or
> inter-cloud spaces, where there will be more than one hypervisor
> controller by definition. Are we hitting the live migration issue today?
> Maybe not. Is it conceivable that we will hit this issue? I think so.
>=20
> However, the question has to be asked of the providers/operators and not
> to the vendors.
>=20
> Thanks, Ashish
>=20
>=20
> -----Original Message-----
> From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> Thomas Narten
> Sent: Thursday, January 19, 2012 11:17 PM
> To: Steven Blake
> Cc: dc@ietf.org
> Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
>=20
> Steven,
>=20
> > Several system vendors (myself included) stood up in Taipei and said
> > "one encapsulation, please".  If IETF can facilitate industry
> > convergence on a small set of NVO3 encapsulations (preferably one),
> that
> > would be a big win for Ethernet switch vendors.
>=20
> I agree completely.
>=20
> But my questions were asking about the apparent lack of  interest from
> operators/implementers/market players regarding Bhumip's draft and the
> apparent desire to have some sort of standards work related to the
> general VM migration problem.
>=20
> Is there such interest?
>=20
> Thomas
>=20
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc


From vishwas.ietf@gmail.com  Fri Jan 20 17:23:28 2012
Return-Path: <vishwas.ietf@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 97EF021F86DE for <dc@ietfa.amsl.com>; Fri, 20 Jan 2012 17:23:28 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.398
X-Spam-Level: 
X-Spam-Status: No, score=-3.398 tagged_above=-999 required=5 tests=[AWL=0.201,  BAYES_00=-2.599, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id WmsfHigal1qA for <dc@ietfa.amsl.com>; Fri, 20 Jan 2012 17:23:27 -0800 (PST)
Received: from mail-tul01m020-f172.google.com (mail-tul01m020-f172.google.com [209.85.214.172]) by ietfa.amsl.com (Postfix) with ESMTP id A3FA221F86C5 for <dc@ietf.org>; Fri, 20 Jan 2012 17:23:27 -0800 (PST)
Received: by obbwc12 with SMTP id wc12so1683254obb.31 for <dc@ietf.org>; Fri, 20 Jan 2012 17:23:27 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; bh=c6Pe4EhcAX2ergyuqHvShkvjc86zp7OBUrpYIcO+/Qs=; b=BlscJP9RLsdR6ZiaKqHI2Y1qi9GE4DkMiDnDroMlr6b7EMIxGZPvqbAiv1rbS/pMqQ 3ejXcvUmjgW4gx8Z70a7s3NT9a3qm55E8aphlY/uTrsAZScai1wcO2Y/NbyflDS1njbG wJfLfv5qqyZlJvdO0eWuEnZCMZ9Urfa1UCmnY=
MIME-Version: 1.0
Received: by 10.182.222.102 with SMTP id ql6mr149213obc.2.1327109005926; Fri, 20 Jan 2012 17:23:25 -0800 (PST)
Received: by 10.182.28.196 with HTTP; Fri, 20 Jan 2012 17:23:25 -0800 (PST)
In-Reply-To: <FF8EC204-C4B0-4690-B692-905F672D60D3@asgaard.org>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com> <CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com> <201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com> <CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com> <201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com> <1326989277.2513.4.camel@ecliptic.extremenetworks.com> <618BE8B40039924EB9AED233D4A09C5102CB2291@XMB-BGL-416.cisco.com> <406B8B5D-E1E5-4DF4-8DE2-D7D2A699430A@asgaard.org> <4F18CE61.6030002@gmail.com> <618BE8B40039924EB9AED233D4A09C5102CB2330@XMB-BGL-416.cisco.com> <4F18EF4A.3060308@gmail.com> <618BE8B40039924EB9AED233D4A09C5102CB234C@XMB-BGL-416.cisco.com> <4F18FB72.2090900@joelhalpern.com> <618BE8B40039924EB9AED233D4A09C5102CB2380@XMB-BGL-416.cisco.com> <4F19034E.1070802@gmail.com> <CAOyVPHTbxB=QYC3Qw0ybL=5RN7VefSENV4iiBBOpXbCn58oi=Q@mail.gmail.com> <4F19F939.2020804@gmail.com> <DF0D6664-9FD5-4EF0-A03F-86C1921D9D01@asgaard.org> <CAOyVPHQh2yb5iP9-bH6NOzamW6FaK0cYwpfqfqns7TZVTpmY5g@mail.gmail.com> <FF8EC204-C4B0-4690-B692-905F672D60D3@asgaard.org>
Date: Fri, 20 Jan 2012 17:23:25 -0800
Message-ID: <CAOyVPHQPBmhpKKLs=GnaOe9EwJMB0pSBAuS458mNDo8J6tC3Lw@mail.gmail.com>
From: Vishwas Manral <vishwas.ietf@gmail.com>
To: Christopher LILJENSTOLPE <cdl@asgaard.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
Cc: Melinda Shore <melinda.shore@gmail.com>, dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 21 Jan 2012 01:23:28 -0000

Hi Chris,

It is not about crashing alone; it could be a wrong entry added into
the table that causes the packets to be routed wrongly. The idea
is about increasing the complexity in a single point of failure.

I guess this is not the most critical discussion and we can probably
move forward.

Thanks,
Vishwas


On 1/20/12, Christopher LILJENSTOLPE <cdl@asgaard.org> wrote:
> Greetings Vishwas,
>
> 	And I guess I am saying that I'm not sure I agree.  If a single process
> running can kill all the other processes on a given system (especially if it
> is in "user space" - which I would assume for control plane functions), I
> would say that there are more substantial issues with the architecture of
> that particular VM distribution.  Similar answer if someone actually put a
> control-plane engine (think routing protocols) in kernel space.  We, as a
> standards organization, can't save the world from less-than-intelligent
> developers....
>
>
> 	Chris
>
> On 20Jan2012, at 15.47, Vishwas Manral wrote:
>
>> Hi Christopher,
>>
>> I totally agree.
>>
>> The point I was making was that the hypervisor is the single point of
>> failure which can cause all the guest OSes to fail. The more complex you
>> make the functionality, the higher the chances of failure. So we should
>> work with that thought in mind.
>>
>> Thanks,
>> Vishwas
>>
>> On 1/20/12, Christopher LILJENSTOLPE <cdl@asgaard.org> wrote:
>>> Greetings,
>>>
>>> 	Another way of looking at it is a hypervisor is really an operating
>>> system
>>> distribution.  Many of the hypervisors out there have the network
>>> "switch"
>>> as a separate process (actually I believe all of them do).  So, if we are
>>> saying that networking intel doesn't belong in an OS distribution, that
>>> is a
>>> departure from current thinking :)
>>>
>>> 	Chris
>>>
>>> On 20Jan2012, at 15.31, Melinda Shore wrote:
>>>
>>>> On 01/20/2012 02:18 PM, Vishwas Manral wrote:
>>>>> An interesting thing to note is that the more the functionality you
>>>>> put in the hypervisor, the more you stress the single point of failure
>>>>> in the virtualized system.
>>>>
>>>> Well, there are a few ways to look at it.  For example,
>>>> the fewer components you've got the larger the mean time
>>>> between failures.  But aside from that it's been a
>>>> general rule of thumb that you want to minimize the
>>>> impact of failed components on non-failed components
>>>> (the fate sharing principle).
>>>>
>>>> At any rate the hypervisor (at least the ones with which
>>>> I'm familiar) basically *are* network devices - they
>>>> function as a switch, or even a NAT.  If you're going to
>>>> suggest that they can't be used to terminate control plane
>>>> sessions I hope there's a more compelling reason for it
>>>> than what has been offered so far.
>>>>
>>>> Melinda
>>>> _______________________________________________
>>>> dc mailing list
>>>> dc@ietf.org
>>>> https://www.ietf.org/mailman/listinfo/dc
>>>
>>> --
>>> 李柯睿
>>> Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
>>> Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf
>>>
>>>
>
> --
> 李柯睿
> Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
> Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf
>
>

From j.schoenwaelder@jacobs-university.de  Sat Jan 21 00:01:05 2012
Return-Path: <j.schoenwaelder@jacobs-university.de>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id E34BC21F8600 for <dc@ietfa.amsl.com>; Sat, 21 Jan 2012 00:01:05 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -103.229
X-Spam-Level: 
X-Spam-Status: No, score=-103.229 tagged_above=-999 required=5 tests=[AWL=0.020, BAYES_00=-2.599, HELO_EQ_DE=0.35, RCVD_IN_DNSWL_LOW=-1, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id vwhsK+LrlEuk for <dc@ietfa.amsl.com>; Sat, 21 Jan 2012 00:01:01 -0800 (PST)
Received: from hermes.jacobs-university.de (hermes.jacobs-university.de [212.201.44.23]) by ietfa.amsl.com (Postfix) with ESMTP id 94AC421F85FF for <dc@ietf.org>; Sat, 21 Jan 2012 00:01:01 -0800 (PST)
Received: from localhost (demetrius4.jacobs-university.de [212.201.44.49]) by hermes.jacobs-university.de (Postfix) with ESMTP id 74BD320BDC; Sat, 21 Jan 2012 09:01:00 +0100 (CET)
X-Virus-Scanned: amavisd-new at jacobs-university.de
Received: from hermes.jacobs-university.de ([212.201.44.23]) by localhost (demetrius4.jacobs-university.de [212.201.44.32]) (amavisd-new, port 10024) with ESMTP id Q5O1jiwiLyDz; Sat, 21 Jan 2012 09:01:00 +0100 (CET)
Received: from elstar.local (elstar.jacobs.jacobs-university.de [10.50.231.133]) by hermes.jacobs-university.de (Postfix) with ESMTP id 20FDE20BDA; Sat, 21 Jan 2012 09:01:00 +0100 (CET)
Received: by elstar.local (Postfix, from userid 501) id DA2451C95D07; Sat, 21 Jan 2012 09:00:42 +0100 (CET)
Date: Sat, 21 Jan 2012 09:00:42 +0100
From: Juergen Schoenwaelder <j.schoenwaelder@jacobs-university.de>
To: Christopher LILJENSTOLPE <cdl@asgaard.org>
Message-ID: <20120121080042.GB39496@elstar.local>
Mail-Followup-To: Christopher LILJENSTOLPE <cdl@asgaard.org>, Vishwas Manral <vishwas.ietf@gmail.com>, Melinda Shore <melinda.shore@gmail.com>, dc@ietf.org
References: <4F18EF4A.3060308@gmail.com> <618BE8B40039924EB9AED233D4A09C5102CB234C@XMB-BGL-416.cisco.com> <4F18FB72.2090900@joelhalpern.com> <618BE8B40039924EB9AED233D4A09C5102CB2380@XMB-BGL-416.cisco.com> <4F19034E.1070802@gmail.com> <CAOyVPHTbxB=QYC3Qw0ybL=5RN7VefSENV4iiBBOpXbCn58oi=Q@mail.gmail.com> <4F19F939.2020804@gmail.com> <DF0D6664-9FD5-4EF0-A03F-86C1921D9D01@asgaard.org> <CAOyVPHQh2yb5iP9-bH6NOzamW6FaK0cYwpfqfqns7TZVTpmY5g@mail.gmail.com> <FF8EC204-C4B0-4690-B692-905F672D60D3@asgaard.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <FF8EC204-C4B0-4690-B692-905F672D60D3@asgaard.org>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Vishwas Manral <vishwas.ietf@gmail.com>, Melinda Shore <melinda.shore@gmail.com>, dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
Reply-To: Juergen Schoenwaelder <j.schoenwaelder@jacobs-university.de>
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sat, 21 Jan 2012 08:01:06 -0000

On Fri, Jan 20, 2012 at 04:06:19PM -0800, Christopher LILJENSTOLPE wrote:
> Greetings Vishwas,
> 
> 	And I guess I am saying that I'm not sure I agree.  If a single process running can kill all the other processes on a given system (especially if it is in "user space" - which I would assume for control plane functions), I would say that there are more substantial issues with the architecture of that particular VM distribution.  Similar answer if someone actually put a control-plane engine (think routing protocols) in kernel space.  We, as a standards organization, can't save the world from less-than-intelligent developers....
> 

The Linux-based hypervisors I have used happen to have the bridge(s)
in the kernel plus user space utilities to configure/control them, as
it should be. But then I am not using products but just open source
implementations... ;-)
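
[Editor's note: as a concrete sketch of the split Juergen describes, the
commands below show period-typical user-space tools driving the in-kernel
Linux bridge. The interface names (eth0, vnet0, br0) are hypothetical, and
the commands require root on a Linux host; this is a configuration sketch,
not a prescription.]

```shell
# User-space utilities configuring the in-kernel Linux bridge: frame
# forwarding happens in the kernel, only control sits in user space.
brctl addbr br0          # create an in-kernel bridge
brctl addif br0 eth0     # attach the physical uplink
brctl addif br0 vnet0    # attach a VM's tap interface
ip link set br0 up       # bring the bridge up
brctl showmacs br0       # inspect the kernel's MAC learning table
```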

A general observation: I think what this discussion is really lacking
is implementors familiar with real-world implementations. If there
were more implementors involved here, with a decent understanding of
the code bases, I assume we would quickly be able to figure out what,
if anything, needs to be dealt with in the IETF.

/js

-- 
Juergen Schoenwaelder           Jacobs University Bremen gGmbH
Phone: +49 421 200 3587         Campus Ring 1, 28759 Bremen, Germany
Fax:   +49 421 200 3103         <http://www.jacobs-university.de/>

From linda.dunbar@huawei.com  Thu Jan 26 16:23:56 2012
Return-Path: <linda.dunbar@huawei.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 3C5B121F86C7 for <dc@ietfa.amsl.com>; Thu, 26 Jan 2012 16:23:56 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.508
X-Spam-Level: 
X-Spam-Status: No, score=-2.508 tagged_above=-999 required=5 tests=[AWL=0.090,  BAYES_00=-2.599, HTML_MESSAGE=0.001]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id iOCNklV--O5J for <dc@ietfa.amsl.com>; Thu, 26 Jan 2012 16:23:54 -0800 (PST)
Received: from dfwrgout.huawei.com (dfwrgout.huawei.com [206.16.17.72]) by ietfa.amsl.com (Postfix) with ESMTP id 9431021F86C1 for <dc@ietf.org>; Thu, 26 Jan 2012 16:23:53 -0800 (PST)
Received: from 172.18.9.243 (EHLO dfweml202-edg.china.huawei.com) ([172.18.9.243]) by dfwrg02-dlp.huawei.com (MOS 4.2.3-GA FastPath) with ESMTP id ACR20228; Thu, 26 Jan 2012 19:23:53 -0500 (EST)
Received: from DFWEML404-HUB.china.huawei.com (10.193.5.203) by dfweml202-edg.china.huawei.com (172.18.9.108) with Microsoft SMTP Server (TLS) id 14.1.323.3; Thu, 26 Jan 2012 16:22:57 -0800
Received: from DFWEML505-MBX.china.huawei.com ([10.124.31.100]) by dfweml404-hub.china.huawei.com ([10.193.5.203]) with mapi id 14.01.0323.003; Thu, 26 Jan 2012 16:22:53 -0800
From: Linda Dunbar <linda.dunbar@huawei.com>
To: Thomas Narten <narten@us.ibm.com>, "david.black@emc.com" <david.black@emc.com>, Murari Sridharan <muraris@microsoft.com>, Dinesh Dutt <ddutt@cisco.com>, "kreeger@cisco.com" <kreeger@cisco.com>
Thread-Topic: comments and suggestions to draft-narten-nv03-overlay-problem-statment-01
Thread-Index: Aczcic4dhzNCqBUxQSqOznX8P2avFQ==
Date: Fri, 27 Jan 2012 00:22:52 +0000
Message-ID: <4A95BA014132FF49AE685FAB4B9F17F632E17F2D@dfweml505-mbx>
Accept-Language: en-US, zh-CN
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.192.11.97]
Content-Type: multipart/alternative; boundary="_000_4A95BA014132FF49AE685FAB4B9F17F632E17F2Ddfweml505mbx_"
MIME-Version: 1.0
X-CFilter-Loop: Reflected
Cc: "dc@ietf.org" <dc@ietf.org>
Subject: [dc] comments and suggestions to draft-narten-nv03-overlay-problem-statment-01
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 27 Jan 2012 00:23:56 -0000

--_000_4A95BA014132FF49AE685FAB4B9F17F632E17F2Ddfweml505mbx_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Thomas, et al,

Here are my comments on draft-narten-nv03-overlay-problem-statment-01, along with suggested wording changes:

3.1. Limitations of Existing Virtual Network Models.

Since IEEE 802.1-defined VLAN separation is mentioned, it would be appropriate to also mention PBB's I-SID separation.

The stated limitations of PBB and VLAN should include that MAC addresses cannot be aggregated, so forwarding tables can become very large in large data centers.

Second paragraph: why is it a problem that "VLANs are a pure bridging construct while VRF is a pure routing construct"?

4. Network Overlays

A subsection should be added to describe the Virtual Network Instance ID.

4.x. Virtual Network Instance ID

A Virtual Network Instance segregates traffic belonging to different tenants, or to different zones of one tenant. When a data center uses an Overlay Network to hide host addresses, it is important that the Virtual Network Instance identifier can properly represent the zones or tenants across the entire data center, especially when the overlay edge nodes are access switches, i.e. not embedded in hypervisors.

When the overlay edge nodes are access switches, data frames may carry a traditional VLAN-ID for proper traffic segregation before entering or after exiting the Overlay Network. The Virtual Network Instance ID value carried in the Overlay Header of the data frames might be 24 bits (as described in 3.2). The VLAN-IDs of data frames under each overlay edge node are only locally significant, so a proper mapping has to be maintained.
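
As an illustrative sketch of such a mapping (the class and field names here are hypothetical; only the 24-bit VNID and 12-bit VLAN-ID widths come from the draft and 802.1Q), each edge node could keep its own locally significant table:

```python
# Sketch of a per-overlay-edge-node mapping from a locally significant
# VLAN-ID (12 bits) to a data-center-wide Virtual Network Instance ID
# (assumed to be 24 bits, per section 3.2). Names are hypothetical.

class OverlayEdgeNode:
    MAX_VNID = (1 << 24) - 1  # 24-bit VNID space
    MAX_VID = (1 << 12) - 1   # 12-bit 802.1Q VLAN-ID space

    def __init__(self, name):
        self.name = name
        self.vid_to_vnid = {}  # locally significant VLAN-ID -> global VNID

    def map_vlan(self, vid, vnid):
        if not 1 <= vid <= self.MAX_VID:
            raise ValueError("VLAN-ID out of 12-bit range")
        if not 0 <= vnid <= self.MAX_VNID:
            raise ValueError("VNID out of 24-bit range")
        self.vid_to_vnid[vid] = vnid

    def vnid_for_frame(self, vid):
        # Ingress: look up the global VNID for a locally tagged frame.
        return self.vid_to_vnid[vid]

# The same VNID may map to different local VLAN-IDs on different edges:
edge_a = OverlayEdgeNode("access-switch-A")
edge_b = OverlayEdgeNode("access-switch-B")
edge_a.map_vlan(100, 0x00BEEF)
edge_b.map_vlan(200, 0x00BEEF)  # same tenant, different local VLAN-ID
```

The point of the sketch is that the table is per edge node: VLAN-ID 100 on one access switch and VLAN-ID 200 on another can denote the same Virtual Network Instance.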

6.2 (TRILL) & 6.3 (L2VPN)

It is worth pointing out that both TRILL and L2VPN carry the VLAN-ID embedded in the original Ethernet frames across the Overlay Network, and that the VLAN-ID keeps the same meaning in the two separate L2 islands.

By contrast, when VLAN-ID-tagged Ethernet frames traverse a data center Overlay Network, the VLAN-ID carried in the frames loses its significance. In other words, a different VLAN-ID might be re-assigned to the Ethernet frames by the egress overlay edge.
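
A minimal sketch of that egress behavior (the function and field names are assumptions for illustration, not from the draft): the ingress VLAN-ID is ignored, and the frame is re-tagged with whatever local VLAN-ID the egress edge has bound to the frame's VNID.

```python
# Illustrative sketch (names are hypothetical): at the egress overlay
# edge, the frame's original VLAN-ID is irrelevant; the frame is
# re-tagged with the local VLAN-ID bound to its 24-bit VNID.

def retag_at_egress(frame, vnid_to_local_vid):
    """Return a copy of the frame with its VLAN-ID rewritten for the
    egress site; the ingress VLAN-ID is simply discarded."""
    new_vid = vnid_to_local_vid[frame["vnid"]]
    return {**frame, "vid": new_vid}

# Ingress site used VLAN 100; egress site uses VLAN 300 for the same VNID.
frame = {"vnid": 0x00BEEF, "vid": 100, "payload": b"..."}
egress_map = {0x00BEEF: 300}
out = retag_at_egress(frame, egress_map)
# The original VLAN-ID (100) lost its significance in transit.
```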


5.3. Associating a VNID with an endpoint.

You state that "typically, it is a virtual NIC coming up that triggers this association". That is not quite right. A virtual NIC, like the physical NIC of a non-virtualized server, usually is not aware of which network segment it is attached to. Ethernet frames coming out of a traditional server (i.e. a server without virtualization) are untagged, i.e. they have no VID associated with them; it is the first switch that assigns the VID to the frame.

It is more accurate to say that it is the "virtual switch" or "hypervisor" within the server that triggers the association.

Since the overlay header can also be added by the first access switch (i.e. a switch not embedded in a physical server), it is quite possible that data frames arriving at the overlay edge (i.e. the first access switch) are already VLAN-tagged. In that case the association (or mapping) from VLAN tag to VNID has to be operator-administered.
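
One way to picture the operator-administered case is as provisioned configuration that the edge validates rather than learns; the entry format and names below are made up for illustration:

```python
# Sketch: operator-provisioned (port, VLAN-ID) -> VNID bindings for one
# access switch acting as overlay edge. The format is hypothetical.

provisioned = [
    # (port, vlan_id, vnid)
    ("eth1", 10, 5001),
    ("eth1", 20, 5002),
    ("eth2", 10, 7001),  # VLAN 10 means something else on another port
]

def build_bindings(entries):
    """Build the binding table, rejecting conflicting entries."""
    table = {}
    for port, vid, vnid in entries:
        key = (port, vid)
        if key in table and table[key] != vnid:
            raise ValueError(f"conflicting VNID for {key}")
        table[key] = vnid
    return table

bindings = build_bindings(provisioned)
```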


Linda Dunbar


--_000_4A95BA014132FF49AE685FAB4B9F17F632E17F2Ddfweml505mbx_--

From paul@unbehagen.net  Thu Jan 26 16:50:51 2012
Return-Path: <paul@unbehagen.net>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 5D1FF21F860E for <dc@ietfa.amsl.com>; Thu, 26 Jan 2012 16:50:51 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.901
X-Spam-Level: 
X-Spam-Status: No, score=-2.901 tagged_above=-999 required=5 tests=[AWL=-0.699, BAYES_00=-2.599, HTML_MESSAGE=0.001, MIME_QP_LONG_LINE=1.396, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id W-bxdTHn6Rlf for <dc@ietfa.amsl.com>; Thu, 26 Jan 2012 16:50:50 -0800 (PST)
Received: from mail-yx0-f172.google.com (mail-yx0-f172.google.com [209.85.213.172]) by ietfa.amsl.com (Postfix) with ESMTP id 6426521F8646 for <dc@ietf.org>; Thu, 26 Jan 2012 16:50:50 -0800 (PST)
Received: by yenm3 with SMTP id m3so620340yen.31 for <dc@ietf.org>; Thu, 26 Jan 2012 16:50:50 -0800 (PST)
Received: by 10.236.124.105 with SMTP id w69mr7016755yhh.31.1327625449968; Thu, 26 Jan 2012 16:50:49 -0800 (PST)
Received: from [10.0.1.11] (c-67-161-144-217.hsd1.co.comcast.net. [67.161.144.217]) by mx.google.com with ESMTPS id h43sm10050218yhj.2.2012.01.26.16.50.47 (version=TLSv1/SSLv3 cipher=OTHER); Thu, 26 Jan 2012 16:50:49 -0800 (PST)
References: <4A95BA014132FF49AE685FAB4B9F17F632E17F2D@dfweml505-mbx>
In-Reply-To: <4A95BA014132FF49AE685FAB4B9F17F632E17F2D@dfweml505-mbx>
Mime-Version: 1.0 (1.0)
Content-Transfer-Encoding: 7bit
Content-Type: multipart/alternative; boundary=Apple-Mail-AC9B1B65-32A3-4CC4-94CC-678B3D996999
Message-Id: <C5CE8493-6543-4EB0-BCEB-99EEBA3FD59E@unbehagen.net>
X-Mailer: iPad Mail (9A405)
From: Paul Unbehagen <paul@unbehagen.net>
Date: Thu, 26 Jan 2012 17:50:44 -0700
To: Linda Dunbar <linda.dunbar@huawei.com>
Cc: Thomas Narten <narten@us.ibm.com>, "dc@ietf.org" <dc@ietf.org>, "david.black@emc.com" <david.black@emc.com>, Dinesh Dutt <ddutt@cisco.com>, Murari Sridharan <muraris@microsoft.com>, "kreeger@cisco.com" <kreeger@cisco.com>
Subject: Re: [dc] comments and suggestions to draft-narten-nv03-overlay-problem-statment-01
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 27 Jan 2012 00:50:51 -0000

--Apple-Mail-AC9B1B65-32A3-4CC4-94CC-678B3D996999
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=utf-8

Inline below

--
Paul Unbehagen


On Jan 26, 2012, at 5:22 PM, Linda Dunbar <linda.dunbar@huawei.com> wrote:

> Thomas, et al,
>
> Here are my comments to draft-narten-nv03-overlay-problem-statment-01 and suggested wording change:
>
> 3.1. Limitations of Existing Virtual Network Models.
>
> Since IEEE802.1 defined VLAN separation is mentioned, it would be appropriate to mention PBB's ISID separation.

Agreed; the clear service separation and the 16 million possible services should be clearly explained, as should the ease of edge provisioning compared with STP's per-hop configuration.

>
> The limitation for PBB and VLAN should include that MAC addresses can't be aggregated, therefore forwarding table can be very large for large data centers.

Disagree; this isn't what we see in live deployments. Since MAC learning of the host stations happens only at the edge switches, and the services are distributed across many edge switches, the per-I-SID MAC table sizes don't end up being that large on any given node. Additionally, the core never sees any MACs other than those of the nodes participating in the backbone, from edge switch to edge switch.
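
As a back-of-envelope illustration of this sizing argument (all figures below are made-up inputs, not deployment data):

```python
# Hypothetical numbers illustrating why per-edge MAC tables stay small
# when services are spread across many PBB edge switches.

total_hosts = 200_000          # hosts in the data center
edge_switches = 500            # edge switches doing MAC learning
services_frac_per_edge = 0.02  # fraction of hosts whose services touch
                               # any one edge switch

# Each edge learns only MACs of hosts in services it participates in:
macs_per_edge = int(total_hosts * services_frac_per_edge)

# Core switches forward on backbone MACs (roughly one per edge switch),
# never on host MACs:
core_table_size = edge_switches
```

With these assumed inputs, each edge holds about 4,000 host MACs instead of 200,000, and the core table scales with the number of edges, not the number of hosts.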

>
> Second paragraph: Why "VLANs are a pure bridging construct while VRF is pure routing construct" is a problem?
>
> 4. Network Overlays
>
> Should add a subsection to describe Virtual Network Instance ID.
>
> 4.x. Virtual Network Instance ID
>
> Virtual Network Instance is for segregating traffic belonging to different tenants or different zones of one tenant. When a data center uses Overlay Network to hide hosts addresses, it is important that Virtual Network Instance identifier can properly represent the zones or the tenants in entire data center, especially when the overlay edge nodes are access switches, i.e. not embedded in hypervisors.
>
> When overlay edge nodes are access switches, the data frames before entering the Overlay Network or after exiting the Overlay Network might carry traditional VLAN-ID for proper traffic segregation. The Virtual Network Instance ID value carried by the Overlay Header of the data frames might be 24 bits (as described in 3.2). Those VLAN-ID for data frames under each overlay edge node are only locally significant. Proper mapping has to be maintained.
>
> 6.2 (TRILL) & 6.3 (L2VPN)
>
> It is necessary to point out that both TRILL and L2VPN carry the VLAN-ID embedded in the original Ethernet frames across the Overlay Network and the VLAN-ID maintain the same meaning in two separate L2 islands.
>
> When VLAN-ID tagged Ethernet frames traverse across the Overlay Network for Data Center, the VLAN-ID carried by Ethernet frames lose its significance. In another words, a different VLAN-ID might be re-assigned to the Ethernet frames by the Egress overlay edge.
>
>
> 5.3. Associating a VNID with an endpoint.
>
> You stated that "typically, it is a virtual NIC coming up that triggers this association". It is not quite right. Virtual NIC, same as physical NIC for a non-virtualized server, usually is not aware of which network segment it is attached to. For Ethernet data frames coming out from a traditional server (i.e. server without virtualization), the data frame is not tagged, i.e. not having VID associated with it. It is the first switch which assign the VID for the data frame.
>
> It is more accurate to say "virtual switch" or "Hypervisor" within the server which can trigger the association.
>
> Since overlay header can be also added by first access switches (i.e. switches not embedded in physical servers), it is very possible that data frames arriving at the overlay edge (i.e. the first access switch) is already VLAN tagged. Then association (or mapping) from the VLAN-Tag to VNID has to be operator administrated.
>
>
> Linda Dunbar
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc


--Apple-Mail-AC9B1B65-32A3-4CC4-94CC-678B3D996999--

From rsaha@us.ibm.com  Thu Jan 26 23:43:41 2012
Return-Path: <rsaha@us.ibm.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 88D2621F8518 for <dc@ietfa.amsl.com>; Thu, 26 Jan 2012 23:43:41 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -8.298
X-Spam-Level: 
X-Spam-Status: No, score=-8.298 tagged_above=-999 required=5 tests=[AWL=1.700,  BAYES_00=-2.599, HTML_MESSAGE=0.001, J_CHICKENPOX_52=0.6, RCVD_IN_DNSWL_HI=-8]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id u6OOhfZRcgkH for <dc@ietfa.amsl.com>; Thu, 26 Jan 2012 23:43:40 -0800 (PST)
Received: from e31.co.us.ibm.com (e31.co.us.ibm.com [32.97.110.149]) by ietfa.amsl.com (Postfix) with ESMTP id 50F4921F84EC for <dc@ietf.org>; Thu, 26 Jan 2012 23:43:40 -0800 (PST)
Received: from /spool/local by e31.co.us.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for <dc@ietf.org> from <rsaha@us.ibm.com>; Fri, 27 Jan 2012 00:43:38 -0700
Received: from d03dlp03.boulder.ibm.com (9.17.202.179) by e31.co.us.ibm.com (192.168.1.131) with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted;  Fri, 27 Jan 2012 00:43:06 -0700
Received: from d03relay03.boulder.ibm.com (d03relay03.boulder.ibm.com [9.17.195.228]) by d03dlp03.boulder.ibm.com (Postfix) with ESMTP id 3F8CF19D804C; Fri, 27 Jan 2012 00:43:03 -0700 (MST)
Received: from d03av01.boulder.ibm.com (d03av01.boulder.ibm.com [9.17.195.167]) by d03relay03.boulder.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id q0R7h5cV132422; Fri, 27 Jan 2012 00:43:05 -0700
Received: from d03av01.boulder.ibm.com (loopback [127.0.0.1]) by d03av01.boulder.ibm.com (8.14.4/8.13.1/NCO v10.0 AVout) with ESMTP id q0R7h5YA016367; Fri, 27 Jan 2012 00:43:05 -0700
Received: from d03nm691.boulder.ibm.com (d03nm691.boulder.ibm.com [9.17.195.188]) by d03av01.boulder.ibm.com (8.14.4/8.13.1/NCO v10.0 AVin) with ESMTP id q0R7h5HX016361; Fri, 27 Jan 2012 00:43:05 -0700
In-Reply-To: <4A95BA014132FF49AE685FAB4B9F17F632E17F2D@dfweml505-mbx>
References: <4A95BA014132FF49AE685FAB4B9F17F632E17F2D@dfweml505-mbx>
To: Linda Dunbar <linda.dunbar@huawei.com>
MIME-Version: 1.0
X-KeepSent: F6866E01:3C35AF59-88257992:0029C4E2; type=4; name=$KeepSent
X-Mailer: Lotus Notes Release 8.5.1FP5 SHF29 November 12, 2010
From: Rakesh Saha <rsaha@us.ibm.com>
Message-ID: <OFF6866E01.3C35AF59-ON88257992.0029C4E2-88257992.002A64BA@us.ibm.com>
Date: Thu, 26 Jan 2012 23:43:03 -0800
X-MIMETrack: Serialize by Router on D03NM691/03/M/IBM(Release 8.5.1FP4HF305 | July 28, 2011) at 01/27/2012 00:43:04, Serialize complete at 01/27/2012 00:43:04
Content-Type: multipart/alternative; boundary="=_alternative 002A636388257992_="
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 12012707-7282-0000-0000-000005F64CFF
Cc: "dc@ietf.org" <dc@ietf.org>, "david.black@emc.com" <david.black@emc.com>, Dinesh Dutt <ddutt@cisco.com>, dc-bounces@ietf.org, Murari Sridharan <muraris@microsoft.com>, "kreeger@cisco.com" <kreeger@cisco.com>, narten@rotala.raleigh.ibm.com
Subject: Re: [dc] comments and suggestions to draft-narten-nv03-overlay-problem-statment-01
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 27 Jan 2012 07:43:41 -0000

This is a multipart message in MIME format.
--=_alternative 002A636388257992_=
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: base64

SGkgTGluZGEsDQoNCj5Zb3Ugc3RhdGVkIHRoYXQg4oCcdHlwaWNhbGx5LCBpdCBpcyBhIHZpcnR1
YWwgTklDIGNvbWluZyB1cCB0aGF0IHRyaWdnZXJzIA0KdGhpcyBhc3NvY2lhdGlvbuKAnS4gSXQg
aXMgbm90IHF1aXRlIHJpZ2h0LiBWaXJ0dWFsIE5JQywgc2FtZSBhcyBwaHlzaWNhbCANCk5JQyBm
b3IgYSBub24tdmlydHVhbGl6ZWQgc2VydmVyLCB1c3VhbGx5IGlzIG5vdCBhd2FyZSBvZiB3aGlj
aCA+bmV0d29yayANCnNlZ21lbnQgaXQgaXMgYXR0YWNoZWQgdG8uIEZvciBFdGhlcm5ldCBkYXRh
IGZyYW1lcyBjb21pbmcgb3V0IGZyb20gYSANCnRyYWRpdGlvbmFsIHNlcnZlciAoaS5lLiBzZXJ2
ZXIgd2l0aG91dCB2aXJ0dWFsaXphdGlvbiksIHRoZSBkYXRhIGZyYW1lIGlzIA0Kbm90IHRhZ2dl
ZCwgaS5lLiBub3QgaGF2aW5nIFZJRCBhc3NvY2lhdGVkIHdpdGggaXQuIEl0IGlzID50aGUgZmly
c3QgDQpzd2l0Y2ggd2hpY2ggYXNzaWduIHRoZSBWSUQgZm9yIHRoZSBkYXRhIGZyYW1lLiANCg0K
UmVnYXJkaW5nICJGb3IgRXRoZXJuZXQgZGF0YSBmcmFtZXMgY29taW5nIG91dCBmcm9tIGEgdHJh
ZGl0aW9uYWwgc2VydmVyIA0KKGkuZS4gc2VydmVyIHdpdGhvdXQgdmlydHVhbGl6YXRpb24pLCB0
aGUgZGF0YSBmcmFtZSBpcyBub3QgdGFnZ2VkIiwuLi4iIA0KSXQgaXMgPnRoZSBmaXJzdCBzd2l0
Y2ggd2hpY2ggYXNzaWduIHRoZSBWSUQgZm9yIHRoZSBkYXRhIGZyYW1lIiAuLi4NCg0Kbm9uLXZp
cnR1YWxpemVkIHNlcnZlcnMgYXJlIGNhcGFibGUgb2Ygc2VuZGluZyBWTEFOIHRhZ2dlZCBwYWNr
ZXRzLiBJdCBpcyANCmEgc3RhbmRhcmQgZmVhdHVyZSBvZiB0b2RheSdhcyB0cmFkaXRpb25hbCBP
cGVyYXRpbmcgU3lzdGVtcyAoTklDIGRyaXZlcnMpIA0KIHRvIGFsbG93IFZMQU5zIHRvIGJlIGFz
c29jaWF0ZWQgd2l0aCBhbiBldGhlcm5ldCBpbnRlcmZhY2UuDQpTbyBhIHRyYWRpdGlvbmFsIG5v
bi12aXJ0dWFsaXplZCBzZXJ2ZXIgbWF5IHNlbmQgdGFnZ2VkIGFuZCB1bnRhZ2dlZCANCmZyYW1l
cy4NCg0KVGhhbmtzLA0KUmFrZXNoLg0KLT0tDQoNCg0KDQoNCg0KDQpGcm9tOiAgIExpbmRhIER1
bmJhciA8bGluZGEuZHVuYmFyQGh1YXdlaS5jb20+DQpUbzogICAgIG5hcnRlbkByb3RhbGEucmFs
ZWlnaC5pYm0uY29tLCAiZGF2aWQuYmxhY2tAZW1jLmNvbSIgDQo8ZGF2aWQuYmxhY2tAZW1jLmNv
bT4sIE11cmFyaSBTcmlkaGFyYW4gPG11cmFyaXNAbWljcm9zb2Z0LmNvbT4sIERpbmVzaCANCkR1
dHQgPGRkdXR0QGNpc2NvLmNvbT4sICJrcmVlZ2VyQGNpc2NvLmNvbSIgPGtyZWVnZXJAY2lzY28u
Y29tPg0KQ2M6ICAgICAiZGNAaWV0Zi5vcmciIDxkY0BpZXRmLm9yZz4NCkRhdGU6ICAgMDEvMjYv
MjAxMiAwNDoyNSBQTQ0KU3ViamVjdDogICAgICAgIFtkY10gY29tbWVudHMgYW5kIHN1Z2dlc3Rp
b25zIHRvIA0KZHJhZnQtbmFydGVuLW52MDMtb3ZlcmxheS1wcm9ibGVtLXN0YXRtZW50LTAxDQpT
ZW50IGJ5OiAgICAgICAgZGMtYm91bmNlc0BpZXRmLm9yZw0KDQoNCg0KVGhvbWFzLCBldCBhbCwg
DQogDQpIZXJlIGFyZSBteSBjb21tZW50cyB0byBkcmFmdC1uYXJ0ZW4tbnYwMy1vdmVybGF5LXBy
b2JsZW0tc3RhdG1lbnQtMDEgYW5kIA0Kc3VnZ2VzdGVkIHdvcmRpbmcgY2hhbmdlOg0KIA0KMy4x
LiBMaW1pdGF0aW9ucyBvZiBFeGlzdGluZyBWaXJ0dWFsIE5ldHdvcmsgTW9kZWxzLiANCiANClNp
bmNlIElFRUU4MDIuMSBkZWZpbmVkIFZMQU4gc2VwYXJhdGlvbiBpcyBtZW50aW9uZWQsIGl0IHdv
dWxkIGJlIA0KYXBwcm9wcmlhdGUgdG8gbWVudGlvbiBQQkLigJlzIElTSUQgc2VwYXJhdGlvbi4g
DQogDQpUaGUgbGltaXRhdGlvbiBmb3IgUEJCIGFuZCBWTEFOIHNob3VsZCBpbmNsdWRlIHRoYXQg
TUFDIGFkZHJlc3NlcyBjYW7igJl0IGJlIA0KYWdncmVnYXRlZCwgdGhlcmVmb3JlIGZvcndhcmRp
bmcgdGFibGUgY2FuIGJlIHZlcnkgbGFyZ2UgZm9yIGxhcmdlIGRhdGEgDQpjZW50ZXJzLiANCiAN
ClNlY29uZCBwYXJhZ3JhcGg6IFdoeSDigJxWTEFOcyBhcmUgYSBwdXJlIGJyaWRnaW5nIGNvbnN0
cnVjdCB3aGlsZSBWUkYgaXMgDQpwdXJlIHJvdXRpbmcgY29uc3RydWN04oCdIGlzIGEgcHJvYmxl
bT8gDQogDQo0LiBOZXR3b3JrIE92ZXJsYXlzDQogDQpTaG91bGQgYWRkIGEgc3Vic2VjdGlvbiB0
byBkZXNjcmliZSBWaXJ0dWFsIE5ldHdvcmsgSW5zdGFuY2UgSUQuIA0KIA0KNC54LiBWaXJ0dWFs
IE5ldHdvcmsgSW5zdGFuY2UgSUQNCiANClZpcnR1YWwgTmV0d29yayBJbnN0YW5jZSBpcyBmb3Ig
c2VncmVnYXRpbmcgdHJhZmZpYyBiZWxvbmdpbmcgdG8gZGlmZmVyZW50IA0KdGVuYW50cyBvciBk
aWZmZXJlbnQgem9uZXMgb2Ygb25lIHRlbmFudC4gV2hlbiBhIGRhdGEgY2VudGVyIHVzZXMgT3Zl
cmxheSANCk5ldHdvcmsgdG8gaGlkZSBob3N0cyBhZGRyZXNzZXMsIGl0IGlzIGltcG9ydGFudCB0
aGF0IFZpcnR1YWwgTmV0d29yayANCkluc3RhbmNlIGlkZW50aWZpZXIgY2FuIHByb3Blcmx5IHJl
cHJlc2VudCB0aGUgem9uZXMgb3IgdGhlIHRlbmFudHMgaW4gDQplbnRpcmUgZGF0YSBjZW50ZXIs
IGVzcGVjaWFsbHkgd2hlbiB0aGUgb3ZlcmxheSBlZGdlIG5vZGVzIGFyZSBhY2Nlc3MgDQpzd2l0
Y2hlcywgaS5lLiBub3QgZW1iZWRkZWQgaW4gaHlwZXJ2aXNvcnMuIA0KIA0KV2hlbiBvdmVybGF5
IGVkZ2Ugbm9kZXMgYXJlIGFjY2VzcyBzd2l0Y2hlcywgdGhlIGRhdGEgZnJhbWVzIGJlZm9yZSAN
CmVudGVyaW5nIHRoZSBPdmVybGF5IE5ldHdvcmsgb3IgYWZ0ZXIgZXhpdGluZyB0aGUgT3Zlcmxh
eSBOZXR3b3JrIG1pZ2h0IA0KY2FycnkgdHJhZGl0aW9uYWwgVkxBTi1JRCBmb3IgcHJvcGVyIHRy
YWZmaWMgc2VncmVnYXRpb24uIFRoZSBWaXJ0dWFsIA0KTmV0d29yayBJbnN0YW5jZSBJRCB2YWx1
ZSBjYXJyaWVkIGJ5IHRoZSBPdmVybGF5IEhlYWRlciAgb2YgdGhlIGRhdGEgDQpmcmFtZXMgbWln
aHQgYmUgMjQgYml0cyAoYXMgZGVzY3JpYmVkIGluIDMuMikuIFRob3NlIFZMQU4tSUQgZm9yIGRh
dGEgDQpmcmFtZXMgdW5kZXIgZWFjaCBvdmVybGF5IGVkZ2Ugbm9kZSBhcmUgb25seSBsb2NhbGx5
IHNpZ25pZmljYW50LiBQcm9wZXIgDQptYXBwaW5nIGhhcyB0byBiZSBtYWludGFpbmVkLiANCiAN
CjYuMiAoVFJJTEwpICY2LjMgKEwyVlBOKQ0KIA0KSXQgaXMgbmVjZXNzYXJ5IHRvIHBvaW50IG91
dCB0aGF0IGJvdGggVFJJTEwgYW5kIEwyVlBOIGNhcnJ5IHRoZSBWTEFOLUlEIA0KZW1iZWRkZWQg
aW4gdGhlIG9yaWdpbmFsIEV0aGVybmV0IGZyYW1lcyBhY3Jvc3MgdGhlIE92ZXJsYXkgTmV0d29y
ayBhbmQgDQp0aGUgVkxBTi1JRCBtYWludGFpbiB0aGUgc2FtZSBtZWFuaW5nIGluIHR3byBzZXBh
cmF0ZSBMMiBpc2xhbmRzLiANCiANCldoZW4gVkxBTi1JRCB0YWdnZWQgRXRoZXJuZXQgZnJhbWVz
IHRyYXZlcnNlIGFjcm9zcyB0aGUgT3ZlcmxheSBOZXR3b3JrIA0KZm9yIERhdGEgQ2VudGVyLCB0
aGUgVkxBTi1JRCBjYXJyaWVkIGJ5IEV0aGVybmV0IGZyYW1lcyBsb3NlIGl0cyANCnNpZ25pZmlj
YW5jZS4gSW4gYW5vdGhlciB3b3JkcywgYSBkaWZmZXJlbnQgVkxBTi1JRCBtaWdodCBiZSByZS1h
c3NpZ25lZCANCnRvIHRoZSBFdGhlcm5ldCBmcmFtZXMgYnkgdGhlIEVncmVzcyBvdmVybGF5IGVk
Z2UuIA0KIA0KIA0KNS4zLiBBc3NvY2lhdGluZyBhIFZOSUQgd2l0aCBhbiBlbmRwb2ludC4gDQog
DQpZb3Ugc3RhdGVkIHRoYXQg4oCcdHlwaWNhbGx5LCBpdCBpcyBhIHZpcnR1YWwgTklDIGNvbWlu
ZyB1cCB0aGF0IHRyaWdnZXJzIA0KdGhpcyBhc3NvY2lhdGlvbuKAnS4gSXQgaXMgbm90IHF1aXRl
IHJpZ2h0LiBWaXJ0dWFsIE5JQywgc2FtZSBhcyBwaHlzaWNhbCANCk5JQyBmb3IgYSBub24tdmly
dHVhbGl6ZWQgc2VydmVyLCB1c3VhbGx5IGlzIG5vdCBhd2FyZSBvZiB3aGljaCBuZXR3b3JrIA0K
c2VnbWVudCBpdCBpcyBhdHRhY2hlZCB0by4gRm9yIEV0aGVybmV0IGRhdGEgZnJhbWVzIGNvbWlu
ZyBvdXQgZnJvbSBhIA0KdHJhZGl0aW9uYWwgc2VydmVyIChpLmUuIHNlcnZlciB3aXRob3V0IHZp
cnR1YWxpemF0aW9uKSwgdGhlIGRhdGEgZnJhbWUgaXMgDQpub3QgdGFnZ2VkLCBpLmUuIG5vdCBo
YXZpbmcgVklEIGFzc29jaWF0ZWQgd2l0aCBpdC4gSXQgaXMgdGhlIGZpcnN0IHN3aXRjaCANCndo
aWNoIGFzc2lnbiB0aGUgVklEIGZvciB0aGUgZGF0YSBmcmFtZS4gDQogDQpJdCBpcyBtb3JlIGFj
Y3VyYXRlIHRvIHNheSAg4oCcdmlydHVhbCBzd2l0Y2jigJ0gb3Ig4oCcSHlwZXJ2aXNvcuKAnSB3
aXRoaW4gdGhlIA0Kc2VydmVyIHdoaWNoIGNhbiB0cmlnZ2VyIHRoZSBhc3NvY2lhdGlvbi4gDQog
DQpTaW5jZSBvdmVybGF5IGhlYWRlciBjYW4gYmUgYWxzbyBhZGRlZCBieSBmaXJzdCBhY2Nlc3Mg
c3dpdGNoZXMgKGkuZS4gDQpzd2l0Y2hlcyBub3QgZW1iZWRkZWQgaW4gcGh5c2ljYWwgc2VydmVy
cyksIGl0IGlzIHZlcnkgcG9zc2libGUgdGhhdCAgZGF0YSANCmZyYW1lcyBhcnJpdmluZyBhdCB0
aGUgb3ZlcmxheSBlZGdlIChpLmUuIHRoZSBmaXJzdCBhY2Nlc3Mgc3dpdGNoKSBpcyANCmFscmVh
ZHkgVkxBTiB0YWdnZWQuIFRoZW4gYXNzb2NpYXRpb24gKG9yIG1hcHBpbmcpIGZyb20gdGhlIFZM
QU4tVGFnIHRvIA0KVk5JRCBoYXMgdG8gYmUgb3BlcmF0b3IgYWRtaW5pc3RyYXRlZC4gDQogDQog
DQpMaW5kYSBEdW5iYXINCiBfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fXw0KZGMgbWFpbGluZyBsaXN0DQpkY0BpZXRmLm9yZw0KaHR0cHM6Ly93d3cuaWV0Zi5v
cmcvbWFpbG1hbi9saXN0aW5mby9kYw0KDQoNCg==
--=_alternative 002A636388257992_=
Content-Type: text/html; charset="UTF-8"
Content-Transfer-Encoding: base64

PGZvbnQgc2l6ZT0yIGZhY2U9InNhbnMtc2VyaWYiPkhpIExpbmRhLDwvZm9udD4NCjxicj4NCjxi
cj48Zm9udCBzaXplPTIgZmFjZT0iQ2FsaWJyaSI+Jmd0O1lvdSBzdGF0ZWQgdGhhdCDigJx0eXBp
Y2FsbHksIGl0IGlzDQphIHZpcnR1YWwgTklDIGNvbWluZyB1cCB0aGF0IHRyaWdnZXJzIHRoaXMg
YXNzb2NpYXRpb27igJ0uIEl0IGlzIG5vdCBxdWl0ZQ0KcmlnaHQuIFZpcnR1YWwgTklDLCBzYW1l
IGFzIHBoeXNpY2FsIE5JQyBmb3IgYSBub24tdmlydHVhbGl6ZWQgc2VydmVyLA0KdXN1YWxseSBp
cyBub3QgYXdhcmUgb2Ygd2hpY2ggJmd0O25ldHdvcmsgc2VnbWVudCBpdCBpcyBhdHRhY2hlZCB0
by4gPHU+Rm9yDQpFdGhlcm5ldCBkYXRhIGZyYW1lcyBjb21pbmcgb3V0IGZyb20gYSB0cmFkaXRp
b25hbCBzZXJ2ZXIgKGkuZS4gc2VydmVyDQp3aXRob3V0IHZpcnR1YWxpemF0aW9uKSwgdGhlIGRh
dGEgZnJhbWUgaXMgbm90IHRhZ2dlZDwvdT4sIGkuZS4gbm90IGhhdmluZw0KVklEIGFzc29jaWF0
ZWQgd2l0aCBpdC4gSXQgaXMgJmd0O3RoZSBmaXJzdCBzd2l0Y2ggd2hpY2ggYXNzaWduIHRoZSBW
SUQNCmZvciB0aGUgZGF0YSBmcmFtZS4gPC9mb250Pg0KPGJyPg0KPGJyPjxmb250IHNpemU9MiBm
YWNlPSJDYWxpYnJpIj5SZWdhcmRpbmcgJnF1b3Q7Rm9yIEV0aGVybmV0IDx1PmRhdGEgZnJhbWVz
DQpjb21pbmcgb3V0IGZyb20gYSB0cmFkaXRpb25hbCBzZXJ2ZXIgKGkuZS4gc2VydmVyIHdpdGhv
dXQgdmlydHVhbGl6YXRpb24pLA0KdGhlIGRhdGEgZnJhbWUgaXMgbm90IHRhZ2dlZDwvdT4mcXVv
dDssLi4uJnF1b3Q7IEl0IGlzICZndDt0aGUgZmlyc3Qgc3dpdGNoDQp3aGljaCBhc3NpZ24gdGhl
IFZJRCBmb3IgdGhlIGRhdGEgZnJhbWUmcXVvdDsgLi4uPC9mb250Pg0KPGJyPg0KPGJyPjxmb250
IHNpemU9MiBmYWNlPSJDYWxpYnJpIj5ub24tdmlydHVhbGl6ZWQgc2VydmVycyBhcmUgY2FwYWJs
ZSBvZg0Kc2VuZGluZyBWTEFOIHRhZ2dlZCBwYWNrZXRzLiBJdCBpcyBhIHN0YW5kYXJkIGZlYXR1
cmUgb2YgdG9kYXknYXMgdHJhZGl0aW9uYWwNCk9wZXJhdGluZyBTeXN0ZW1zIChOSUMgZHJpdmVy
cykgJm5ic3A7dG8gYWxsb3cgVkxBTnMgdG8gYmUgYXNzb2NpYXRlZCB3aXRoDQphbiBldGhlcm5l
dCBpbnRlcmZhY2UuPC9mb250Pg0KPGJyPjxmb250IHNpemU9MiBmYWNlPSJDYWxpYnJpIj5TbyBh
IHRyYWRpdGlvbmFsIG5vbi12aXJ0dWFsaXplZCBzZXJ2ZXINCm1heSBzZW5kIHRhZ2dlZCBhbmQg
dW50YWdnZWQgZnJhbWVzLjwvZm9udD4NCjxicj4NCjxicj48Zm9udCBzaXplPTIgZmFjZT0iQ2Fs
aWJyaSI+VGhhbmtzLDwvZm9udD4NCjxicj48Zm9udCBzaXplPTIgZmFjZT0iQ2FsaWJyaSI+UmFr
ZXNoLjwvZm9udD4NCjxicj48Zm9udCBzaXplPTIgZmFjZT0iQ2FsaWJyaSI+LT0tPC9mb250Pg0K
PGJyPg0KPGJyPg0KPGJyPg0KPGJyPg0KPGJyPg0KPGJyPg0KPGJyPjxmb250IHNpemU9MSBjb2xv
cj0jNWY1ZjVmIGZhY2U9InNhbnMtc2VyaWYiPkZyb206ICZuYnNwOyAmbmJzcDsgJm5ic3A7DQom
bmJzcDs8L2ZvbnQ+PGZvbnQgc2l6ZT0xIGZhY2U9InNhbnMtc2VyaWYiPkxpbmRhIER1bmJhciAm
bHQ7bGluZGEuZHVuYmFyQGh1YXdlaS5jb20mZ3Q7PC9mb250Pg0KPGJyPjxmb250IHNpemU9MSBj
b2xvcj0jNWY1ZjVmIGZhY2U9InNhbnMtc2VyaWYiPlRvOiAmbmJzcDsgJm5ic3A7ICZuYnNwOw0K
Jm5ic3A7PC9mb250Pjxmb250IHNpemU9MSBmYWNlPSJzYW5zLXNlcmlmIj5uYXJ0ZW5Acm90YWxh
LnJhbGVpZ2guaWJtLmNvbSwNCiZxdW90O2RhdmlkLmJsYWNrQGVtYy5jb20mcXVvdDsgJmx0O2Rh
dmlkLmJsYWNrQGVtYy5jb20mZ3Q7LCBNdXJhcmkgU3JpZGhhcmFuDQombHQ7bXVyYXJpc0BtaWNy
b3NvZnQuY29tJmd0OywgRGluZXNoIER1dHQgJmx0O2RkdXR0QGNpc2NvLmNvbSZndDssICZxdW90
O2tyZWVnZXJAY2lzY28uY29tJnF1b3Q7DQombHQ7a3JlZWdlckBjaXNjby5jb20mZ3Q7PC9mb250
Pg0KPGJyPjxmb250IHNpemU9MSBjb2xvcj0jNWY1ZjVmIGZhY2U9InNhbnMtc2VyaWYiPkNjOiAm
bmJzcDsgJm5ic3A7ICZuYnNwOw0KJm5ic3A7PC9mb250Pjxmb250IHNpemU9MSBmYWNlPSJzYW5z
LXNlcmlmIj4mcXVvdDtkY0BpZXRmLm9yZyZxdW90Ow0KJmx0O2RjQGlldGYub3JnJmd0OzwvZm9u
dD4NCjxicj48Zm9udCBzaXplPTEgY29sb3I9IzVmNWY1ZiBmYWNlPSJzYW5zLXNlcmlmIj5EYXRl
OiAmbmJzcDsgJm5ic3A7ICZuYnNwOw0KJm5ic3A7PC9mb250Pjxmb250IHNpemU9MSBmYWNlPSJz
YW5zLXNlcmlmIj4wMS8yNi8yMDEyIDA0OjI1IFBNPC9mb250Pg0KPGJyPjxmb250IHNpemU9MSBj
b2xvcj0jNWY1ZjVmIGZhY2U9InNhbnMtc2VyaWYiPlN1YmplY3Q6ICZuYnNwOyAmbmJzcDsNCiZu
YnNwOyAmbmJzcDs8L2ZvbnQ+PGZvbnQgc2l6ZT0xIGZhY2U9InNhbnMtc2VyaWYiPltkY10gY29t
bWVudHMNCmFuZCBzdWdnZXN0aW9ucyB0byBkcmFmdC1uYXJ0ZW4tbnYwMy1vdmVybGF5LXByb2Js
ZW0tc3RhdG1lbnQtMDE8L2ZvbnQ+DQo8YnI+PGZvbnQgc2l6ZT0xIGNvbG9yPSM1ZjVmNWYgZmFj
ZT0ic2Fucy1zZXJpZiI+U2VudCBieTogJm5ic3A7ICZuYnNwOw0KJm5ic3A7ICZuYnNwOzwvZm9u
dD48Zm9udCBzaXplPTEgZmFjZT0ic2Fucy1zZXJpZiI+ZGMtYm91bmNlc0BpZXRmLm9yZzwvZm9u
dD4NCjxicj4NCjxociBub3NoYWRlPg0KPGJyPg0KPGJyPg0KPGJyPjxmb250IHNpemU9MiBmYWNl
PSJDYWxpYnJpIj5UaG9tYXMsIGV0IGFsLCA8L2ZvbnQ+DQo8YnI+PGZvbnQgc2l6ZT0yIGZhY2U9
IkNhbGlicmkiPiZuYnNwOzwvZm9udD4NCjxicj48Zm9udCBzaXplPTIgZmFjZT0iQ2FsaWJyaSI+
SGVyZSBhcmUgbXkgY29tbWVudHMgdG8gZHJhZnQtbmFydGVuLW52MDMtb3ZlcmxheS1wcm9ibGVt
LXN0YXRtZW50LTAxDQphbmQgc3VnZ2VzdGVkIHdvcmRpbmcgY2hhbmdlOjwvZm9udD4NCjxicj48
Zm9udCBzaXplPTIgZmFjZT0iQ2FsaWJyaSI+Jm5ic3A7PC9mb250Pg0KPGJyPjxmb250IHNpemU9
MiBmYWNlPSJDYWxpYnJpIj4zLjEuIExpbWl0YXRpb25zIG9mIEV4aXN0aW5nIFZpcnR1YWwgTmV0
d29yaw0KTW9kZWxzLiA8L2ZvbnQ+DQo8YnI+PGZvbnQgc2l6ZT0yIGZhY2U9IkNhbGlicmkiPiZu
YnNwOzwvZm9udD4NCjxicj48Zm9udCBzaXplPTIgZmFjZT0iQ2FsaWJyaSI+U2luY2UgSUVFRTgw
Mi4xIGRlZmluZWQgVkxBTiBzZXBhcmF0aW9uDQppcyBtZW50aW9uZWQsIGl0IHdvdWxkIGJlIGFw
cHJvcHJpYXRlIHRvIG1lbnRpb24gUEJC4oCZcyBJU0lEIHNlcGFyYXRpb24uDQo8L2ZvbnQ+DQo8
YnI+PGZvbnQgc2l6ZT0yIGZhY2U9IkNhbGlicmkiPiZuYnNwOzwvZm9udD4NCjxicj48Zm9udCBz
aXplPTIgZmFjZT0iQ2FsaWJyaSI+VGhlIGxpbWl0YXRpb24gZm9yIFBCQiBhbmQgVkxBTiBzaG91
bGQNCmluY2x1ZGUgdGhhdCBNQUMgYWRkcmVzc2VzIGNhbuKAmXQgYmUgYWdncmVnYXRlZCwgdGhl
cmVmb3JlIGZvcndhcmRpbmcgdGFibGUNCmNhbiBiZSB2ZXJ5IGxhcmdlIGZvciBsYXJnZSBkYXRh
IGNlbnRlcnMuIDwvZm9udD4NCjxicj48Zm9udCBzaXplPTIgZmFjZT0iQ2FsaWJyaSI+Jm5ic3A7
PC9mb250Pg0KPGJyPjxmb250IHNpemU9MiBmYWNlPSJDYWxpYnJpIj5TZWNvbmQgcGFyYWdyYXBo
OiBXaHkg4oCcVkxBTnMgYXJlIGEgcHVyZQ0KYnJpZGdpbmcgY29uc3RydWN0IHdoaWxlIFZSRiBp
cyBwdXJlIHJvdXRpbmcgY29uc3RydWN04oCdIGlzIGEgcHJvYmxlbT8NCjwvZm9udD4NCjxicj48
Zm9udCBzaXplPTIgZmFjZT0iQ2FsaWJyaSI+Jm5ic3A7PC9mb250Pg0KPGJyPjxmb250IHNpemU9
MiBmYWNlPSJDYWxpYnJpIj40LiBOZXR3b3JrIE92ZXJsYXlzPC9mb250Pg0KPGJyPjxmb250IHNp
emU9MiBmYWNlPSJDYWxpYnJpIj4mbmJzcDs8L2ZvbnQ+DQo8YnI+PGZvbnQgc2l6ZT0yIGZhY2U9
IkNhbGlicmkiPlNob3VsZCBhZGQgYSBzdWJzZWN0aW9uIHRvIGRlc2NyaWJlIFZpcnR1YWwNCk5l
dHdvcmsgSW5zdGFuY2UgSUQuIDwvZm9udD4NCjxicj48Zm9udCBzaXplPTIgZmFjZT0iQ2FsaWJy
aSI+Jm5ic3A7PC9mb250Pg0KPGJyPjxmb250IHNpemU9MiBmYWNlPSJDYWxpYnJpIj40LnguIFZp
cnR1YWwgTmV0d29yayBJbnN0YW5jZSBJRDwvZm9udD4NCjxicj48Zm9udCBzaXplPTIgZmFjZT0i
Q2FsaWJyaSI+Jm5ic3A7PC9mb250Pg0KPGJyPjxmb250IHNpemU9MiBmYWNlPSJDYWxpYnJpIj5W
aXJ0dWFsIE5ldHdvcmsgSW5zdGFuY2UgaXMgZm9yIHNlZ3JlZ2F0aW5nDQp0cmFmZmljIGJlbG9u
Z2luZyB0byBkaWZmZXJlbnQgdGVuYW50cyBvciBkaWZmZXJlbnQgem9uZXMgb2Ygb25lIHRlbmFu
dC4NCldoZW4gYSBkYXRhIGNlbnRlciB1c2VzIE92ZXJsYXkgTmV0d29yayB0byBoaWRlIGhvc3Rz
IGFkZHJlc3NlcywgaXQgaXMNCmltcG9ydGFudCB0aGF0IFZpcnR1YWwgTmV0d29yayBJbnN0YW5j
ZSBpZGVudGlmaWVyIGNhbiBwcm9wZXJseSByZXByZXNlbnQNCnRoZSB6b25lcyBvciB0aGUgdGVu
YW50cyBpbiBlbnRpcmUgZGF0YSBjZW50ZXIsIGVzcGVjaWFsbHkgd2hlbiB0aGUgb3ZlcmxheQ0K
ZWRnZSBub2RlcyBhcmUgYWNjZXNzIHN3aXRjaGVzLCBpLmUuIG5vdCBlbWJlZGRlZCBpbiBoeXBl
cnZpc29ycy4gPC9mb250Pg0KPGJyPjxmb250IHNpemU9MiBmYWNlPSJDYWxpYnJpIj4mbmJzcDs8
L2ZvbnQ+DQo8YnI+PGZvbnQgc2l6ZT0yIGZhY2U9IkNhbGlicmkiPldoZW4gb3ZlcmxheSBlZGdl
IG5vZGVzIGFyZSBhY2Nlc3Mgc3dpdGNoZXMsDQp0aGUgZGF0YSBmcmFtZXMgYmVmb3JlIGVudGVy
aW5nIHRoZSBPdmVybGF5IE5ldHdvcmsgb3IgYWZ0ZXIgZXhpdGluZyB0aGUNCk92ZXJsYXkgTmV0
d29yayBtaWdodCBjYXJyeSB0cmFkaXRpb25hbCBWTEFOLUlEIGZvciBwcm9wZXIgdHJhZmZpYyBz
ZWdyZWdhdGlvbi4NClRoZSBWaXJ0dWFsIE5ldHdvcmsgSW5zdGFuY2UgSUQgdmFsdWUgY2Fycmll
ZCBieSB0aGUgT3ZlcmxheSBIZWFkZXIgJm5ic3A7b2YNCnRoZSBkYXRhIGZyYW1lcyBtaWdodCBi
ZSAyNCBiaXRzIChhcyBkZXNjcmliZWQgaW4gMy4yKS4gVGhvc2UgVkxBTi1JRCBmb3INCmRhdGEg
ZnJhbWVzIHVuZGVyIGVhY2ggb3ZlcmxheSBlZGdlIG5vZGUgYXJlIG9ubHkgbG9jYWxseSBzaWdu
aWZpY2FudC4NClByb3BlciBtYXBwaW5nIGhhcyB0byBiZSBtYWludGFpbmVkLiA8L2ZvbnQ+DQo8
YnI+PGZvbnQgc2l6ZT0yIGZhY2U9IkNhbGlicmkiPiZuYnNwOzwvZm9udD4NCjxicj48Zm9udCBz
aXplPTIgZmFjZT0iQ2FsaWJyaSI+Ni4yIChUUklMTCkgJmFtcDs2LjMgKEwyVlBOKTwvZm9udD4N
Cjxicj48Zm9udCBzaXplPTIgZmFjZT0iQ2FsaWJyaSI+Jm5ic3A7PC9mb250Pg0KPGJyPjxmb250
IHNpemU9MiBmYWNlPSJDYWxpYnJpIj5JdCBpcyBuZWNlc3NhcnkgdG8gcG9pbnQgb3V0IHRoYXQg
Ym90aA0KVFJJTEwgYW5kIEwyVlBOIGNhcnJ5IHRoZSBWTEFOLUlEIGVtYmVkZGVkIGluIHRoZSBv
cmlnaW5hbCBFdGhlcm5ldCBmcmFtZXMNCmFjcm9zcyB0aGUgT3ZlcmxheSBOZXR3b3JrIGFuZCB0
aGUgVkxBTi1JRCBtYWludGFpbiB0aGUgc2FtZSBtZWFuaW5nIGluDQp0d28gc2VwYXJhdGUgTDIg
aXNsYW5kcy4gPC9mb250Pg0KPGJyPjxmb250IHNpemU9MiBmYWNlPSJDYWxpYnJpIj4mbmJzcDs8
L2ZvbnQ+DQo8YnI+PGZvbnQgc2l6ZT0yIGZhY2U9IkNhbGlicmkiPldoZW4gVkxBTi1JRCB0YWdn
ZWQgRXRoZXJuZXQgZnJhbWVzIHRyYXZlcnNlDQphY3Jvc3MgdGhlIE92ZXJsYXkgTmV0d29yayBm
b3IgRGF0YSBDZW50ZXIsIHRoZSBWTEFOLUlEIGNhcnJpZWQgYnkgRXRoZXJuZXQNCmZyYW1lcyBs
b3NlIGl0cyBzaWduaWZpY2FuY2UuIEluIGFub3RoZXIgd29yZHMsIGEgZGlmZmVyZW50IFZMQU4t
SUQgbWlnaHQNCmJlIHJlLWFzc2lnbmVkIHRvIHRoZSBFdGhlcm5ldCBmcmFtZXMgYnkgdGhlIEVn
cmVzcyBvdmVybGF5IGVkZ2UuIDwvZm9udD4NCjxicj48Zm9udCBzaXplPTIgZmFjZT0iQ2FsaWJy
aSI+Jm5ic3A7PC9mb250Pg0KPGJyPjxmb250IHNpemU9MiBmYWNlPSJDYWxpYnJpIj4mbmJzcDs8
L2ZvbnQ+DQo8YnI+PGZvbnQgc2l6ZT0yIGZhY2U9IkNhbGlicmkiPjUuMy4gQXNzb2NpYXRpbmcg
YSBWTklEIHdpdGggYW4gZW5kcG9pbnQuDQo8L2ZvbnQ+DQo8YnI+PGZvbnQgc2l6ZT0yIGZhY2U9
IkNhbGlicmkiPiZuYnNwOyAmbmJzcDsgJm5ic3A7ICZuYnNwOyAmbmJzcDsgJm5ic3A7DQombmJz
cDsgJm5ic3A7PC9mb250Pg0KPGJyPjxmb250IHNpemU9MiBmYWNlPSJDYWxpYnJpIj5Zb3Ugc3Rh
dGVkIHRoYXQg4oCcdHlwaWNhbGx5LCBpdCBpcyBhIHZpcnR1YWwNCk5JQyBjb21pbmcgdXAgdGhh
dCB0cmlnZ2VycyB0aGlzIGFzc29jaWF0aW9u4oCdLiBJdCBpcyBub3QgcXVpdGUgcmlnaHQuDQpW
aXJ0dWFsIE5JQywgc2FtZSBhcyBwaHlzaWNhbCBOSUMgZm9yIGEgbm9uLXZpcnR1YWxpemVkIHNl
cnZlciwgdXN1YWxseQ0KaXMgbm90IGF3YXJlIG9mIHdoaWNoIG5ldHdvcmsgc2VnbWVudCBpdCBp
cyBhdHRhY2hlZCB0by4gRm9yIEV0aGVybmV0IGRhdGENCmZyYW1lcyBjb21pbmcgb3V0IGZyb20g
YSB0cmFkaXRpb25hbCBzZXJ2ZXIgKGkuZS4gc2VydmVyIHdpdGhvdXQgdmlydHVhbGl6YXRpb24p
LA0KdGhlIGRhdGEgZnJhbWUgaXMgbm90IHRhZ2dlZCwgaS5lLiBub3QgaGF2aW5nIFZJRCBhc3Nv
Y2lhdGVkIHdpdGggaXQuIEl0DQppcyB0aGUgZmlyc3Qgc3dpdGNoIHdoaWNoIGFzc2lnbiB0aGUg
VklEIGZvciB0aGUgZGF0YSBmcmFtZS4gPC9mb250Pg0KPGJyPjxmb250IHNpemU9MiBmYWNlPSJD
YWxpYnJpIj4mbmJzcDs8L2ZvbnQ+DQo8YnI+PGZvbnQgc2l6ZT0yIGZhY2U9IkNhbGlicmkiPkl0
IGlzIG1vcmUgYWNjdXJhdGUgdG8gc2F5ICZuYnNwO+KAnHZpcnR1YWwNCnN3aXRjaOKAnSBvciDi
gJxIeXBlcnZpc29y4oCdIHdpdGhpbiB0aGUgc2VydmVyIHdoaWNoIGNhbiB0cmlnZ2VyIHRoZSBh
c3NvY2lhdGlvbi4NCjwvZm9udD4NCjxicj48Zm9udCBzaXplPTIgZmFjZT0iQ2FsaWJyaSI+Jm5i
c3A7PC9mb250Pg0KPGJyPjxmb250IHNpemU9MiBmYWNlPSJDYWxpYnJpIj5TaW5jZSBvdmVybGF5
IGhlYWRlciBjYW4gYmUgYWxzbyBhZGRlZA0KYnkgZmlyc3QgYWNjZXNzIHN3aXRjaGVzIChpLmUu
IHN3aXRjaGVzIG5vdCBlbWJlZGRlZCBpbiBwaHlzaWNhbCBzZXJ2ZXJzKSwNCml0IGlzIHZlcnkg
cG9zc2libGUgdGhhdCAmbmJzcDtkYXRhIGZyYW1lcyBhcnJpdmluZyBhdCB0aGUgb3ZlcmxheSBl
ZGdlDQooaS5lLiB0aGUgZmlyc3QgYWNjZXNzIHN3aXRjaCkgaXMgYWxyZWFkeSBWTEFOIHRhZ2dl
ZC4gVGhlbiBhc3NvY2lhdGlvbg0KKG9yIG1hcHBpbmcpIGZyb20gdGhlIFZMQU4tVGFnIHRvIFZO
SUQgaGFzIHRvIGJlIG9wZXJhdG9yIGFkbWluaXN0cmF0ZWQuDQo8L2ZvbnQ+DQo8YnI+PGZvbnQg
c2l6ZT0yIGZhY2U9IkNhbGlicmkiPiZuYnNwOzwvZm9udD4NCjxicj48Zm9udCBzaXplPTIgZmFj
ZT0iQ2FsaWJyaSI+Jm5ic3A7PC9mb250Pg0KPGJyPjxmb250IHNpemU9MiBmYWNlPSJDYWxpYnJp
Ij5MaW5kYSBEdW5iYXI8L2ZvbnQ+DQo8YnI+PGZvbnQgc2l6ZT0yIGZhY2U9IkNhbGlicmkiPiZu
YnNwOzwvZm9udD48dHQ+PGZvbnQgc2l6ZT0yPl9fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fPGJyPg0KZGMgbWFpbGluZyBsaXN0PGJyPg0KZGNAaWV0Zi5vcmc8
YnI+DQo8L2ZvbnQ+PC90dD48YSBocmVmPWh0dHBzOi8vd3d3LmlldGYub3JnL21haWxtYW4vbGlz
dGluZm8vZGM+PHR0Pjxmb250IHNpemU9Mj5odHRwczovL3d3dy5pZXRmLm9yZy9tYWlsbWFuL2xp
c3RpbmZvL2RjPC9mb250PjwvdHQ+PC9hPjx0dD48Zm9udCBzaXplPTI+PGJyPg0KPC9mb250Pjwv
dHQ+DQo8YnI+DQo=
--=_alternative 002A636388257992_=--


From narten@us.ibm.com  Fri Jan 27 05:43:56 2012
Return-Path: <narten@us.ibm.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id BFFFE21F852B for <dc@ietfa.amsl.com>; Fri, 27 Jan 2012 05:43:56 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -109.456
X-Spam-Level: 
X-Spam-Status: No, score=-109.456 tagged_above=-999 required=5 tests=[AWL=1.143, BAYES_00=-2.599, RCVD_IN_DNSWL_HI=-8, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 0fX+o2knvTSs for <dc@ietfa.amsl.com>; Fri, 27 Jan 2012 05:43:56 -0800 (PST)
Received: from e37.co.us.ibm.com (e37.co.us.ibm.com [32.97.110.158]) by ietfa.amsl.com (Postfix) with ESMTP id 3A7D621F8528 for <dc@ietf.org>; Fri, 27 Jan 2012 05:43:55 -0800 (PST)
Received: from /spool/local by e37.co.us.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for <dc@ietf.org> from <narten@us.ibm.com>; Fri, 27 Jan 2012 06:43:54 -0700
Received: from d03dlp01.boulder.ibm.com (9.17.202.177) by e37.co.us.ibm.com (192.168.1.137) with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted;  Fri, 27 Jan 2012 06:42:49 -0700
Received: from d03relay05.boulder.ibm.com (d03relay05.boulder.ibm.com [9.17.195.107]) by d03dlp01.boulder.ibm.com (Postfix) with ESMTP id 4F4B61FF004C for <dc@ietf.org>; Fri, 27 Jan 2012 06:42:48 -0700 (MST)
Received: from d03av03.boulder.ibm.com (d03av03.boulder.ibm.com [9.17.195.169]) by d03relay05.boulder.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id q0RDglaW135340 for <dc@ietf.org>; Fri, 27 Jan 2012 06:42:47 -0700
Received: from d03av03.boulder.ibm.com (loopback [127.0.0.1]) by d03av03.boulder.ibm.com (8.14.4/8.13.1/NCO v10.0 AVout) with ESMTP id q0RDgkD7030861 for <dc@ietf.org>; Fri, 27 Jan 2012 06:42:47 -0700
Received: from cichlid.raleigh.ibm.com (sig-9-76-135-189.mts.ibm.com [9.76.135.189]) by d03av03.boulder.ibm.com (8.14.4/8.13.1/NCO v10.0 AVin) with ESMTP id q0RDgjcd030848 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Fri, 27 Jan 2012 06:42:46 -0700
Received: from cichlid.raleigh.ibm.com (localhost [127.0.0.1]) by cichlid.raleigh.ibm.com (8.14.5/8.12.5) with ESMTP id q0RDghlw025377; Fri, 27 Jan 2012 08:42:44 -0500
Message-Id: <201201271342.q0RDghlw025377@cichlid.raleigh.ibm.com>
To: Paul Unbehagen <paul@unbehagen.net>
In-reply-to: <C5CE8493-6543-4EB0-BCEB-99EEBA3FD59E@unbehagen.net>
References: <4A95BA014132FF49AE685FAB4B9F17F632E17F2D@dfweml505-mbx> <C5CE8493-6543-4EB0-BCEB-99EEBA3FD59E@unbehagen.net>
Comments: In-reply-to Paul Unbehagen <paul@unbehagen.net> message dated "Thu, 26 Jan 2012 17:50:44 -0700."
Date: Fri, 27 Jan 2012 08:42:43 -0500
From: Thomas Narten <narten@us.ibm.com>
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 12012713-7408-0000-0000-0000023499AB
Cc: "dc@ietf.org" <dc@ietf.org>, "david.black@emc.com" <david.black@emc.com>, Dinesh Dutt <ddutt@cisco.com>, Linda Dunbar <linda.dunbar@huawei.com>, Murari Sridharan <muraris@microsoft.com>, "kreeger@cisco.com" <kreeger@cisco.com>
Subject: Re: [dc] comments and suggestions to draft-narten-nv03-overlay-problem-statment-01
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 27 Jan 2012 13:43:56 -0000

Paul Unbehagen <paul@unbehagen.net> writes:

> On Jan 26, 2012, at 5:22 PM, Linda Dunbar <linda.dunbar@huawei.com> wrote:

> > Since IEEE802.1 defined VLAN separation is mentioned, it would be
> > appropriate to mention PBB's ISID separation.

> Agreed, clear service separation and the 16 million services should
> be clearly explained, additionally the ease of edge provisioning
> over STP's per hop configuration.

Will do.

> > The limitation for PBB and VLAN should include that MAC addresses
> > can't be aggregated, therefore forwarding table can be very large
> > for large data centers.

> Disagree, this isn't what we see in live deployments. Since the Mac
> learning of the host stations only happens at the edge switches and
> the services are distributed across many edge switches the ISID
> service Mac table sizes don't end up being that large on any given
> node. Additionally the core never sees any macs but the nodes
> participating in the backbone from edge switch to edge switch.

I think we need to distinguish between PBB-V and PBB-M.

With PBB-M, you get mac-in-mac encapsulation, so the PBB Bridge nodes
(as part of forwarding) only look at (and build tables for) the MACs
in the outer header. Consequently, you don't get MAC table explosion
here as your core network gets bigger. Is this what you mean above?

For PBB-V, the PBB bridges are routing on client MAC addrs (C-MACs),
so you presumably will run into issues with table size as your network
core increases in size.

Agreed?

Thomas


From linda.dunbar@huawei.com  Fri Jan 27 08:49:11 2012
Return-Path: <linda.dunbar@huawei.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 4ED3521F85EA for <dc@ietfa.amsl.com>; Fri, 27 Jan 2012 08:49:11 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.514
X-Spam-Level: 
X-Spam-Status: No, score=-2.514 tagged_above=-999 required=5 tests=[AWL=0.084,  BAYES_00=-2.599, HTML_MESSAGE=0.001]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id vBmTgJoJLioA for <dc@ietfa.amsl.com>; Fri, 27 Jan 2012 08:49:10 -0800 (PST)
Received: from dfwrgout.huawei.com (dfwrgout.huawei.com [206.16.17.72]) by ietfa.amsl.com (Postfix) with ESMTP id 66F1E21F85D4 for <dc@ietf.org>; Fri, 27 Jan 2012 08:49:10 -0800 (PST)
Received: from 172.18.9.243 (EHLO dfweml202-edg.china.huawei.com) ([172.18.9.243]) by dfwrg01-dlp.huawei.com (MOS 4.2.3-GA FastPath) with ESMTP id ACY91165; Fri, 27 Jan 2012 11:49:10 -0500 (EST)
Received: from DFWEML403-HUB.china.huawei.com (10.193.5.151) by dfweml202-edg.china.huawei.com (172.18.9.108) with Microsoft SMTP Server (TLS) id 14.1.323.3; Fri, 27 Jan 2012 08:47:27 -0800
Received: from DFWEML505-MBX.china.huawei.com ([10.124.31.100]) by dfweml403-hub.china.huawei.com ([10.193.5.151]) with mapi id 14.01.0323.003; Fri, 27 Jan 2012 08:47:10 -0800
From: Linda Dunbar <linda.dunbar@huawei.com>
To: Paul Unbehagen <paul@unbehagen.net>
Thread-Topic: [dc] comments and suggestions to draft-narten-nv03-overlay-problem-statment-01
Thread-Index: Aczcic4dhzNCqBUxQSqOznX8P2avFQARvMcAABCCe8A=
Date: Fri, 27 Jan 2012 16:47:09 +0000
Message-ID: <4A95BA014132FF49AE685FAB4B9F17F632E181E2@dfweml505-mbx>
References: <4A95BA014132FF49AE685FAB4B9F17F632E17F2D@dfweml505-mbx> <C5CE8493-6543-4EB0-BCEB-99EEBA3FD59E@unbehagen.net>
In-Reply-To: <C5CE8493-6543-4EB0-BCEB-99EEBA3FD59E@unbehagen.net>
Accept-Language: en-US, zh-CN
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.192.11.97]
Content-Type: multipart/alternative; boundary="_000_4A95BA014132FF49AE685FAB4B9F17F632E181E2dfweml505mbx_"
MIME-Version: 1.0
X-CFilter-Loop: Reflected
Cc: Thomas Narten <narten@us.ibm.com>, "dc@ietf.org" <dc@ietf.org>, "david.black@emc.com" <david.black@emc.com>, Dinesh Dutt <ddutt@cisco.com>, Murari Sridharan <muraris@microsoft.com>, "kreeger@cisco.com" <kreeger@cisco.com>
Subject: Re: [dc] comments and suggestions to draft-narten-nv03-overlay-problem-statment-01
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 27 Jan 2012 16:49:11 -0000

--_000_4A95BA014132FF49AE685FAB4B9F17F632E181E2dfweml505mbx_
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64

UGF1bCwNCg0KU2VlIG15IGNvbW1lbnRzIGJlbG93Og0KDQoNCg0KVGhlIGxpbWl0YXRpb24gZm9y
IFBCQiBhbmQgVkxBTiBzaG91bGQgaW5jbHVkZSB0aGF0IE1BQyBhZGRyZXNzZXMgY2Fu4oCZdCBi
ZSBhZ2dyZWdhdGVkLCB0aGVyZWZvcmUgZm9yd2FyZGluZyB0YWJsZSBjYW4gYmUgdmVyeSBsYXJn
ZSBmb3IgbGFyZ2UgZGF0YSBjZW50ZXJzLg0KDQpEaXNhZ3JlZSwgdGhpcyBpc24ndCB3aGF0IHdl
IHNlZSBpbiBsaXZlIGRlcGxveW1lbnRzLiBTaW5jZSB0aGUgTWFjIGxlYXJuaW5nIG9mIHRoZSBo
b3N0IHN0YXRpb25zIG9ubHkgaGFwcGVucyBhdCB0aGUgZWRnZSBzd2l0Y2hlcyBhbmQgdGhlIHNl
cnZpY2VzIGFyZSBkaXN0cmlidXRlZCBhY3Jvc3MgbWFueSBlZGdlIHN3aXRjaGVzIHRoZSBJU0lE
IHNlcnZpY2UgTWFjIHRhYmxlIHNpemVzIGRvbid0IGVuZCB1cCBiZWluZyB0aGF0IGxhcmdlIG9u
IGFueSBnaXZlbiBub2RlLiBBZGRpdGlvbmFsbHkgdGhlIGNvcmUgbmV2ZXIgc2VlcyBhbnkgbWFj
cyBidXQgdGhlIG5vZGVzIHBhcnRpY2lwYXRpbmcgaW4gdGhlIGJhY2tib25lIGZyb20gZWRnZSBz
d2l0Y2ggdG8gZWRnZSBzd2l0Y2guDQoNCltMaW5kYV0gSWYgUEJC4oCZcyBNQUMtaW4tTUFDIE92
ZXJsYXkgZW5jYXBzdWxhdGlvbiBpcyBkb25lIGF0IHRoZSBhY2Nlc3Mgc3dpdGNoZXMgaW4gZGF0
YSBjZW50ZXIsIHRoZW4gdGhlIGNvcmUgc3dpdGNoZXMgZm9yd2FyZGluZyB0YWJsZSB3b27igJl0
IGV4cGxvZGUuIEhvd2V2ZXIsIGlmIHRoZSBNYWMtaW4tTWFjIG92ZXJsYXkgZW5jYXBzdWxhdGlv
biBpcyBwZXJmb3JtZWQgYnkgc2VydmVyc+KAmSBoeXBlcnZpc29yLCB0aGUgc3dpdGNoZXPigJkg
Zm9yd2FyZGluZyB0YWJsZSBjb3VsZCBleHBsb2RlIGlmIHRoZXJlIGFyZSBodW5kcmVkcyBvZiB0
aG91c2FuZHMgb2Ygc2VydmVycyBpbiBkYXRhIGNlbnRlciwgd2hpY2ggaXMgZ2V0dGluZyBtb3Jl
IGFuZCBtb3JlIGNvbW1vbiBpbiBsYXJnZSBkYXRhIGNlbnRlcnMuDQoNCkxpbmRhDQo=

--_000_4A95BA014132FF49AE685FAB4B9F17F632E181E2dfweml505mbx_
Content-Type: text/html; charset="utf-8"
Content-Transfer-Encoding: base64

PGh0bWwgeG1sbnM6dj0idXJuOnNjaGVtYXMtbWljcm9zb2Z0LWNvbTp2bWwiIHhtbG5zOm89InVy
bjpzY2hlbWFzLW1pY3Jvc29mdC1jb206b2ZmaWNlOm9mZmljZSIgeG1sbnM6dz0idXJuOnNjaGVt
YXMtbWljcm9zb2Z0LWNvbTpvZmZpY2U6d29yZCIgeG1sbnM6eD0idXJuOnNjaGVtYXMtbWljcm9z
b2Z0LWNvbTpvZmZpY2U6ZXhjZWwiIHhtbG5zOnA9InVybjpzY2hlbWFzLW1pY3Jvc29mdC1jb206
b2ZmaWNlOnBvd2VycG9pbnQiIHhtbG5zOmE9InVybjpzY2hlbWFzLW1pY3Jvc29mdC1jb206b2Zm
aWNlOmFjY2VzcyIgeG1sbnM6ZHQ9InV1aWQ6QzJGNDEwMTAtNjVCMy0xMWQxLUEyOUYtMDBBQTAw
QzE0ODgyIiB4bWxuczpzPSJ1dWlkOkJEQzZFM0YwLTZEQTMtMTFkMS1BMkEzLTAwQUEwMEMxNDg4
MiIgeG1sbnM6cnM9InVybjpzY2hlbWFzLW1pY3Jvc29mdC1jb206cm93c2V0IiB4bWxuczp6PSIj
Um93c2V0U2NoZW1hIiB4bWxuczpiPSJ1cm46c2NoZW1hcy1taWNyb3NvZnQtY29tOm9mZmljZTpw
dWJsaXNoZXIiIHhtbG5zOnNzPSJ1cm46c2NoZW1hcy1taWNyb3NvZnQtY29tOm9mZmljZTpzcHJl
YWRzaGVldCIgeG1sbnM6Yz0idXJuOnNjaGVtYXMtbWljcm9zb2Z0LWNvbTpvZmZpY2U6Y29tcG9u
ZW50OnNwcmVhZHNoZWV0IiB4bWxuczpvZGM9InVybjpzY2hlbWFzLW1pY3Jvc29mdC1jb206b2Zm
aWNlOm9kYyIgeG1sbnM6b2E9InVybjpzY2hlbWFzLW1pY3Jvc29mdC1jb206b2ZmaWNlOmFjdGl2
YXRpb24iIHhtbG5zOmh0bWw9Imh0dHA6Ly93d3cudzMub3JnL1RSL1JFQy1odG1sNDAiIHhtbG5z
OnE9Imh0dHA6Ly9zY2hlbWFzLnhtbHNvYXAub3JnL3NvYXAvZW52ZWxvcGUvIiB4bWxuczpydGM9
Imh0dHA6Ly9taWNyb3NvZnQuY29tL29mZmljZW5ldC9jb25mZXJlbmNpbmciIHhtbG5zOkQ9IkRB
VjoiIHhtbG5zOlJlcGw9Imh0dHA6Ly9zY2hlbWFzLm1pY3Jvc29mdC5jb20vcmVwbC8iIHhtbG5z
Om10PSJodHRwOi8vc2NoZW1hcy5taWNyb3NvZnQuY29tL3NoYXJlcG9pbnQvc29hcC9tZWV0aW5n
cy8iIHhtbG5zOngyPSJodHRwOi8vc2NoZW1hcy5taWNyb3NvZnQuY29tL29mZmljZS9leGNlbC8y
MDAzL3htbCIgeG1sbnM6cHBkYT0iaHR0cDovL3d3dy5wYXNzcG9ydC5jb20vTmFtZVNwYWNlLnhz
ZCIgeG1sbnM6b2lzPSJodHRwOi8vc2NoZW1hcy5taWNyb3NvZnQuY29tL3NoYXJlcG9pbnQvc29h
cC9vaXMvIiB4bWxuczpkaXI9Imh0dHA6Ly9zY2hlbWFzLm1pY3Jvc29mdC5jb20vc2hhcmVwb2lu
dC9zb2FwL2RpcmVjdG9yeS8iIHhtbG5zOmRzPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwLzA5L3ht
bGRzaWcjIiB4bWxuczpkc3A9Imh0dHA6Ly9zY2hlbWFzLm1pY3Jvc29mdC5jb20vc2hhcmVwb2lu
dC9kc3AiIHhtbG5zOnVkYz0iaHR0cDovL3NjaGVtYXMubWljcm9zb2Z0LmNvbS9kYXRhL3VkYyIg
eG1sbnM6eHNkPSJodHRwOi8vd3d3LnczLm9yZy8yMDAxL1hNTFNjaGVtYSIgeG1sbnM6c3ViPSJo
dHRwOi8vc2NoZW1hcy5taWNyb3NvZnQuY29tL3NoYXJlcG9pbnQvc29hcC8yMDAyLzEvYWxlcnRz
LyIgeG1sbnM6ZWM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDEvMDQveG1sZW5jIyIgeG1sbnM6c3A9
Imh0dHA6Ly9zY2hlbWFzLm1pY3Jvc29mdC5jb20vc2hhcmVwb2ludC8iIHhtbG5zOnNwcz0iaHR0
cDovL3NjaGVtYXMubWljcm9zb2Z0LmNvbS9zaGFyZXBvaW50L3NvYXAvIiB4bWxuczp4c2k9Imh0
dHA6Ly93d3cudzMub3JnLzIwMDEvWE1MU2NoZW1hLWluc3RhbmNlIiB4bWxuczp1ZGNzPSJodHRw
Oi8vc2NoZW1hcy5taWNyb3NvZnQuY29tL2RhdGEvdWRjL3NvYXAiIHhtbG5zOnVkY3hmPSJodHRw
Oi8vc2NoZW1hcy5taWNyb3NvZnQuY29tL2RhdGEvdWRjL3htbGZpbGUiIHhtbG5zOnVkY3AycD0i
aHR0cDovL3NjaGVtYXMubWljcm9zb2Z0LmNvbS9kYXRhL3VkYy9wYXJ0dG9wYXJ0IiB4bWxuczp3
Zj0iaHR0cDovL3NjaGVtYXMubWljcm9zb2Z0LmNvbS9zaGFyZXBvaW50L3NvYXAvd29ya2Zsb3cv
IiB4bWxuczpkc3NzPSJodHRwOi8vc2NoZW1hcy5taWNyb3NvZnQuY29tL29mZmljZS8yMDA2L2Rp
Z3NpZy1zZXR1cCIgeG1sbnM6ZHNzaT0iaHR0cDovL3NjaGVtYXMubWljcm9zb2Z0LmNvbS9vZmZp
Y2UvMjAwNi9kaWdzaWciIHhtbG5zOm1kc3NpPSJodHRwOi8vc2NoZW1hcy5vcGVueG1sZm9ybWF0
cy5vcmcvcGFja2FnZS8yMDA2L2RpZ2l0YWwtc2lnbmF0dXJlIiB4bWxuczptdmVyPSJodHRwOi8v
c2NoZW1hcy5vcGVueG1sZm9ybWF0cy5vcmcvbWFya3VwLWNvbXBhdGliaWxpdHkvMjAwNiIgeG1s
bnM6bT0iaHR0cDovL3NjaGVtYXMubWljcm9zb2Z0LmNvbS9vZmZpY2UvMjAwNC8xMi9vbW1sIiB4
bWxuczptcmVscz0iaHR0cDovL3NjaGVtYXMub3BlbnhtbGZvcm1hdHMub3JnL3BhY2thZ2UvMjAw
Ni9yZWxhdGlvbnNoaXBzIiB4bWxuczpzcHdwPSJodHRwOi8vbWljcm9zb2Z0LmNvbS9zaGFyZXBv
aW50L3dlYnBhcnRwYWdlcyIgeG1sbnM6ZXgxMnQ9Imh0dHA6Ly9zY2hlbWFzLm1pY3Jvc29mdC5j
b20vZXhjaGFuZ2Uvc2VydmljZXMvMjAwNi90eXBlcyIgeG1sbnM6ZXgxMm09Imh0dHA6Ly9zY2hl
bWFzLm1pY3Jvc29mdC5jb20vZXhjaGFuZ2Uvc2VydmljZXMvMjAwNi9tZXNzYWdlcyIgeG1sbnM6
cHB0c2w9Imh0dHA6Ly9zY2hlbWFzLm1pY3Jvc29mdC5jb20vc2hhcmVwb2ludC9zb2FwL1NsaWRl
TGlicmFyeS8iIHhtbG5zOnNwc2w9Imh0dHA6Ly9taWNyb3NvZnQuY29tL3dlYnNlcnZpY2VzL1No
YXJlUG9pbnRQb3J0YWxTZXJ2ZXIvUHVibGlzaGVkTGlua3NTZXJ2aWNlIiB4bWxuczpaPSJ1cm46
c2NoZW1hcy1taWNyb3NvZnQtY29tOiIgeG1sbnM6c3Q9IiYjMTsiIHhtbG5zPSJodHRwOi8vd3d3
LnczLm9yZy9UUi9SRUMtaHRtbDQwIj4NCjxoZWFkPg0KPG1ldGEgaHR0cC1lcXVpdj0iQ29udGVu
dC1UeXBlIiBjb250ZW50PSJ0ZXh0L2h0bWw7IGNoYXJzZXQ9dXRmLTgiPg0KPG1ldGEgbmFtZT0i
R2VuZXJhdG9yIiBjb250ZW50PSJNaWNyb3NvZnQgV29yZCAxMiAoZmlsdGVyZWQgbWVkaXVtKSI+
DQo8c3R5bGU+PCEtLQ0KLyogRm9udCBEZWZpbml0aW9ucyAqLw0KQGZvbnQtZmFjZQ0KCXtmb250
LWZhbWlseTrlrovkvZM7DQoJcGFub3NlLTE6MiAxIDYgMCAzIDEgMSAxIDEgMTt9DQpAZm9udC1m
YWNlDQoJe2ZvbnQtZmFtaWx5OiJDYW1icmlhIE1hdGgiOw0KCXBhbm9zZS0xOjIgNCA1IDMgNSA0
IDYgMyAyIDQ7fQ0KQGZvbnQtZmFjZQ0KCXtmb250LWZhbWlseTpDYWxpYnJpOw0KCXBhbm9zZS0x
OjIgMTUgNSAyIDIgMiA0IDMgMiA0O30NCkBmb250LWZhY2UNCgl7Zm9udC1mYW1pbHk6IlxA5a6L
5L2TIjsNCglwYW5vc2UtMToyIDEgNiAwIDMgMSAxIDEgMSAxO30NCi8qIFN0eWxlIERlZmluaXRp
b25zICovDQpwLk1zb05vcm1hbCwgbGkuTXNvTm9ybWFsLCBkaXYuTXNvTm9ybWFsDQoJe21hcmdp
bjowaW47DQoJbWFyZ2luLWJvdHRvbTouMDAwMXB0Ow0KCWZvbnQtc2l6ZToxMS4wcHQ7DQoJZm9u
dC1mYW1pbHk6IkNhbGlicmkiLCJzYW5zLXNlcmlmIjt9DQphOmxpbmssIHNwYW4uTXNvSHlwZXJs
aW5rDQoJe21zby1zdHlsZS1wcmlvcml0eTo5OTsNCgljb2xvcjpibHVlOw0KCXRleHQtZGVjb3Jh
dGlvbjp1bmRlcmxpbmU7fQ0KYTp2aXNpdGVkLCBzcGFuLk1zb0h5cGVybGlua0ZvbGxvd2VkDQoJ
e21zby1zdHlsZS1wcmlvcml0eTo5OTsNCgljb2xvcjpwdXJwbGU7DQoJdGV4dC1kZWNvcmF0aW9u
OnVuZGVybGluZTt9DQpzcGFuLkVtYWlsU3R5bGUxNw0KCXttc28tc3R5bGUtdHlwZTpwZXJzb25h
bDsNCglmb250LWZhbWlseToiQ2FsaWJyaSIsInNhbnMtc2VyaWYiOw0KCWNvbG9yOndpbmRvd3Rl
eHQ7fQ0Kc3Bhbi5FbWFpbFN0eWxlMTgNCgl7bXNvLXN0eWxlLXR5cGU6cGVyc29uYWwtcmVwbHk7
DQoJZm9udC1mYW1pbHk6IkNhbGlicmkiLCJzYW5zLXNlcmlmIjsNCgljb2xvcjojMUY0OTdEO30N
Ci5Nc29DaHBEZWZhdWx0DQoJe21zby1zdHlsZS10eXBlOmV4cG9ydC1vbmx5Ow0KCWZvbnQtc2l6
ZToxMC4wcHQ7fQ0KQHBhZ2UgV29yZFNlY3Rpb24xDQoJe3NpemU6OC41aW4gMTEuMGluOw0KCW1h
cmdpbjoxLjBpbiAxLjI1aW4gMS4waW4gMS4yNWluO30NCmRpdi5Xb3JkU2VjdGlvbjENCgl7cGFn
ZTpXb3JkU2VjdGlvbjE7fQ0KLS0+PC9zdHlsZT48IS0tW2lmIGd0ZSBtc28gOV0+PHhtbD4NCjxv
OnNoYXBlZGVmYXVsdHMgdjpleHQ9ImVkaXQiIHNwaWRtYXg9IjEwMjYiIC8+DQo8L3htbD48IVtl
bmRpZl0tLT48IS0tW2lmIGd0ZSBtc28gOV0+PHhtbD4NCjxvOnNoYXBlbGF5b3V0IHY6ZXh0PSJl
ZGl0Ij4NCjxvOmlkbWFwIHY6ZXh0PSJlZGl0IiBkYXRhPSIxIiAvPg0KPC9vOnNoYXBlbGF5b3V0
PjwveG1sPjwhW2VuZGlmXS0tPg0KPC9oZWFkPg0KPGJvZHkgYmdjb2xvcj0id2hpdGUiIGxhbmc9
IkVOLVVTIiBsaW5rPSJibHVlIiB2bGluaz0icHVycGxlIj4NCjxkaXYgY2xhc3M9IldvcmRTZWN0
aW9uMSI+DQo8cCBjbGFzcz0iTXNvTm9ybWFsIj48c3BhbiBzdHlsZT0iY29sb3I6IzFGNDk3RCI+
UGF1bCwgPG86cD48L286cD48L3NwYW4+PC9wPg0KPHAgY2xhc3M9Ik1zb05vcm1hbCI+PHNwYW4g
c3R5bGU9ImNvbG9yOiMxRjQ5N0QiPjxvOnA+Jm5ic3A7PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNs
YXNzPSJNc29Ob3JtYWwiPjxzcGFuIHN0eWxlPSJjb2xvcjojMUY0OTdEIj5TZWUgbXkgY29tbWVu
dHMgYmVsb3c6PG86cD48L286cD48L3NwYW4+PC9wPg0KPHAgY2xhc3M9Ik1zb05vcm1hbCI+PHNw
YW4gc3R5bGU9ImNvbG9yOiMxRjQ5N0QiPjxvOnA+Jm5ic3A7PC9vOnA+PC9zcGFuPjwvcD4NCjxk
aXYgc3R5bGU9ImJvcmRlcjpub25lO2JvcmRlci1sZWZ0OnNvbGlkIGJsdWUgMS41cHQ7cGFkZGlu
ZzowaW4gMGluIDBpbiA0LjBwdCI+DQo8cCBjbGFzcz0iTXNvTm9ybWFsIj48c3BhbiBzdHlsZT0i
Zm9udC1zaXplOjEyLjBwdDtmb250LWZhbWlseTomcXVvdDtUaW1lcyBOZXcgUm9tYW4mcXVvdDss
JnF1b3Q7c2VyaWYmcXVvdDsiPjxvOnA+Jm5ic3A7PC9vOnA+PC9zcGFuPjwvcD4NCjxkaXY+DQo8
cCBjbGFzcz0iTXNvTm9ybWFsIiBzdHlsZT0ibWFyZ2luLWxlZnQ6LjVpbiI+Jm5ic3A7PG86cD48
L286cD48L3A+DQo8L2Rpdj4NCjxibG9ja3F1b3RlIHN0eWxlPSJtYXJnaW4tdG9wOjUuMHB0O21h
cmdpbi1ib3R0b206NS4wcHQiPg0KPGRpdj4NCjxwIGNsYXNzPSJNc29Ob3JtYWwiIHN0eWxlPSJt
YXJnaW4tbGVmdDouNWluIj5UaGUgbGltaXRhdGlvbiBmb3IgUEJCIGFuZCBWTEFOIHNob3VsZCBp
bmNsdWRlIHRoYXQgTUFDIGFkZHJlc3NlcyBjYW7igJl0IGJlIGFnZ3JlZ2F0ZWQsIHRoZXJlZm9y
ZSBmb3J3YXJkaW5nIHRhYmxlIGNhbiBiZSB2ZXJ5IGxhcmdlIGZvciBsYXJnZSBkYXRhIGNlbnRl
cnMuDQo8bzpwPjwvbzpwPjwvcD4NCjwvZGl2Pg0KPC9ibG9ja3F1b3RlPg0KPGRpdj4NCjxwIGNs
YXNzPSJNc29Ob3JtYWwiPjxzcGFuIHN0eWxlPSJmb250LXNpemU6MTIuMHB0O2ZvbnQtZmFtaWx5
OiZxdW90O1RpbWVzIE5ldyBSb21hbiZxdW90OywmcXVvdDtzZXJpZiZxdW90OyI+PG86cD4mbmJz
cDs8L286cD48L3NwYW4+PC9wPg0KPC9kaXY+DQo8ZGl2Pg0KPHAgY2xhc3M9Ik1zb05vcm1hbCI+
PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZToxMi4wcHQ7Zm9udC1mYW1pbHk6JnF1b3Q7VGltZXMgTmV3
IFJvbWFuJnF1b3Q7LCZxdW90O3NlcmlmJnF1b3Q7Ij5EaXNhZ3JlZSwgdGhpcyBpc24ndCB3aGF0
IHdlIHNlZSBpbiBsaXZlIGRlcGxveW1lbnRzLiBTaW5jZSB0aGUgTWFjIGxlYXJuaW5nIG9mIHRo
ZSBob3N0IHN0YXRpb25zIG9ubHkgaGFwcGVucyBhdCB0aGUgZWRnZSBzd2l0Y2hlcyBhbmQgdGhl
IHNlcnZpY2VzIGFyZSBkaXN0cmlidXRlZCBhY3Jvc3MNCiBtYW55IGVkZ2Ugc3dpdGNoZXMgdGhl
IElTSUQgc2VydmljZSBNYWMgdGFibGUgc2l6ZXMgZG9uJ3QgZW5kIHVwIGJlaW5nIHRoYXQgbGFy
Z2Ugb24gYW55IGdpdmVuIG5vZGUuIEFkZGl0aW9uYWxseSB0aGUgY29yZSBuZXZlciBzZWVzIGFu
eSBtYWNzIGJ1dCB0aGUgbm9kZXMgcGFydGljaXBhdGluZyBpbiB0aGUgYmFja2JvbmUgZnJvbSBl
ZGdlIHN3aXRjaCB0byBlZGdlIHN3aXRjaC4mbmJzcDs8bzpwPjwvbzpwPjwvc3Bhbj48L3A+DQo8
L2Rpdj4NCjxwIGNsYXNzPSJNc29Ob3JtYWwiPjxzcGFuIHN0eWxlPSJmb250LXNpemU6MTIuMHB0
O2ZvbnQtZmFtaWx5OiZxdW90O1RpbWVzIE5ldyBSb21hbiZxdW90OywmcXVvdDtzZXJpZiZxdW90
OyI+PGJyPg0KPHNwYW4gc3R5bGU9ImNvbG9yOiMxRjQ5N0QiPltMaW5kYV0gSWYgUEJC4oCZcyBN
QUMtaW4tTUFDIE92ZXJsYXkgZW5jYXBzdWxhdGlvbiBpcyBkb25lIGF0IHRoZSBhY2Nlc3Mgc3dp
dGNoZXMgaW4gZGF0YSBjZW50ZXIsIHRoZW4gdGhlIGNvcmUgc3dpdGNoZXMgZm9yd2FyZGluZyB0
YWJsZSB3b27igJl0IGV4cGxvZGUuIEhvd2V2ZXIsIGlmIHRoZSBNYWMtaW4tTWFjIG92ZXJsYXkg
ZW5jYXBzdWxhdGlvbiBpcyBwZXJmb3JtZWQgYnkgc2VydmVyc+KAmSBoeXBlcnZpc29yLA0KIHRo
ZSBzd2l0Y2hlc+KAmSBmb3J3YXJkaW5nIHRhYmxlIGNvdWxkIGV4cGxvZGUgaWYgdGhlcmUgYXJl
IGh1bmRyZWRzIG9mIHRob3VzYW5kcyBvZiBzZXJ2ZXJzIGluIGRhdGEgY2VudGVyLCB3aGljaCBp
cyBnZXR0aW5nIG1vcmUgYW5kIG1vcmUgY29tbW9uIGluIGxhcmdlIGRhdGEgY2VudGVycy4NCjwv
c3Bhbj48bzpwPjwvbzpwPjwvc3Bhbj48L3A+DQo8cCBjbGFzcz0iTXNvTm9ybWFsIj48c3BhbiBz
dHlsZT0iY29sb3I6IzFGNDk3RCI+PG86cD4mbmJzcDs8L286cD48L3NwYW4+PC9wPg0KPHAgY2xh
c3M9Ik1zb05vcm1hbCI+PHNwYW4gc3R5bGU9ImNvbG9yOiMxRjQ5N0QiPkxpbmRhPG86cD48L286
cD48L3NwYW4+PC9wPg0KPC9kaXY+DQo8L2Rpdj4NCjwvYm9keT4NCjwvaHRtbD4NCg==

--_000_4A95BA014132FF49AE685FAB4B9F17F632E181E2dfweml505mbx_--

From david.i.allan@ericsson.com  Fri Jan 27 08:51:08 2012
Return-Path: <david.i.allan@ericsson.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 9453621F8637 for <dc@ietfa.amsl.com>; Fri, 27 Jan 2012 08:51:08 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -6.598
X-Spam-Level: 
X-Spam-Status: No, score=-6.598 tagged_above=-999 required=5 tests=[AWL=-0.000, BAYES_00=-2.599, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id UfPZyzVddcnK for <dc@ietfa.amsl.com>; Fri, 27 Jan 2012 08:51:07 -0800 (PST)
Received: from imr4.ericy.com (imr4.ericy.com [198.24.6.9]) by ietfa.amsl.com (Postfix) with ESMTP id 2DB4C21F8635 for <dc@ietf.org>; Fri, 27 Jan 2012 08:51:07 -0800 (PST)
Received: from eusaamw0707.eamcs.ericsson.se ([147.117.20.32]) by imr4.ericy.com (8.14.3/8.14.3/Debian-9.1ubuntu1) with ESMTP id q0RGouov032353; Fri, 27 Jan 2012 10:50:59 -0600
Received: from EUSAACMS0703.eamcs.ericsson.se ([169.254.1.142]) by eusaamw0707.eamcs.ericsson.se ([147.117.20.32]) with mapi; Fri, 27 Jan 2012 11:50:58 -0500
From: David Allan I <david.i.allan@ericsson.com>
To: Linda Dunbar <linda.dunbar@huawei.com>, Paul Unbehagen <paul@unbehagen.net>
Date: Fri, 27 Jan 2012 11:50:57 -0500
Thread-Topic: [dc] comments and suggestions to draft-narten-nv03-overlay-problem-statment-01
Thread-Index: Aczcic4dhzNCqBUxQSqOznX8P2avFQARvMcAABCCe8AAADywEA==
Message-ID: <60C093A41B5E45409A19D42CF7786DFD522A6C5E3B@EUSAACMS0703.eamcs.ericsson.se>
References: <4A95BA014132FF49AE685FAB4B9F17F632E17F2D@dfweml505-mbx> <C5CE8493-6543-4EB0-BCEB-99EEBA3FD59E@unbehagen.net> <4A95BA014132FF49AE685FAB4B9F17F632E181E2@dfweml505-mbx>
In-Reply-To: <4A95BA014132FF49AE685FAB4B9F17F632E181E2@dfweml505-mbx>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: multipart/alternative; boundary="_000_60C093A41B5E45409A19D42CF7786DFD522A6C5E3BEUSAACMS0703e_"
MIME-Version: 1.0
Cc: Thomas Narten <narten@us.ibm.com>, "dc@ietf.org" <dc@ietf.org>, "david.black@emc.com" <david.black@emc.com>, Dinesh Dutt <ddutt@cisco.com>, Murari Sridharan <muraris@microsoft.com>, "kreeger@cisco.com" <kreeger@cisco.com>
Subject: Re: [dc] comments and suggestions to draft-narten-nv03-overlay-problem-statment-01
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 27 Jan 2012 16:51:08 -0000

--_000_60C093A41B5E45409A19D42CF7786DFD522A6C5E3BEUSAACMS0703e_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

So what you are saying is that if you architect your network wrong, you
have a problem.

That issue is not confined to PBB.

Dave

________________________________
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Linda D=
unbar
Sent: Friday, January 27, 2012 8:47 AM
To: Paul Unbehagen
Cc: Thomas Narten; dc@ietf.org; david.black@emc.com; Dinesh Dutt; Murari Sr=
idharan; kreeger@cisco.com
Subject: Re: [dc] comments and suggestions to draft-narten-nv03-overlay-pro=
blem-statment-01

Paul,

See my comments below:



The limitation for PBB and VLAN should include that MAC addresses can't
be aggregated; therefore, the forwarding table can be very large in
large data centers.

Disagree; this isn't what we see in live deployments. Since MAC learning
of the host stations only happens at the edge switches, and the services
are distributed across many edge switches, the I-SID service MAC table
sizes don't end up being that large on any given node. Additionally, the
core never sees any MACs other than those of the nodes participating in
the backbone, from edge switch to edge switch.

[Linda] If PBB's MAC-in-MAC overlay encapsulation is done at the access
switches in the data center, then the core switches' forwarding tables
won't explode. However, if the MAC-in-MAC overlay encapsulation is
performed by the servers' hypervisors, the switches' forwarding tables
could explode if there are hundreds of thousands of servers in the data
center, which is getting more and more common.

Linda
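[Editor's note] The table-size argument in this exchange can be put in
back-of-envelope form. The sketch below is an editorial illustration, not
part of the original message; the servers-per-edge-switch figure is an
assumption chosen only to show the scaling difference.

```python
def core_table_entries(num_servers, servers_per_edge, encap_at_edge):
    """Rough count of MAC entries a core switch must hold.

    encap_at_edge=True  -> MAC-in-MAC added at access switches: the core
                           learns only backbone MACs (one per edge switch).
    encap_at_edge=False -> encapsulation done in the hypervisor: every
                           server MAC is visible to the core.
    """
    edge_switches = max(1, num_servers // servers_per_edge)
    return edge_switches if encap_at_edge else num_servers

# 200,000 servers, an assumed 40 servers behind each access switch:
print(core_table_entries(200_000, 40, encap_at_edge=True))   # 5000
print(core_table_entries(200_000, 40, encap_at_edge=False))  # 200000
```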

--_000_60C093A41B5E45409A19D42CF7786DFD522A6C5E3BEUSAACMS0703e_--

From linda.dunbar@huawei.com  Fri Jan 27 08:53:03 2012
Return-Path: <linda.dunbar@huawei.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id AEFBD21F85E1; Fri, 27 Jan 2012 08:53:03 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.219
X-Spam-Level: 
X-Spam-Status: No, score=-2.219 tagged_above=-999 required=5 tests=[AWL=-0.221, BAYES_00=-2.599, HTML_MESSAGE=0.001, J_CHICKENPOX_52=0.6]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id UzXWGN8TyGyH; Fri, 27 Jan 2012 08:53:02 -0800 (PST)
Received: from dfwrgout.huawei.com (dfwrgout.huawei.com [206.16.17.72]) by ietfa.amsl.com (Postfix) with ESMTP id 6BF4B21F8613; Fri, 27 Jan 2012 08:53:02 -0800 (PST)
Received: from 172.18.9.243 (EHLO dfweml202-edg.china.huawei.com) ([172.18.9.243]) by dfwrg01-dlp.huawei.com (MOS 4.2.3-GA FastPath) with ESMTP id ACY91445; Fri, 27 Jan 2012 11:53:02 -0500 (EST)
Received: from DFWEML403-HUB.china.huawei.com (10.193.5.151) by dfweml202-edg.china.huawei.com (172.18.9.108) with Microsoft SMTP Server (TLS) id 14.1.323.3; Fri, 27 Jan 2012 08:50:57 -0800
Received: from DFWEML505-MBX.china.huawei.com ([10.124.31.100]) by dfweml403-hub.china.huawei.com ([10.193.5.151]) with mapi id 14.01.0323.003; Fri, 27 Jan 2012 08:50:51 -0800
From: Linda Dunbar <linda.dunbar@huawei.com>
To: Rakesh Saha <rsaha@us.ibm.com>
Thread-Topic: [dc] comments and suggestions to draft-narten-nv03-overlay-problem-statment-01
Thread-Index: Aczcic4dhzNCqBUxQSqOznX8P2avFQAgIy2AAAJDuRA=
Date: Fri, 27 Jan 2012 16:50:50 +0000
Message-ID: <4A95BA014132FF49AE685FAB4B9F17F632E181F3@dfweml505-mbx>
References: <4A95BA014132FF49AE685FAB4B9F17F632E17F2D@dfweml505-mbx> <OFF6866E01.3C35AF59-ON88257992.0029C4E2-88257992.002A64BA@us.ibm.com>
In-Reply-To: <OFF6866E01.3C35AF59-ON88257992.0029C4E2-88257992.002A64BA@us.ibm.com>
Accept-Language: en-US, zh-CN
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.192.11.97]
Content-Type: multipart/alternative; boundary="_000_4A95BA014132FF49AE685FAB4B9F17F632E181F3dfweml505mbx_"
MIME-Version: 1.0
X-CFilter-Loop: Reflected
Cc: "dc@ietf.org" <dc@ietf.org>, "david.black@emc.com" <david.black@emc.com>, Dinesh Dutt <ddutt@cisco.com>, "dc-bounces@ietf.org" <dc-bounces@ietf.org>, Murari Sridharan <muraris@microsoft.com>, "kreeger@cisco.com" <kreeger@cisco.com>, "narten@rotala.raleigh.ibm.com" <narten@rotala.raleigh.ibm.com>
Subject: Re: [dc] comments and suggestions to draft-narten-nv03-overlay-problem-statment-01
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 27 Jan 2012 16:53:04 -0000

--_000_4A95BA014132FF49AE685FAB4B9F17F632E181F3dfweml505mbx_
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64

UmFrZXNoLA0KDQpJIGFncmVlIHdpdGggeW91IHRoYXQgc29tZSB0cmFkaXRpb25hbCBzZXJ2ZXJz
IGNhbiBhdHRhY2ggVkxBTi1JRCBmb3IgZGF0YSBmcmFtZXMgY29taW5nIG91dC4NCkJ1dCBtYW55
IHNlcnZlcnMgZGVwbG95ZWQgdG9kYXkgZG9u4oCZdCBhdHRhY2ggVklELg0KDQoNCkxpbmRhDQoN
CkZyb206IFJha2VzaCBTYWhhIFttYWlsdG86cnNhaGFAdXMuaWJtLmNvbV0NClNlbnQ6IEZyaWRh
eSwgSmFudWFyeSAyNywgMjAxMiAxOjQzIEFNDQpUbzogTGluZGEgRHVuYmFyDQpDYzogZGF2aWQu
YmxhY2tAZW1jLmNvbTsgZGNAaWV0Zi5vcmc7IGRjLWJvdW5jZXNAaWV0Zi5vcmc7IERpbmVzaCBE
dXR0OyBrcmVlZ2VyQGNpc2NvLmNvbTsgTXVyYXJpIFNyaWRoYXJhbjsgbmFydGVuQHJvdGFsYS5y
YWxlaWdoLmlibS5jb20NClN1YmplY3Q6IFJlOiBbZGNdIGNvbW1lbnRzIGFuZCBzdWdnZXN0aW9u
cyB0byBkcmFmdC1uYXJ0ZW4tbnYwMy1vdmVybGF5LXByb2JsZW0tc3RhdG1lbnQtMDENCg0KSGkg
TGluZGEsDQoNCj5Zb3Ugc3RhdGVkIHRoYXQg4oCcdHlwaWNhbGx5LCBpdCBpcyBhIHZpcnR1YWwg
TklDIGNvbWluZyB1cCB0aGF0IHRyaWdnZXJzIHRoaXMgYXNzb2NpYXRpb27igJ0uIEl0IGlzIG5v
dCBxdWl0ZSByaWdodC4gVmlydHVhbCBOSUMsIHNhbWUgYXMgcGh5c2ljYWwgTklDIGZvciBhIG5v
bi12aXJ0dWFsaXplZCBzZXJ2ZXIsIHVzdWFsbHkgaXMgbm90IGF3YXJlIG9mIHdoaWNoID5uZXR3
b3JrIHNlZ21lbnQgaXQgaXMgYXR0YWNoZWQgdG8uIEZvciBFdGhlcm5ldCBkYXRhIGZyYW1lcyBj
b21pbmcgb3V0IGZyb20gYSB0cmFkaXRpb25hbCBzZXJ2ZXIgKGkuZS4gc2VydmVyIHdpdGhvdXQg
dmlydHVhbGl6YXRpb24pLCB0aGUgZGF0YSBmcmFtZSBpcyBub3QgdGFnZ2VkLCBpLmUuIG5vdCBo
YXZpbmcgVklEIGFzc29jaWF0ZWQgd2l0aCBpdC4gSXQgaXMgPnRoZSBmaXJzdCBzd2l0Y2ggd2hp
Y2ggYXNzaWduIHRoZSBWSUQgZm9yIHRoZSBkYXRhIGZyYW1lLg0KDQpSZWdhcmRpbmcgIkZvciBF
dGhlcm5ldCBkYXRhIGZyYW1lcyBjb21pbmcgb3V0IGZyb20gYSB0cmFkaXRpb25hbCBzZXJ2ZXIg
KGkuZS4gc2VydmVyIHdpdGhvdXQgdmlydHVhbGl6YXRpb24pLCB0aGUgZGF0YSBmcmFtZSBpcyBu
b3QgdGFnZ2VkIiwuLi4iIEl0IGlzID50aGUgZmlyc3Qgc3dpdGNoIHdoaWNoIGFzc2lnbiB0aGUg
VklEIGZvciB0aGUgZGF0YSBmcmFtZSIgLi4uDQoNCm5vbi12aXJ0dWFsaXplZCBzZXJ2ZXJzIGFy
ZSBjYXBhYmxlIG9mIHNlbmRpbmcgVkxBTiB0YWdnZWQgcGFja2V0cy4gSXQgaXMgYSBzdGFuZGFy
ZCBmZWF0dXJlIG9mIHRvZGF5J2FzIHRyYWRpdGlvbmFsIE9wZXJhdGluZyBTeXN0ZW1zIChOSUMg
ZHJpdmVycykgIHRvIGFsbG93IFZMQU5zIHRvIGJlIGFzc29jaWF0ZWQgd2l0aCBhbiBldGhlcm5l
dCBpbnRlcmZhY2UuDQpTbyBhIHRyYWRpdGlvbmFsIG5vbi12aXJ0dWFsaXplZCBzZXJ2ZXIgbWF5
IHNlbmQgdGFnZ2VkIGFuZCB1bnRhZ2dlZCBmcmFtZXMuDQoNClRoYW5rcywNClJha2VzaC4NCi09
LQ0KDQoNCg0KDQoNCg0KRnJvbTogICAgICAgIExpbmRhIER1bmJhciA8bGluZGEuZHVuYmFyQGh1
YXdlaS5jb20+DQpUbzogICAgICAgIG5hcnRlbkByb3RhbGEucmFsZWlnaC5pYm0uY29tLCAiZGF2
aWQuYmxhY2tAZW1jLmNvbSIgPGRhdmlkLmJsYWNrQGVtYy5jb20+LCBNdXJhcmkgU3JpZGhhcmFu
IDxtdXJhcmlzQG1pY3Jvc29mdC5jb20+LCBEaW5lc2ggRHV0dCA8ZGR1dHRAY2lzY28uY29tPiwg
ImtyZWVnZXJAY2lzY28uY29tIiA8a3JlZWdlckBjaXNjby5jb20+DQpDYzogICAgICAgICJkY0Bp
ZXRmLm9yZyIgPGRjQGlldGYub3JnPg0KRGF0ZTogICAgICAgIDAxLzI2LzIwMTIgMDQ6MjUgUE0N
ClN1YmplY3Q6ICAgICAgICBbZGNdIGNvbW1lbnRzIGFuZCBzdWdnZXN0aW9ucyB0byBkcmFmdC1u
YXJ0ZW4tbnYwMy1vdmVybGF5LXByb2JsZW0tc3RhdG1lbnQtMDENClNlbnQgYnk6ICAgICAgICBk
Yy1ib3VuY2VzQGlldGYub3JnDQpfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXw0KDQoN
Cg0KVGhvbWFzLCBldCBhbCwNCg0KSGVyZSBhcmUgbXkgY29tbWVudHMgdG8gZHJhZnQtbmFydGVu
LW52MDMtb3ZlcmxheS1wcm9ibGVtLXN0YXRtZW50LTAxIGFuZCBzdWdnZXN0ZWQgd29yZGluZyBj
aGFuZ2U6DQoNCjMuMS4gTGltaXRhdGlvbnMgb2YgRXhpc3RpbmcgVmlydHVhbCBOZXR3b3JrIE1v
ZGVscy4NCg0KU2luY2UgSUVFRTgwMi4xIGRlZmluZWQgVkxBTiBzZXBhcmF0aW9uIGlzIG1lbnRp
b25lZCwgaXQgd291bGQgYmUgYXBwcm9wcmlhdGUgdG8gbWVudGlvbiBQQkLigJlzIElTSUQgc2Vw
YXJhdGlvbi4NCg0KVGhlIGxpbWl0YXRpb24gZm9yIFBCQiBhbmQgVkxBTiBzaG91bGQgaW5jbHVk
ZSB0aGF0IE1BQyBhZGRyZXNzZXMgY2Fu4oCZdCBiZSBhZ2dyZWdhdGVkLCB0aGVyZWZvcmUgZm9y
d2FyZGluZyB0YWJsZSBjYW4gYmUgdmVyeSBsYXJnZSBmb3IgbGFyZ2UgZGF0YSBjZW50ZXJzLg0K
DQpTZWNvbmQgcGFyYWdyYXBoOiBXaHkg4oCcVkxBTnMgYXJlIGEgcHVyZSBicmlkZ2luZyBjb25z
dHJ1Y3Qgd2hpbGUgVlJGIGlzIHB1cmUgcm91dGluZyBjb25zdHJ1Y3TigJ0gaXMgYSBwcm9ibGVt
Pw0KDQo0LiBOZXR3b3JrIE92ZXJsYXlzDQoNClNob3VsZCBhZGQgYSBzdWJzZWN0aW9uIHRvIGRl
c2NyaWJlIFZpcnR1YWwgTmV0d29yayBJbnN0YW5jZSBJRC4NCg0KNC54LiBWaXJ0dWFsIE5ldHdv
cmsgSW5zdGFuY2UgSUQNCg0KVmlydHVhbCBOZXR3b3JrIEluc3RhbmNlIGlzIGZvciBzZWdyZWdh
dGluZyB0cmFmZmljIGJlbG9uZ2luZyB0byBkaWZmZXJlbnQgdGVuYW50cyBvciBkaWZmZXJlbnQg
em9uZXMgb2Ygb25lIHRlbmFudC4gV2hlbiBhIGRhdGEgY2VudGVyIHVzZXMgT3ZlcmxheSBOZXR3
b3JrIHRvIGhpZGUgaG9zdHMgYWRkcmVzc2VzLCBpdCBpcyBpbXBvcnRhbnQgdGhhdCBWaXJ0dWFs
IE5ldHdvcmsgSW5zdGFuY2UgaWRlbnRpZmllciBjYW4gcHJvcGVybHkgcmVwcmVzZW50IHRoZSB6
b25lcyBvciB0aGUgdGVuYW50cyBpbiBlbnRpcmUgZGF0YSBjZW50ZXIsIGVzcGVjaWFsbHkgd2hl
biB0aGUgb3ZlcmxheSBlZGdlIG5vZGVzIGFyZSBhY2Nlc3Mgc3dpdGNoZXMsIGkuZS4gbm90IGVt
YmVkZGVkIGluIGh5cGVydmlzb3JzLg0KDQpXaGVuIG92ZXJsYXkgZWRnZSBub2RlcyBhcmUgYWNj
ZXNzIHN3aXRjaGVzLCB0aGUgZGF0YSBmcmFtZXMgYmVmb3JlIGVudGVyaW5nIHRoZSBPdmVybGF5
IE5ldHdvcmsgb3IgYWZ0ZXIgZXhpdGluZyB0aGUgT3ZlcmxheSBOZXR3b3JrIG1pZ2h0IGNhcnJ5
IHRyYWRpdGlvbmFsIFZMQU4tSUQgZm9yIHByb3BlciB0cmFmZmljIHNlZ3JlZ2F0aW9uLiBUaGUg
VmlydHVhbCBOZXR3b3JrIEluc3RhbmNlIElEIHZhbHVlIGNhcnJpZWQgYnkgdGhlIE92ZXJsYXkg
SGVhZGVyICBvZiB0aGUgZGF0YSBmcmFtZXMgbWlnaHQgYmUgMjQgYml0cyAoYXMgZGVzY3JpYmVk
IGluIDMuMikuIFRob3NlIFZMQU4tSUQgZm9yIGRhdGEgZnJhbWVzIHVuZGVyIGVhY2ggb3Zlcmxh
eSBlZGdlIG5vZGUgYXJlIG9ubHkgbG9jYWxseSBzaWduaWZpY2FudC4gUHJvcGVyIG1hcHBpbmcg
aGFzIHRvIGJlIG1haW50YWluZWQuDQoNCjYuMiAoVFJJTEwpICY2LjMgKEwyVlBOKQ0KDQpJdCBp
cyBuZWNlc3NhcnkgdG8gcG9pbnQgb3V0IHRoYXQgYm90aCBUUklMTCBhbmQgTDJWUE4gY2Fycnkg
dGhlIFZMQU4tSUQgZW1iZWRkZWQgaW4gdGhlIG9yaWdpbmFsIEV0aGVybmV0IGZyYW1lcyBhY3Jv
c3MgdGhlIE92ZXJsYXkgTmV0d29yayBhbmQgdGhlIFZMQU4tSUQgbWFpbnRhaW4gdGhlIHNhbWUg
bWVhbmluZyBpbiB0d28gc2VwYXJhdGUgTDIgaXNsYW5kcy4NCg0KV2hlbiBWTEFOLUlEIHRhZ2dl
ZCBFdGhlcm5ldCBmcmFtZXMgdHJhdmVyc2UgYWNyb3NzIHRoZSBPdmVybGF5IE5ldHdvcmsgZm9y
IERhdGEgQ2VudGVyLCB0aGUgVkxBTi1JRCBjYXJyaWVkIGJ5IEV0aGVybmV0IGZyYW1lcyBsb3Nl
IGl0cyBzaWduaWZpY2FuY2UuIEluIGFub3RoZXIgd29yZHMsIGEgZGlmZmVyZW50IFZMQU4tSUQg
bWlnaHQgYmUgcmUtYXNzaWduZWQgdG8gdGhlIEV0aGVybmV0IGZyYW1lcyBieSB0aGUgRWdyZXNz
IG92ZXJsYXkgZWRnZS4NCg0KDQo1LjMuIEFzc29jaWF0aW5nIGEgVk5JRCB3aXRoIGFuIGVuZHBv
aW50Lg0KDQpZb3Ugc3RhdGVkIHRoYXQg4oCcdHlwaWNhbGx5LCBpdCBpcyBhIHZpcnR1YWwgTklD
IGNvbWluZyB1cCB0aGF0IHRyaWdnZXJzIHRoaXMgYXNzb2NpYXRpb27igJ0uIEl0IGlzIG5vdCBx
dWl0ZSByaWdodC4gVmlydHVhbCBOSUMsIHNhbWUgYXMgcGh5c2ljYWwgTklDIGZvciBhIG5vbi12
aXJ0dWFsaXplZCBzZXJ2ZXIsIHVzdWFsbHkgaXMgbm90IGF3YXJlIG9mIHdoaWNoIG5ldHdvcmsg
c2VnbWVudCBpdCBpcyBhdHRhY2hlZCB0by4gRm9yIEV0aGVybmV0IGRhdGEgZnJhbWVzIGNvbWlu
ZyBvdXQgZnJvbSBhIHRyYWRpdGlvbmFsIHNlcnZlciAoaS5lLiBzZXJ2ZXIgd2l0aG91dCB2aXJ0
dWFsaXphdGlvbiksIHRoZSBkYXRhIGZyYW1lIGlzIG5vdCB0YWdnZWQsIGkuZS4gbm90IGhhdmlu
ZyBWSUQgYXNzb2NpYXRlZCB3aXRoIGl0LiBJdCBpcyB0aGUgZmlyc3Qgc3dpdGNoIHdoaWNoIGFz
c2lnbiB0aGUgVklEIGZvciB0aGUgZGF0YSBmcmFtZS4NCg0KSXQgaXMgbW9yZSBhY2N1cmF0ZSB0
byBzYXkgIOKAnHZpcnR1YWwgc3dpdGNo4oCdIG9yIOKAnEh5cGVydmlzb3LigJ0gd2l0aGluIHRo
ZSBzZXJ2ZXIgd2hpY2ggY2FuIHRyaWdnZXIgdGhlIGFzc29jaWF0aW9uLg0KDQpTaW5jZSBvdmVy
bGF5IGhlYWRlciBjYW4gYmUgYWxzbyBhZGRlZCBieSBmaXJzdCBhY2Nlc3Mgc3dpdGNoZXMgKGku
ZS4gc3dpdGNoZXMgbm90IGVtYmVkZGVkIGluIHBoeXNpY2FsIHNlcnZlcnMpLCBpdCBpcyB2ZXJ5
IHBvc3NpYmxlIHRoYXQgIGRhdGEgZnJhbWVzIGFycml2aW5nIGF0IHRoZSBvdmVybGF5IGVkZ2Ug
KGkuZS4gdGhlIGZpcnN0IGFjY2VzcyBzd2l0Y2gpIGlzIGFscmVhZHkgVkxBTiB0YWdnZWQuIFRo
ZW4gYXNzb2NpYXRpb24gKG9yIG1hcHBpbmcpIGZyb20gdGhlIFZMQU4tVGFnIHRvIFZOSUQgaGFz
IHRvIGJlIG9wZXJhdG9yIGFkbWluaXN0cmF0ZWQuDQoNCg0KTGluZGEgRHVuYmFyDQogX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18NCmRjIG1haWxpbmcgbGlz
dA0KZGNAaWV0Zi5vcmcNCmh0dHBzOi8vd3d3LmlldGYub3JnL21haWxtYW4vbGlzdGluZm8vZGMN
Cg==


--_000_4A95BA014132FF49AE685FAB4B9F17F632E181F3dfweml505mbx_--

From Peter.AshwoodSmith@huawei.com  Fri Jan 27 10:25:57 2012
Return-Path: <Peter.AshwoodSmith@huawei.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id EF8DA21F8673 for <dc@ietfa.amsl.com>; Fri, 27 Jan 2012 10:25:57 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.449
X-Spam-Level: 
X-Spam-Status: No, score=-2.449 tagged_above=-999 required=5 tests=[AWL=0.150,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id ld63KmL-zv4a for <dc@ietfa.amsl.com>; Fri, 27 Jan 2012 10:25:57 -0800 (PST)
Received: from dfwrgout.huawei.com (dfwrgout.huawei.com [206.16.17.72]) by ietfa.amsl.com (Postfix) with ESMTP id 7760C21F8670 for <dc@ietf.org>; Fri, 27 Jan 2012 10:25:57 -0800 (PST)
Received: from 172.18.9.243 (EHLO dfweml201-edg.china.huawei.com) ([172.18.9.243]) by dfwrg01-dlp.huawei.com (MOS 4.2.3-GA FastPath) with ESMTP id ACY97078; Fri, 27 Jan 2012 13:25:56 -0500 (EST)
Received: from DFWEML403-HUB.china.huawei.com (10.193.5.151) by dfweml201-edg.china.huawei.com (172.18.9.107) with Microsoft SMTP Server (TLS) id 14.1.323.3; Fri, 27 Jan 2012 10:23:53 -0800
Received: from DFWEML503-MBX.china.huawei.com ([10.124.31.29]) by dfweml403-hub.china.huawei.com ([10.193.5.151]) with mapi id 14.01.0323.003; Fri, 27 Jan 2012 10:23:52 -0800
From: AshwoodsmithPeter <Peter.AshwoodSmith@huawei.com>
To: Thomas Narten <narten@us.ibm.com>, Paul Unbehagen <paul@unbehagen.net>
Thread-Topic: [dc] comments and suggestions to draft-narten-nv03-overlay-problem-statment-01
Thread-Index: AQHM3I2/QKud3CAYeUScvvWj8JJYcZYgwBeA///G0dA=
Date: Fri, 27 Jan 2012 18:23:51 +0000
Message-ID: <7AE6A4247B044C4ABE0A5B6BF427F8E290D14D@dfweml503-mbx>
In-Reply-To: <201201271342.q0RDghlw025377@cichlid.raleigh.ibm.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.193.60.134]
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-CFilter-Loop: Reflected
Cc: "dc@ietf.org" <dc@ietf.org>, "david.black@emc.com" <david.black@emc.com>, Dinesh Dutt <ddutt@cisco.com>, Linda Dunbar <linda.dunbar@huawei.com>, Murari Sridharan <muraris@microsoft.com>, "kreeger@cisco.com" <kreeger@cisco.com>
Subject: Re: [dc] comments and suggestions to	draft-narten-nv03-overlay-problem-statment-01
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 27 Jan 2012 18:25:58 -0000

> I think we need to distinguish between PBB-V and PBB-M.
> Agreed?

Thomas, No such beasts as PBB-V and PBB-M ;) I think you are referring to SPBV and SPBM which have the properties you describe.

Peter


From narten@us.ibm.com  Fri Jan 27 10:29:33 2012
Return-Path: <narten@us.ibm.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 0F35A21F863C for <dc@ietfa.amsl.com>; Fri, 27 Jan 2012 10:29:33 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -109.599
X-Spam-Level: 
X-Spam-Status: No, score=-109.599 tagged_above=-999 required=5 tests=[AWL=1.000, BAYES_00=-2.599, RCVD_IN_DNSWL_HI=-8, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 0-9FE4cqvwQi for <dc@ietfa.amsl.com>; Fri, 27 Jan 2012 10:29:32 -0800 (PST)
Received: from e37.co.us.ibm.com (e37.co.us.ibm.com [32.97.110.158]) by ietfa.amsl.com (Postfix) with ESMTP id 7288421F862B for <dc@ietf.org>; Fri, 27 Jan 2012 10:29:32 -0800 (PST)
Received: from /spool/local by e37.co.us.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for <dc@ietf.org> from <narten@us.ibm.com>; Fri, 27 Jan 2012 11:29:30 -0700
Received: from d03dlp01.boulder.ibm.com (9.17.202.177) by e37.co.us.ibm.com (192.168.1.137) with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted;  Fri, 27 Jan 2012 11:29:14 -0700
Received: from d03relay03.boulder.ibm.com (d03relay03.boulder.ibm.com [9.17.195.228]) by d03dlp01.boulder.ibm.com (Postfix) with ESMTP id 34C001FF0049 for <dc@ietf.org>; Fri, 27 Jan 2012 11:29:14 -0700 (MST)
Received: from d03av03.boulder.ibm.com (d03av03.boulder.ibm.com [9.17.195.169]) by d03relay03.boulder.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id q0RITDT0097480 for <dc@ietf.org>; Fri, 27 Jan 2012 11:29:13 -0700
Received: from d03av03.boulder.ibm.com (loopback [127.0.0.1]) by d03av03.boulder.ibm.com (8.14.4/8.13.1/NCO v10.0 AVout) with ESMTP id q0RITCM9014282 for <dc@ietf.org>; Fri, 27 Jan 2012 11:29:12 -0700
Received: from cichlid.raleigh.ibm.com (sig-9-76-135-189.mts.ibm.com [9.76.135.189]) by d03av03.boulder.ibm.com (8.14.4/8.13.1/NCO v10.0 AVin) with ESMTP id q0RITBG5014131 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Fri, 27 Jan 2012 11:29:12 -0700
Received: from cichlid.raleigh.ibm.com (localhost [127.0.0.1]) by cichlid.raleigh.ibm.com (8.14.5/8.12.5) with ESMTP id q0RIT9N7027486; Fri, 27 Jan 2012 13:29:09 -0500
Message-Id: <201201271829.q0RIT9N7027486@cichlid.raleigh.ibm.com>
To: AshwoodsmithPeter <Peter.AshwoodSmith@huawei.com>
In-reply-to: <7AE6A4247B044C4ABE0A5B6BF427F8E290D14D@dfweml503-mbx>
References: <7AE6A4247B044C4ABE0A5B6BF427F8E290D14D@dfweml503-mbx>
Comments: In-reply-to AshwoodsmithPeter <Peter.AshwoodSmith@huawei.com> message dated "Fri, 27 Jan 2012 18:23:51 +0000."
Date: Fri, 27 Jan 2012 13:29:08 -0500
From: Thomas Narten <narten@us.ibm.com>
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 12012718-7408-0000-0000-000002367FFF
Cc: "dc@ietf.org" <dc@ietf.org>, "david.black@emc.com" <david.black@emc.com>, Dinesh Dutt <ddutt@cisco.com>, Linda Dunbar <linda.dunbar@huawei.com>, Paul Unbehagen <paul@unbehagen.net>, Murari Sridharan <muraris@microsoft.com>, "kreeger@cisco.com" <kreeger@cisco.com>
Subject: Re: [dc] comments and suggestions to draft-narten-nv03-overlay-problem-statment-01
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 27 Jan 2012 18:29:33 -0000

> Thomas, No such beasts as PBB-V and PBB-M ;) I think you are
>  referring to SPBV and SPBM which have the properties you describe.

Indeed! Too many acronyms... Sigh. :-(

Thomas


From paul@unbehagen.net  Fri Jan 27 11:18:34 2012
Return-Path: <paul@unbehagen.net>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 8FB8E21F859E for <dc@ietfa.amsl.com>; Fri, 27 Jan 2012 11:18:34 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.552
X-Spam-Level: 
X-Spam-Status: No, score=-2.552 tagged_above=-999 required=5 tests=[AWL=-0.349, BAYES_00=-2.599, MIME_QP_LONG_LINE=1.396, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id xLKUSYmrUdyV for <dc@ietfa.amsl.com>; Fri, 27 Jan 2012 11:18:34 -0800 (PST)
Received: from mail-gy0-f172.google.com (mail-gy0-f172.google.com [209.85.160.172]) by ietfa.amsl.com (Postfix) with ESMTP id 14E1921F8592 for <dc@ietf.org>; Fri, 27 Jan 2012 11:18:34 -0800 (PST)
Received: by ghbg16 with SMTP id g16so1062150ghb.31 for <dc@ietf.org>; Fri, 27 Jan 2012 11:18:33 -0800 (PST)
Received: by 10.236.165.1 with SMTP id d1mr12605185yhl.54.1327691913649; Fri, 27 Jan 2012 11:18:33 -0800 (PST)
Received: from [10.0.1.11] (c-67-161-144-217.hsd1.co.comcast.net. [67.161.144.217]) by mx.google.com with ESMTPS id u39sm15204420yhe.5.2012.01.27.11.18.31 (version=TLSv1/SSLv3 cipher=OTHER); Fri, 27 Jan 2012 11:18:32 -0800 (PST)
References: <7AE6A4247B044C4ABE0A5B6BF427F8E290D14D@dfweml503-mbx> <201201271829.q0RIT9N7027486@cichlid.raleigh.ibm.com>
In-Reply-To: <201201271829.q0RIT9N7027486@cichlid.raleigh.ibm.com>
Mime-Version: 1.0 (1.0)
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset=us-ascii
Message-Id: <8122B69C-6EE4-4EBB-88C5-2649ABDA5872@unbehagen.net>
X-Mailer: iPad Mail (9A405)
From: Paul Unbehagen <paul@unbehagen.net>
Date: Fri, 27 Jan 2012 12:18:28 -0700
To: Thomas Narten <narten@us.ibm.com>
Cc: "dc@ietf.org" <dc@ietf.org>, AshwoodsmithPeter <Peter.AshwoodSmith@huawei.com>, "david.black@emc.com" <david.black@emc.com>, Dinesh Dutt <ddutt@cisco.com>, Linda Dunbar <linda.dunbar@huawei.com>, Murari Sridharan <muraris@microsoft.com>, "kreeger@cisco.com" <kreeger@cisco.com>
Subject: Re: [dc] comments and suggestions to draft-narten-nv03-overlay-problem-statment-01
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 27 Jan 2012 19:18:34 -0000

Also, all implementations and deployments of SPB are SPBM; the scaling and ability to deal with MAC explosions help in the DC.

--
Paul Unbehagen

Sent from my iPad

On Jan 27, 2012, at 11:29 AM, Thomas Narten <narten@us.ibm.com> wrote:

>> Thomas, No such beasts as PBB-V and PBB-M ;) I think you are
>> referring to SPBV and SPBM which have the properties you describe.
> 
> Indeed! Too many acronyms... Sigh. :-(
> 
> Thomas
> 
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc

From narten@us.ibm.com  Fri Jan 27 11:24:36 2012
Return-Path: <narten@us.ibm.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 0AAAD21F85EA for <dc@ietfa.amsl.com>; Fri, 27 Jan 2012 11:24:36 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -109.71
X-Spam-Level: 
X-Spam-Status: No, score=-109.71 tagged_above=-999 required=5 tests=[AWL=0.889, BAYES_00=-2.599, RCVD_IN_DNSWL_HI=-8, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id MLANaZuQzsII for <dc@ietfa.amsl.com>; Fri, 27 Jan 2012 11:24:35 -0800 (PST)
Received: from e9.ny.us.ibm.com (e9.ny.us.ibm.com [32.97.182.139]) by ietfa.amsl.com (Postfix) with ESMTP id 63C1821F85D4 for <dc@ietf.org>; Fri, 27 Jan 2012 11:24:35 -0800 (PST)
Received: from /spool/local by e9.ny.us.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted for <dc@ietf.org> from <narten@us.ibm.com>; Fri, 27 Jan 2012 14:24:31 -0500
Received: from d01dlp01.pok.ibm.com (9.56.224.56) by e9.ny.us.ibm.com (192.168.1.109) with IBM ESMTP SMTP Gateway: Authorized Use Only! Violators will be prosecuted;  Fri, 27 Jan 2012 14:24:28 -0500
Received: from d01relay02.pok.ibm.com (d01relay02.pok.ibm.com [9.56.227.234]) by d01dlp01.pok.ibm.com (Postfix) with ESMTP id 2F2E938C805E for <dc@ietf.org>; Fri, 27 Jan 2012 14:24:28 -0500 (EST)
Received: from d01av04.pok.ibm.com (d01av04.pok.ibm.com [9.56.224.64]) by d01relay02.pok.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id q0RJO5FO220114 for <dc@ietf.org>; Fri, 27 Jan 2012 14:24:13 -0500
Received: from d01av04.pok.ibm.com (loopback [127.0.0.1]) by d01av04.pok.ibm.com (8.14.4/8.13.1/NCO v10.0 AVout) with ESMTP id q0RJO4q9008673 for <dc@ietf.org>; Fri, 27 Jan 2012 14:24:05 -0500
Received: from cichlid.raleigh.ibm.com (sig-9-76-135-189.mts.ibm.com [9.76.135.189]) by d01av04.pok.ibm.com (8.14.4/8.13.1/NCO v10.0 AVin) with ESMTP id q0RJO2t3008580 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Fri, 27 Jan 2012 14:24:03 -0500
Received: from cichlid.raleigh.ibm.com (localhost [127.0.0.1]) by cichlid.raleigh.ibm.com (8.14.5/8.12.5) with ESMTP id q0RJO0mc027840; Fri, 27 Jan 2012 14:24:01 -0500
Message-Id: <201201271924.q0RJO0mc027840@cichlid.raleigh.ibm.com>
To: Paul Unbehagen <paul@unbehagen.net>
In-reply-to: <8122B69C-6EE4-4EBB-88C5-2649ABDA5872@unbehagen.net>
References: <7AE6A4247B044C4ABE0A5B6BF427F8E290D14D@dfweml503-mbx> <201201271829.q0RIT9N7027486@cichlid.raleigh.ibm.com> <8122B69C-6EE4-4EBB-88C5-2649ABDA5872@unbehagen.net>
Comments: In-reply-to Paul Unbehagen <paul@unbehagen.net> message dated "Fri, 27 Jan 2012 12:18:28 -0700."
Date: Fri, 27 Jan 2012 14:23:59 -0500
From: Thomas Narten <narten@us.ibm.com>
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 12012719-7182-0000-0000-0000009753AE
Cc: "dc@ietf.org" <dc@ietf.org>, AshwoodsmithPeter <Peter.AshwoodSmith@huawei.com>, "david.black@emc.com" <david.black@emc.com>, Dinesh Dutt <ddutt@cisco.com>, Linda Dunbar <linda.dunbar@huawei.com>, Murari Sridharan <muraris@microsoft.com>, "kreeger@cisco.com" <kreeger@cisco.com>
Subject: Re: [dc] comments and suggestions to draft-narten-nv03-overlay-problem-statment-01
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 27 Jan 2012 19:24:36 -0000

> Also, all implementations and deployments of SPB are SPBm, the
>  scaling and ability to deal with Mac explosions help in the DC.

Can others confirm that SPB-M is what deployments are using? And that
SPB-V is essentially irrelevant?

The benefits of SPB-M over SPB-V seem pretty compelling to me. It
would be nice if we could focus all SPB discussions on SPB-M rather
than worry about SPB-V.

Thomas


From Peter.AshwoodSmith@huawei.com  Fri Jan 27 11:24:56 2012
Return-Path: <Peter.AshwoodSmith@huawei.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 0C6D821F861B for <dc@ietfa.amsl.com>; Fri, 27 Jan 2012 11:24:56 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.499
X-Spam-Level: 
X-Spam-Status: No, score=-2.499 tagged_above=-999 required=5 tests=[AWL=0.100,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id i6QG4ypZos3F for <dc@ietfa.amsl.com>; Fri, 27 Jan 2012 11:24:55 -0800 (PST)
Received: from dfwrgout.huawei.com (dfwrgout.huawei.com [206.16.17.72]) by ietfa.amsl.com (Postfix) with ESMTP id 279EF21F85D4 for <dc@ietf.org>; Fri, 27 Jan 2012 11:24:50 -0800 (PST)
Received: from 172.18.9.243 (EHLO dfweml201-edg.china.huawei.com) ([172.18.9.243]) by dfwrg01-dlp.huawei.com (MOS 4.2.3-GA FastPath) with ESMTP id ACZ00531; Fri, 27 Jan 2012 14:24:49 -0500 (EST)
Received: from DFWEML404-HUB.china.huawei.com (10.193.5.203) by dfweml201-edg.china.huawei.com (172.18.9.107) with Microsoft SMTP Server (TLS) id 14.1.323.3; Fri, 27 Jan 2012 11:22:03 -0800
Received: from DFWEML503-MBX.china.huawei.com ([10.124.31.29]) by dfweml404-hub.china.huawei.com ([10.193.5.203]) with mapi id 14.01.0323.003; Fri, 27 Jan 2012 11:22:02 -0800
From: AshwoodsmithPeter <Peter.AshwoodSmith@huawei.com>
To: Thomas Narten <narten@us.ibm.com>
Thread-Topic: [dc] comments and suggestions to draft-narten-nv03-overlay-problem-statment-01
Thread-Index: AQHM3SGOQKud3CAYeUScvvWj8JJYcZYgiVVQ
Date: Fri, 27 Jan 2012 19:22:02 +0000
Message-ID: <7AE6A4247B044C4ABE0A5B6BF427F8E290D2A6@dfweml503-mbx>
In-Reply-To: <201201271829.q0RIT9N7027486@cichlid.raleigh.ibm.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.193.60.134]
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-CFilter-Loop: Reflected
Cc: "dc@ietf.org" <dc@ietf.org>, "david.black@emc.com" <david.black@emc.com>, Dinesh Dutt <ddutt@cisco.com>, Linda Dunbar <linda.dunbar@huawei.com>, Paul Unbehagen <paul@unbehagen.net>, Murari Sridharan <muraris@microsoft.com>, "kreeger@cisco.com" <kreeger@cisco.com>
Subject: Re: [dc] comments and suggestions to draft-narten-nv03-overlay-problem-statment-01
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 27 Jan 2012 19:24:56 -0000

Agreed; just for clarity, Shortest Path Bridging Mac-in-Mac mode (SPBM) is the one that matters in this context (DC), and SPBM 'basically' uses the PBB (mac-in-mac) datapath.

On the ENCAP location, I suppose you 'could' do the full PBB encap in a hypervisor, if that hypervisor were controlling 1000's of VMs, but otherwise the only part of the PBB encap that would make sense in the hypervisor would perhaps be to add the ISID (I-TAG) instead of (C/S-TAGs) and save the TOR the trouble of the mapping.
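
The I-TAG addition described above can be illustrated with a small sketch of the IEEE 802.1ah (PBB, Mac-in-Mac) header layout: backbone B-DA and B-SA, a B-TAG (TPID 0x88A8), then the I-TAG (TPID 0x88E7) carrying the 24-bit I-SID, followed by the unchanged customer frame. This sketch is not from the thread; `pbb_encap` is a hypothetical helper shown only to make the field layout concrete.

```python
import struct

def pbb_encap(customer_frame: bytes, b_da: bytes, b_sa: bytes,
              b_vid: int, isid: int) -> bytes:
    """Wrap a customer Ethernet frame in an 802.1ah backbone header:
    B-DA, B-SA, B-TAG, I-TAG, then the original frame (C-DA onward)."""
    # B-TAG: TPID 0x88A8 + TCI (PCP/DEI left zero, 12-bit B-VID)
    b_tag = struct.pack("!HH", 0x88A8, b_vid & 0x0FFF)
    # I-TAG: TPID 0x88E7 + 32-bit word (flag bits left zero, 24-bit I-SID)
    i_tag = struct.pack("!HI", 0x88E7, isid & 0x00FFFFFF)
    return b_da + b_sa + b_tag + i_tag + customer_frame

# Minimal customer frame: C-DA + C-SA (zeroed here), EtherType, payload
inner = bytes(12) + struct.pack("!H", 0x0800) + b"payload"
outer = pbb_encap(inner, b_da=bytes(6), b_sa=bytes(6), b_vid=10, isid=0x123456)
```

The backbone header adds 22 bytes (6 + 6 + 4 + 6); a hypervisor that attached only the I-TAG, as suggested above, would leave the B-DA/B-SA/B-TAG portion to the TOR.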

Peter


-----Original Message-----
From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Thomas Narten
Sent: Friday, January 27, 2012 1:29 PM
To: AshwoodsmithPeter
Cc: dc@ietf.org; david.black@emc.com; Dinesh Dutt; Linda Dunbar; Paul Unbehagen; Murari Sridharan; kreeger@cisco.com
Subject: Re: [dc] comments and suggestions to draft-narten-nv03-overlay-problem-statment-01

> Thomas, No such beasts as PBB-V and PBB-M ;) I think you are
>  referring to SPBV and SPBM which have the properties you describe.

Indeed! Too many acronyms... Sigh. :-(

Thomas

_______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

From pthaler@broadcom.com  Fri Jan 27 13:35:37 2012
Return-Path: <pthaler@broadcom.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 1BAB821F86C8; Fri, 27 Jan 2012 13:35:37 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -6.299
X-Spam-Level: 
X-Spam-Status: No, score=-6.299 tagged_above=-999 required=5 tests=[AWL=-0.300, BAYES_00=-2.599, HTML_MESSAGE=0.001, J_CHICKENPOX_52=0.6, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id N8GIgiWy+Y8c; Fri, 27 Jan 2012 13:35:35 -0800 (PST)
Received: from MMS3.broadcom.com (mms3.broadcom.com [216.31.210.19]) by ietfa.amsl.com (Postfix) with ESMTP id 4841021F86CA; Fri, 27 Jan 2012 13:35:35 -0800 (PST)
Received: from [10.16.192.232] by MMS3.broadcom.com with ESMTP (Broadcom SMTP Relay (Email Firewall v6.3.2)); Fri, 27 Jan 2012 13:43:45 -0800
X-Server-Uuid: B55A25B1-5D7D-41F8-BC53-C57E7AD3C201
Received: from SJEXCHCAS01.corp.ad.broadcom.com (10.16.192.31) by SJEXCHHUB02.corp.ad.broadcom.com (10.16.192.232) with Microsoft SMTP Server (TLS) id 8.2.247.2; Fri, 27 Jan 2012 13:35:27 -0800
Received: from SJEXCHMB09.corp.ad.broadcom.com ( [fe80::3da7:665e:cc78:181f]) by sjexchcas01.corp.ad.broadcom.com ( [::1]) with mapi id 14.01.0355.002; Fri, 27 Jan 2012 13:35:13 -0800
From: "Pat Thaler" <pthaler@broadcom.com>
To: "Linda Dunbar" <linda.dunbar@huawei.com>, "Rakesh Saha" <rsaha@us.ibm.com>
Thread-Topic: [dc] comments and suggestions to draft-narten-nv03-overlay-problem-statment-01
Thread-Index: Aczcic4dhzNCqBUxQSqOznX8P2avFQAgIy2AAAJDuRAACPsCQA==
Date: Fri, 27 Jan 2012 21:35:13 +0000
Message-ID: <EB9B93801780FD4CA165E0FBCB3C3E6701A46B@SJEXCHMB09.corp.ad.broadcom.com>
References: <4A95BA014132FF49AE685FAB4B9F17F632E17F2D@dfweml505-mbx> <OFF6866E01.3C35AF59-ON88257992.0029C4E2-88257992.002A64BA@us.ibm.com> <4A95BA014132FF49AE685FAB4B9F17F632E181F3@dfweml505-mbx>
In-Reply-To: <4A95BA014132FF49AE685FAB4B9F17F632E181F3@dfweml505-mbx>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.240.250.119]
MIME-Version: 1.0
X-WSS-ID: 633DC51B50417942073-01-01
Content-Type: multipart/alternative; boundary=_000_EB9B93801780FD4CA165E0FBCB3C3E6701A46BSJEXCHMB09corpadb_
Cc: "dc@ietf.org" <dc@ietf.org>, "david.black@emc.com" <david.black@emc.com>, Dinesh Dutt <ddutt@cisco.com>, "dc-bounces@ietf.org" <dc-bounces@ietf.org>, Murari Sridharan <muraris@microsoft.com>, "kreeger@cisco.com" <kreeger@cisco.com>, "narten@rotala.raleigh.ibm.com" <narten@rotala.raleigh.ibm.com>
Subject: Re: [dc] comments and suggestions to draft-narten-nv03-overlay-problem-statment-01
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 27 Jan 2012 21:35:37 -0000

--_000_EB9B93801780FD4CA165E0FBCB3C3E6701A46BSJEXCHMB09corpadb_
Content-Type: text/plain;
 charset=utf-8
Content-Transfer-Encoding: 8bit

Linda,

Sometimes both can be true of the same server. A server that can attach a VLAN-ID to data frames coming out may not be configured to do so. A large part of the problem for VLANs and overlay networks with virtualization is a control plane problem on how to get the network configuration information where it needs to be as virtual servers are moved.

5.3 doesn’t say that the virtual NIC adds the VLAN IDs to frames or that the virtual NIC itself triggers the association.

“Typically, it is a virtual NIC (the one connected to the VM)
 coming up that triggers this association. The access switch can then
   determine the VNID to be associated with this virtual NIC.”

To me this says that bringing up the virtual NIC causes something to trigger an association of a VNID with a VM. That something may not be the vNIC. The association might be made by the hypervisor as part of the installation of the virtual NIC.

For example, in the draft of IEEE 802.1Qbg, a hypervisor can use VDP (VSI Discovery and configuration Protocol) to tell the attached switch that a VSI (Virtual Station Interface, e.g. a vNIC) is coming up and ask it to create the association. VDP can either tell the attached switch what VLAN to use for the VSI or it can leave the VID field null to let the attached switch fill in the VLAN based on information such as VSI profile identifier and a 32-bit GroupID.

Regards,
Pat


From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Linda Dunbar
Sent: Friday, January 27, 2012 8:51 AM
To: Rakesh Saha
Cc: dc@ietf.org; david.black@emc.com; Dinesh Dutt; dc-bounces@ietf.org; Murari Sridharan; kreeger@cisco.com; narten@rotala.raleigh.ibm.com
Subject: Re: [dc] comments and suggestions to draft-narten-nv03-overlay-problem-statment-01

Rakesh,

I agree with you that some traditional servers can attach VLAN-ID for data frames coming out.
But many servers deployed today don’t attach VID.


Linda

From: Rakesh Saha [mailto:rsaha@us.ibm.com]
Sent: Friday, January 27, 2012 1:43 AM
To: Linda Dunbar
Cc: david.black@emc.com; dc@ietf.org; dc-bounces@ietf.org; Dinesh Dutt; kreeger@cisco.com; Murari Sridharan; narten@rotala.raleigh.ibm.com
Subject: Re: [dc] comments and suggestions to draft-narten-nv03-overlay-problem-statment-01

Hi Linda,

>You stated that “typically, it is a virtual NIC coming up that triggers this association”. It is not quite right. Virtual NIC, same as physical NIC for a non-virtualized server, usually is not aware of which >network segment it is attached to. For Ethernet data frames coming out from a traditional server (i.e. server without virtualization), the data frame is not tagged, i.e. not having VID associated with it. It is >the first switch which assign the VID for the data frame.

Regarding "For Ethernet data frames coming out from a traditional server (i.e. server without virtualization), the data frame is not tagged", ... "It is >the first switch which assign the VID for the data frame" ...

non-virtualized servers are capable of sending VLAN tagged packets. It is a standard feature of today’s traditional Operating Systems (NIC drivers) to allow VLANs to be associated with an ethernet interface.
So a traditional non-virtualized server may send tagged and untagged frames.

Thanks,
Rakesh.
-=-


From:        Linda Dunbar <linda.dunbar@huawei.com>
To:        narten@rotala.raleigh.ibm.com, "david.black@emc.com" <david.black@emc.com>, Murari Sridharan <muraris@microsoft.com>, Dinesh Dutt <ddutt@cisco.com>, "kreeger@cisco.com" <kreeger@cisco.com>
Cc:        "dc@ietf.org" <dc@ietf.org>
Date:        01/26/2012 04:25 PM
Subject:        [dc] comments and suggestions to draft-narten-nv03-overlay-problem-statment-01
Sent by:        dc-bounces@ietf.org
________________________________


Thomas, et al,

Here are my comments to draft-narten-nv03-overlay-problem-statment-01 and suggested wording change:

3.1. Limitations of Existing Virtual Network Models.

Since IEEE802.1 defined VLAN separation is mentioned, it would be appropriate to mention PBB’s ISID separation.

The limitation for PBB and VLAN should include that MAC addresses can’t be aggregated, therefore the forwarding table can be very large for large data centers.

Second paragraph: Why is “VLANs are a pure bridging construct while VRF is pure routing construct” a problem?

4. Network Overlays

Should add a subsection to describe Virtual Network Instance ID.

4.x. Virtual Network Instance ID

Virtual Network Instance is for segregating traffic belonging to different tenants or different zones of one tenant. When a data center uses Overlay Network to hide hosts' addresses, it is important that the Virtual Network Instance identifier can properly represent the zones or the tenants in the entire data center, especially when the overlay edge nodes are access switches, i.e. not embedded in hypervisors.

When overlay edge nodes are access switches, the data frames before entering the Overlay Network or after exiting the Overlay Network might carry traditional VLAN-ID for proper traffic segregation. The Virtual Network Instance ID value carried by the Overlay Header of the data frames might be 24 bits (as described in 3.2). Those VLAN-IDs for data frames under each overlay edge node are only locally significant. Proper mapping has to be maintained.

6.2 (TRILL) & 6.3 (L2VPN)

It is necessary to point out that both TRILL and L2VPN carry the VLAN-ID embedded in the original Ethernet frames across the Overlay Network, and the VLAN-ID maintains the same meaning in two separate L2 islands.

When VLAN-ID tagged Ethernet frames traverse across the Overlay Network for a Data Center, the VLAN-ID carried by the Ethernet frames loses its significance. In other words, a different VLAN-ID might be re-assigned to the Ethernet frames by the egress overlay edge.


5.3. Associating a VNID with an endpoint.

You stated that “typically, it is a virtual NIC coming up that triggers this association”. It is not quite right. Virtual NIC, same as physical NIC for a non-virtualized server, usually is not aware of which network segment it is attached to. For Ethernet data frames coming out from a traditional server (i.e. server without virtualization), the data frame is not tagged, i.e. not having VID associated with it. It is the first switch which assigns the VID for the data frame.

It is more accurate to say it is the “virtual switch” or “Hypervisor” within the server which can trigger the association.

Since the overlay header can also be added by first access switches (i.e. switches not embedded in physical servers), it is very possible that data frames arriving at the overlay edge (i.e. the first access switch) are already VLAN tagged. Then the association (or mapping) from the VLAN-Tag to VNID has to be operator administered.


Linda Dunbar
 _______________________________________________
dc mailing list
dc@ietf.org
https://www.ietf.org/mailman/listinfo/dc

--_000_EB9B93801780FD4CA165E0FBCB3C3E6701A46BSJEXCHMB09corpadb_
Content-Type: text/html;
 charset=utf-8
Content-Transfer-Encoding: base64

PGh0bWwgeG1sbnM6dj0idXJuOnNjaGVtYXMtbWljcm9zb2Z0LWNvbTp2bWwiIHhtbG5zOm89InVy
bjpzY2hlbWFzLW1pY3Jvc29mdC1jb206b2ZmaWNlOm9mZmljZSIgeG1sbnM6dz0idXJuOnNjaGVt
YXMtbWljcm9zb2Z0LWNvbTpvZmZpY2U6d29yZCIgeG1sbnM6bT0iaHR0cDovL3NjaGVtYXMubWlj
cm9zb2Z0LmNvbS9vZmZpY2UvMjAwNC8xMi9vbW1sIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcv
VFIvUkVDLWh0bWw0MCI+DQo8aGVhZD4NCjxtZXRhIGh0dHAtZXF1aXY9IkNvbnRlbnQtVHlwZSIg
Y29udGVudD0idGV4dC9odG1sOyBjaGFyc2V0PXV0Zi04Ij4NCjxtZXRhIG5hbWU9IkdlbmVyYXRv
ciIgY29udGVudD0iTWljcm9zb2Z0IFdvcmQgMTQgKGZpbHRlcmVkIG1lZGl1bSkiPg0KPCEtLVtp
ZiAhbXNvXT48c3R5bGU+dlw6KiB7YmVoYXZpb3I6dXJsKCNkZWZhdWx0I1ZNTCk7fQ0Kb1w6KiB7
YmVoYXZpb3I6dXJsKCNkZWZhdWx0I1ZNTCk7fQ0Kd1w6KiB7YmVoYXZpb3I6dXJsKCNkZWZhdWx0
I1ZNTCk7fQ0KLnNoYXBlIHtiZWhhdmlvcjp1cmwoI2RlZmF1bHQjVk1MKTt9DQo8L3N0eWxlPjwh
W2VuZGlmXS0tPjxzdHlsZT48IS0tDQovKiBGb250IERlZmluaXRpb25zICovDQpAZm9udC1mYWNl
DQoJe2ZvbnQtZmFtaWx5OkNhbGlicmk7DQoJcGFub3NlLTE6MiAxNSA1IDIgMiAyIDQgMyAyIDQ7
fQ0KQGZvbnQtZmFjZQ0KCXtmb250LWZhbWlseTpUYWhvbWE7DQoJcGFub3NlLTE6MiAxMSA2IDQg
MyA1IDQgNCAyIDQ7fQ0KLyogU3R5bGUgRGVmaW5pdGlvbnMgKi8NCnAuTXNvTm9ybWFsLCBsaS5N
c29Ob3JtYWwsIGRpdi5Nc29Ob3JtYWwNCgl7bWFyZ2luOjBpbjsNCgltYXJnaW4tYm90dG9tOi4w
MDAxcHQ7DQoJZm9udC1zaXplOjEyLjBwdDsNCglmb250LWZhbWlseToiVGltZXMgTmV3IFJvbWFu
Iiwic2VyaWYiO30NCmE6bGluaywgc3Bhbi5Nc29IeXBlcmxpbmsNCgl7bXNvLXN0eWxlLXByaW9y
aXR5Ojk5Ow0KCWNvbG9yOmJsdWU7DQoJdGV4dC1kZWNvcmF0aW9uOnVuZGVybGluZTt9DQphOnZp
c2l0ZWQsIHNwYW4uTXNvSHlwZXJsaW5rRm9sbG93ZWQNCgl7bXNvLXN0eWxlLXByaW9yaXR5Ojk5
Ow0KCWNvbG9yOnB1cnBsZTsNCgl0ZXh0LWRlY29yYXRpb246dW5kZXJsaW5lO30NCnByZQ0KCXtt
c28tc3R5bGUtcHJpb3JpdHk6OTk7DQoJbXNvLXN0eWxlLWxpbms6IkhUTUwgUHJlZm9ybWF0dGVk
IENoYXIiOw0KCW1hcmdpbjowaW47DQoJbWFyZ2luLWJvdHRvbTouMDAwMXB0Ow0KCWZvbnQtc2l6
ZToxMi4wcHQ7DQoJZm9udC1mYW1pbHk6IkNvdXJpZXIgTmV3Ijt9DQp0dA0KCXttc28tc3R5bGUt
cHJpb3JpdHk6OTk7DQoJZm9udC1mYW1pbHk6IkNvdXJpZXIgTmV3Ijt9DQpwLk1zb0FjZXRhdGUs
IGxpLk1zb0FjZXRhdGUsIGRpdi5Nc29BY2V0YXRlDQoJe21zby1zdHlsZS1wcmlvcml0eTo5OTsN
Cgltc28tc3R5bGUtbGluazoiQmFsbG9vbiBUZXh0IENoYXIiOw0KCW1hcmdpbjowaW47DQoJbWFy
Z2luLWJvdHRvbTouMDAwMXB0Ow0KCWZvbnQtc2l6ZTo4LjBwdDsNCglmb250LWZhbWlseToiVGFo
b21hIiwic2Fucy1zZXJpZiI7fQ0Kc3Bhbi5FbWFpbFN0eWxlMTgNCgl7bXNvLXN0eWxlLXR5cGU6
cGVyc29uYWw7DQoJZm9udC1mYW1pbHk6IkNhbGlicmkiLCJzYW5zLXNlcmlmIjsNCgljb2xvcjoj
MUY0OTdEO30NCnNwYW4uQmFsbG9vblRleHRDaGFyDQoJe21zby1zdHlsZS1uYW1lOiJCYWxsb29u
IFRleHQgQ2hhciI7DQoJbXNvLXN0eWxlLXByaW9yaXR5Ojk5Ow0KCW1zby1zdHlsZS1saW5rOiJC
YWxsb29uIFRleHQiOw0KCWZvbnQtZmFtaWx5OiJUYWhvbWEiLCJzYW5zLXNlcmlmIjt9DQpzcGFu
LkVtYWlsU3R5bGUyMQ0KCXttc28tc3R5bGUtdHlwZTpwZXJzb25hbC1yZXBseTsNCglmb250LWZh
bWlseToiQ2FsaWJyaSIsInNhbnMtc2VyaWYiOw0KCWNvbG9yOiMxRjQ5N0Q7fQ0Kc3Bhbi5IVE1M
UHJlZm9ybWF0dGVkQ2hhcg0KCXttc28tc3R5bGUtbmFtZToiSFRNTCBQcmVmb3JtYXR0ZWQgQ2hh
ciI7DQoJbXNvLXN0eWxlLXByaW9yaXR5Ojk5Ow0KCW1zby1zdHlsZS1saW5rOiJIVE1MIFByZWZv
cm1hdHRlZCI7DQoJZm9udC1mYW1pbHk6IkNvdXJpZXIgTmV3Ijt9DQouTXNvQ2hwRGVmYXVsdA0K
CXttc28tc3R5bGUtdHlwZTpleHBvcnQtb25seTsNCglmb250LXNpemU6MTAuMHB0O30NCkBwYWdl
IFdvcmRTZWN0aW9uMQ0KCXtzaXplOjguNWluIDExLjBpbjsNCgltYXJnaW46MS4waW4gMS4yNWlu
IDEuMGluIDEuMjVpbjt9DQpkaXYuV29yZFNlY3Rpb24xDQoJe3BhZ2U6V29yZFNlY3Rpb24xO30N
Ci0tPjwvc3R5bGU+PCEtLVtpZiBndGUgbXNvIDldPjx4bWw+DQo8bzpzaGFwZWRlZmF1bHRzIHY6
ZXh0PSJlZGl0IiBzcGlkbWF4PSIxMDI2IiAvPg0KPC94bWw+PCFbZW5kaWZdLS0+PCEtLVtpZiBn
dGUgbXNvIDldPjx4bWw+DQo8bzpzaGFwZWxheW91dCB2OmV4dD0iZWRpdCI+DQo8bzppZG1hcCB2
OmV4dD0iZWRpdCIgZGF0YT0iMSIgLz4NCjwvbzpzaGFwZWxheW91dD48L3htbD48IVtlbmRpZl0t
LT4NCjwvaGVhZD4NCjxib2R5IGxhbmc9IkVOLVVTIiBsaW5rPSJibHVlIiB2bGluaz0icHVycGxl
Ij4NCjxkaXYgY2xhc3M9IldvcmRTZWN0aW9uMSI+DQo8cCBjbGFzcz0iTXNvTm9ybWFsIj48c3Bh
biBzdHlsZT0iZm9udC1zaXplOjExLjBwdDtmb250LWZhbWlseTomcXVvdDtDYWxpYnJpJnF1b3Q7
LCZxdW90O3NhbnMtc2VyaWYmcXVvdDs7Y29sb3I6IzFGNDk3RCI+TGluZGEsPG86cD48L286cD48
L3NwYW4+PC9wPg0KPHAgY2xhc3M9Ik1zb05vcm1hbCI+PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZTox
MS4wcHQ7Zm9udC1mYW1pbHk6JnF1b3Q7Q2FsaWJyaSZxdW90OywmcXVvdDtzYW5zLXNlcmlmJnF1
b3Q7O2NvbG9yOiMxRjQ5N0QiPjxvOnA+Jm5ic3A7PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNz
PSJNc29Ob3JtYWwiPjxzcGFuIHN0eWxlPSJmb250LXNpemU6MTEuMHB0O2ZvbnQtZmFtaWx5OiZx
dW90O0NhbGlicmkmcXVvdDssJnF1b3Q7c2Fucy1zZXJpZiZxdW90Oztjb2xvcjojMUY0OTdEIj5T
b21ldGltZXMgYm90aCBjYW4gYmUgdHJ1ZSBvZiB0aGUgc2FtZSBzZXJ2ZXIuIEEgc2VydmVyIHRo
YXQgaXMgY2FuIGF0dGFjaCBhIFZMQU4tSUQgdG8gZGF0YSBmcmFtZXMgY29taW5nIG91dCBtYXkg
bm90IGJlIGNvbmZpZ3VyZWQgdG8gZG8gc28uIEEgbGFyZ2UgcGFydA0KIG9mIHRoZSBwcm9ibGVt
IGZvciBWTEFOcyBhbmQgb3ZlcmxheSBuZXR3b3JrcyB3aXRoIHZpcnR1YWxpemF0aW9uIGlzIGEg
Y29udHJvbCBwbGFuZSBwcm9ibGVtIG9uIGhvdyB0byBnZXQgdGhlIG5ldHdvcmsgY29uZmlndXJh
dGlvbiBpbmZvcm1hdGlvbiB3aGVyZSBpdCBuZWVkcyB0byBiZSBhcyB2aXJ0dWFsIHNlcnZlcnMg
YXJlIG1vdmVkLg0KPG86cD48L286cD48L3NwYW4+PC9wPg0KPHAgY2xhc3M9Ik1zb05vcm1hbCI+
PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZToxMS4wcHQ7Zm9udC1mYW1pbHk6JnF1b3Q7Q2FsaWJyaSZx
dW90OywmcXVvdDtzYW5zLXNlcmlmJnF1b3Q7O2NvbG9yOiMxRjQ5N0QiPjxvOnA+Jm5ic3A7PC9v
OnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJNc29Ob3JtYWwiPjxzcGFuIHN0eWxlPSJmb250LXNp

--_000_EB9B93801780FD4CA165E0FBCB3C3E6701A46BSJEXCHMB09corpadb_--


From Peter.AshwoodSmith@huawei.com  Fri Jan 27 13:38:54 2012
Return-Path: <Peter.AshwoodSmith@huawei.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 0B94A21F8675 for <dc@ietfa.amsl.com>; Fri, 27 Jan 2012 13:38:54 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.524
X-Spam-Level: 
X-Spam-Status: No, score=-2.524 tagged_above=-999 required=5 tests=[AWL=0.075,  BAYES_00=-2.599]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id MC0RPlrQmBeM for <dc@ietfa.amsl.com>; Fri, 27 Jan 2012 13:38:52 -0800 (PST)
Received: from dfwrgout.huawei.com (dfwrgout.huawei.com [206.16.17.72]) by ietfa.amsl.com (Postfix) with ESMTP id 47C3B21F8650 for <dc@ietf.org>; Fri, 27 Jan 2012 13:38:52 -0800 (PST)
Received: from 172.18.9.243 (EHLO dfweml201-edg.china.huawei.com) ([172.18.9.243]) by dfwrg02-dlp.huawei.com (MOS 4.2.3-GA FastPath) with ESMTP id ACR83500; Fri, 27 Jan 2012 16:38:52 -0500 (EST)
Received: from DFWEML403-HUB.china.huawei.com (10.193.5.151) by dfweml201-edg.china.huawei.com (172.18.9.107) with Microsoft SMTP Server (TLS) id 14.1.323.3; Fri, 27 Jan 2012 13:36:26 -0800
Received: from DFWEML503-MBX.china.huawei.com ([10.124.31.29]) by dfweml403-hub.china.huawei.com ([10.193.5.151]) with mapi id 14.01.0323.003; Fri, 27 Jan 2012 13:36:27 -0800
From: AshwoodsmithPeter <Peter.AshwoodSmith@huawei.com>
To: Thomas Narten <narten@us.ibm.com>, Paul Unbehagen <paul@unbehagen.net>
Thread-Topic: [dc] comments and suggestions to draft-narten-nv03-overlay-problem-statment-01
Thread-Index: AQHM3Sh75ODcms7W/Uua1gvAD5+CBZYhHjuA//+aheA=
Date: Fri, 27 Jan 2012 21:36:26 +0000
Message-ID: <7AE6A4247B044C4ABE0A5B6BF427F8E290D35B@dfweml503-mbx>
In-Reply-To: <201201271924.q0RJO0mc027840@cichlid.raleigh.ibm.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.193.60.134]
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-CFilter-Loop: Reflected
Cc: "dc@ietf.org" <dc@ietf.org>, "david.black@emc.com" <david.black@emc.com>, Dinesh Dutt <ddutt@cisco.com>, Linda Dunbar <linda.dunbar@huawei.com>, Murari Sridharan <muraris@microsoft.com>, "kreeger@cisco.com" <kreeger@cisco.com>
Subject: Re: [dc] comments and suggestions to draft-narten-nv03-overlay-problem-statment-01
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Fri, 27 Jan 2012 21:38:54 -0000

Thomas, I'm not aware of any SPB-V implementations ... yet ... but there are a number of interoperable SPB-M implementations (see http://ieee802.org/1/files/public/docs2011/aq-ashwood-smith-spbm-3rd-interop-0718-v01.pdf ).

My understanding is that there are a bunch of live deployments of SPB-M already, including DCs, but I am not privy to all the details, of course.

So I think concentrating on SPB-M would make sense, but if somebody feels otherwise, by all means speak up.

Peter Ashwood-Smith


-----Original Message-----
From: Thomas Narten [mailto:narten@us.ibm.com]
Sent: Friday, January 27, 2012 2:24 PM
To: Paul Unbehagen
Cc: AshwoodsmithPeter; dc@ietf.org; david.black@emc.com; Dinesh Dutt; Linda Dunbar; Murari Sridharan; kreeger@cisco.com
Subject: Re: [dc] comments and suggestions to draft-narten-nv03-overlay-problem-statment-01

> Also, all implementations and deployments of SPB are SPBm, the
>  scaling and ability to deal with Mac explosions help in the DC.

Can others confirm that SPB-M is what deployments are using? And that
SPB-V is essentially irrelevant?

The benefits of SPB-M over SPB-V seem pretty compelling to me. It
would be nice if we could focus all SPB discussions on SPB-M rather
than worry about SPB-V.

Thomas


From lmcm@tid.es  Sun Jan 29 08:35:05 2012
Return-Path: <lmcm@tid.es>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id BB9C021F84DA for <dc@ietfa.amsl.com>; Sun, 29 Jan 2012 08:35:05 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -2.741
X-Spam-Level: 
X-Spam-Status: No, score=-2.741 tagged_above=-999 required=5 tests=[AWL=-2.557, BAYES_40=-0.185, HTML_MESSAGE=0.001]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id JasPWQrLSwKx for <dc@ietfa.amsl.com>; Sun, 29 Jan 2012 08:35:04 -0800 (PST)
Received: from tidos.tid.es (tidos.tid.es [195.235.93.44]) by ietfa.amsl.com (Postfix) with ESMTP id 54A5021F84DC for <dc@ietf.org>; Sun, 29 Jan 2012 08:35:02 -0800 (PST)
Received: from sbrightmailg01.hi.inet (sbrightmailg01.hi.inet [10.95.64.104]) by tid.hi.inet (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTP id <0LYK009ETJEC0P@tid.hi.inet> for dc@ietf.org; Sun, 29 Jan 2012 17:35:00 +0100 (MET)
Received: from tid (tid.hi.inet [10.95.64.10])	by sbrightmailg01.hi.inet (Symantec Messaging Gateway) with SMTP id D4.C2.02893.435752F4; Sun, 29 Jan 2012 17:35:00 +0100 (CET)
Received: from correo.tid.es (mailhost.hi.inet [10.95.64.100]) by tid.hi.inet (iPlanet Messaging Server 5.2 HotFix 2.14 (built Aug 8 2006)) with ESMTPS id <0LYK009EPJEC0P@tid.hi.inet> for dc@ietf.org; Sun, 29 Jan 2012 17:35:00 +0100 (MET)
Received: from EXCLU2K7.hi.inet ([10.95.67.65]) by htcasmad2.hi.inet ([192.168.0.2]) with mapi; Sun, 29 Jan 2012 17:35:00 +0100
Date: Sun, 29 Jan 2012 17:34:57 +0100
From: LUIS MIGUEL CONTRERAS MURILLO <lmcm@tid.es>
To: "dc@ietf.org" <dc@ietf.org>, "sdnp-bounces@lucidvision.com" <sdnp-bounces@lucidvision.com>
Message-id: <B348B152E5F11640B2247E54304E53FC590D5C8655@EXCLU2K7.hi.inet>
MIME-version: 1.0
Content-type: multipart/alternative; boundary="Boundary_(ID_4+fNgeRGymIJ6P9WGM/+WQ)"
Content-language: es-ES
Accept-Language: es-ES, en-US
Thread-topic: CfP "International Workshop on Cross-Stratum Optimization for Cloud Computing and Distributed Networked Applications"
Thread-index: Aczeo9YTOMYvBNgyTbmKW4j9tHbSxQ==
acceptlanguage: es-ES, en-US
X-AuditID: 0a5f4068-b7f2d6d000000b4d-46-4f2575344182
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
X-Brightmail-Tracker: H4sIAAAAAAAAA+NgFprBKsWRmVeSWpSXmKPExsXCFe/ApWtSqupvsOycvkXL+busDoweS5b8 ZApgjOKySUnNySxLLdK3S+DKeLH6KGvBjGmMFY/bLrM3MP5pYuxi5OCQEDCR+NTr1sXICWSK SVy4t56ti5GLQ0hgA6PExdXbmCCcr4wSXy5+ZoRwGhklXv3sZAPpZhFQldgzxx+km03AUGLW zkmsIDXCAk2MEvvOHWACSYgIJEhcXn6UGcTmFfCUeHlhLyuELSjxY/I9FhCbWSBXYuP+/6wQ trjEnF8TwWxGAVmJledPgy0WEWhnlDh05T3UUD2JOfP+QhXJSPxfvpcF4gcBiSV7zjND2KIS Lx//Y53AKDwLyb5ZSPbNQrIPwtaTuDF1ChuErS2xbOFrZghbV2LGv0MsyOILGNlXMYoVJxVl pmeU5CZm5qQbGOplZOpl5qWWbGKERE3GDsblO1UOMQpwMCrx8CruUvQXYk0sK67MPcQoycGk JMpbUKLqL8SXlJ9SmZFYnBFfVJqTWnyIUYKDWUmE13Gmir8Qb0piZVVqUT5MSoaDQ0mCNx+k TbAoNT21Ii0zB5gaYNJMHJwg7TxA7ZkgNbzFBYm5xZnpEPlTjKoc1xr3n2cUYsnLz0uVEucN BCkSACnKKM2Dm/OKURzoYGHeLJAsDzC5wU14BTScCWj4cwaw4SWJCCmpBkaL2ckCKjMcbU4u Fzvyb574oUNhXIvXzDi7pSh2eqgQ1+HFU9u3XdstdOFU14snm3Ud5lpvzA/akmf7bYqZ88Yd p36+tOaVCPlSeCV6u/PPO58vHJSIytF+cKL3OPP2x55/1abvkt99YHeoQNeGhWcfSvBczOKV dlyaEL8m28ju1xGPOZ0fz29vUGIpzkg01GIuKk4EAOl1CH8rAwAA
Subject: [dc] CfP "International Workshop on Cross-Stratum Optimization for Cloud Computing and Distributed Networked Applications"
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Sun, 29 Jan 2012 16:35:05 -0000

--Boundary_(ID_4+fNgeRGymIJ6P9WGM/+WQ)
Content-type: text/plain; charset=iso-8859-1
Content-transfer-encoding: quoted-printable

Dear colleagues,

I am forwarding you the following Call for Papers because IMHO it is aligned with the interests of this community.

I think this could be a good venue to disseminate original work that is part of, or based on, the work being carried out here.

Sorry for any inconvenience,

Thanks, best regards,

Luis


[Apologies if you receive multiple copies of this message]

Reminder
Call for Papers for the "International Workshop on Cross-Stratum Optimization for Cloud Computing and Distributed Networked Applications"
http://cccso.net/ or http://www.cccso.net/

Co-located with the 10th IEEE International Symposium on Parallel and Distributed Processing with Applications, ISPA 2012
http://www.arcos.inf.uc3m.es/ispa12/index.shtml
July 10-13, 2012

Leganes, Madrid, Spain

Aims and Scope
=============
The current lack of interaction between networked applications and the underlying network during service provisioning can cause inefficiencies in the use of network resources and can negatively impact the quality received by the final consumers of those applications.

Typical networked applications are offered through Information Technology (IT) resources (such as computing and storage facilities) residing in data centers. Data centers then provide the physical and virtual infrastructure in which applications and services are provided. Since the data centers are usually distributed geographically around a network, many decisions made in the control and management of application services, such as where to instantiate another service instance, or which data center out of several is assigned to a new customer, can have a significant impact on the state of the network. In the same way, the capabilities and state of the network can have a major impact on application performance.

Cross-stratum optimization (CSO) is defined as the combined optimization of both the IT data center and the network components of an application. It aims to provide joint resource optimization, responsiveness to quickly changing demands from application to network and vice versa, enhanced service resilience using cooperative recovery techniques, and quality-of-experience assurance through better use of existing network and application resources, among others.

CSO involves the overall optimization of application-layer (IT) and network resources by envisioning a next-generation architecture for interactions and exchanges between the two layers to improve service continuity, performance guarantees, scalability and manageability. The goal of this workshop is to promote research interest in the optimal integration of application and network resources.

This workshop aims to explore the challenges and issues faced by the integration of cloud computing and data centers with networks. The key areas of investigation to be discussed in the workshop are as follows:
.- Application/network integration architectures and subsystems
.- Use cases, business models and requirements for application/network inte=
gration
.- Control/management issues for application/network integration
.- Network virtualization and its impact for application/network integratio=
n
.- Network-aware application/cloud computing
.- Flexible and scalable networking solutions for distributed Data Centers
.- Joint application/network reliability and security
.- Experimental/trial experience
.- Scalability
.- Joint/shared performance and fault monitoring
.- Multi-domain issues

Important Dates
===============
Extended Submission Deadline: 17 February 2012 (*firm* deadline)
Paper Acceptance Notification: 30 March 2012
Camera-ready Paper Submissions: 13 April 2012
Tentative workshop day: 10 July 2012 (to be confirmed)

Venue
======
Universidad Carlos III de Madrid, Leganes, Madrid, Spain

Submission guidelines
====================
Target papers should describe original and unpublished work. All papers must be submitted in PDF format. Authors should submit full papers of up to 6 pages, strictly following the IEEE Computer Society Proceedings Manuscript style (available at http://www.computer.org/portal/web/cscps/formatting), using two-column, single-space format, with 10-point font size. Figures and references must be included in the 6 pages.
The submission process will be done online using the EasyChair submission system:
http://www.easychair.org/conferences/?conf=ispa2012
choosing this workshop among the various workshops held in conjunction with ISPA-2012.
Submissions received after the deadline, exceeding the length limit, or not following the specified format will not be considered.
The proceedings will be published by IEEE in the same volume as the main conference and will be made available online through IEEE Xplore.

Technical Program Committee
===========================
Richard Alimi (Google, USA)
Greg Bernstein (Grotto Networking, USA)
TaeSang Choi (ETRI, Korea)
Nicola Ciulli (Nextworks, Italy)
Oscar Gonzalez de Dios (Telefónica I+D, Spain)
Volker Hilt (Bell Labs, USA)
Giada Landi (Nextworks, Italy)
Dan Li (Huawei, China)
Vishwas Manral (HP, USA)
Thomas D. Nadeau (CA Technologies, USA)
Kohei Shiomoto (NTT, Japan)
Ning So (Verizon, USA)
Hui Yang (Beijing University Post and Telecommunication (BUPT), China)
Yang Richard Yang (Yale University, USA)

Workshop Organizing Committee
=============================
Young Lee, Huawei Technologies (leeyoung (at) huawei.com)
Luis M. Contreras, Telefónica I+D (lmcm (at) tid.es)
Andrea Fumagalli, The University of Texas at Dallas (andreaf (at) utdallas.edu)



_______________
Luis M. Contreras
Telefónica I+D
lmcm@tid.es


________________________________
This message is intended exclusively for its addressee. We only send and receive email on the basis of the terms set out at
http://www.tid.es/ES/PAGINAS/disclaimer.aspx

--Boundary_(ID_4+fNgeRGymIJ6P9WGM/+WQ)--

From vumip1@gmail.com  Mon Jan 30 18:39:11 2012
Return-Path: <vumip1@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id C37A521F862B for <dc@ietfa.amsl.com>; Mon, 30 Jan 2012 18:39:11 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.454
X-Spam-Level: 
X-Spam-Status: No, score=-3.454 tagged_above=-999 required=5 tests=[AWL=0.144,  BAYES_00=-2.599, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id yQEsrQb7nPew for <dc@ietfa.amsl.com>; Mon, 30 Jan 2012 18:39:10 -0800 (PST)
Received: from mail-iy0-f172.google.com (mail-iy0-f172.google.com [209.85.210.172]) by ietfa.amsl.com (Postfix) with ESMTP id B340721F8624 for <dc@ietf.org>; Mon, 30 Jan 2012 18:39:10 -0800 (PST)
Received: by iagf6 with SMTP id f6so7449486iag.31 for <dc@ietf.org>; Mon, 30 Jan 2012 18:39:06 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=jkpNLuGvtMXgh2gJChVamaxNxkADvlRqvM08QKYV2zc=; b=g52xIY1ZgaWbiH3y1EYxTUz6s9oIW9+VACX0mkY9Ms72oDKPmnZ5rzkcxORRHeaFIH pZR286C8YN4QNGPfpOdnVNLbcDCkYGu7Zp9aP+Hwgw7of4pjiPKLey4966vfc639DKKX 9goA3PWDnQ4+KUWtUFayV9puuJ+JzSx2ffkFk=
MIME-Version: 1.0
Received: by 10.42.132.137 with SMTP id d9mr16295345ict.3.1327977544682; Mon, 30 Jan 2012 18:39:04 -0800 (PST)
Received: by 10.50.140.102 with HTTP; Mon, 30 Jan 2012 18:39:04 -0800 (PST)
In-Reply-To: <F6F7A4AA-E0FA-4EF5-8BAF-2941F7F89C93@asgaard.org>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com> <CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com> <201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com> <CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com> <F6F7A4AA-E0FA-4EF5-8BAF-2941F7F89C93@asgaard.org>
Date: Mon, 30 Jan 2012 21:39:04 -0500
Message-ID: <CANtnpwg9A3vGUH+Tu4hr45pV_h6xVpQjnKyh_aBOBQT0KU71XA@mail.gmail.com>
From: Bhumip Khasnabish <vumip1@gmail.com>
To: Christopher LILJENSTOLPE <cdl@asgaard.org>
Content-Type: multipart/alternative; boundary=90e6ba6e84f4d20db904b7c9dd2c
Cc: Thomas Narten <narten@us.ibm.com>, dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 31 Jan 2012 02:39:11 -0000

--90e6ba6e84f4d20db904b7c9dd2c
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

Hello Chris,

Thanks.

>So, the big question is, are you interested in just addressing the network
>portion, or the larger problem (including containers, storage concurrency,
etc)?

We can start with a focus on the "network portion," bringing in only as much of the other aspects (concurrency, containers, de-duplication, etc.) as needed. Hope this helps.

Best.

Bhumip




On Wed, Jan 18, 2012 at 8:00 PM, Christopher LILJENSTOLPE
<cdl@asgaard.org>wrote:

> Greetings Bhumip,
>
>
> On 17Jan2012, at 22.33, Bhumip Khasnabish wrote:
>
> > Tom,
> >
> > Thanks.
> >
> > Yes, seamless migration of VM and VNE can be problematic in both intra-
> and
> > inter-data-center environments, especially in the multi-hypervisor case.
>
> Agreed, but it's more than just a network problem, is it not?  ESPECIALLY
> in a multi-hypervisor environment.  So, the big question is, are you
> interested in just addressing the network portion, or the larger problem
> (including containers, storage concurrency, etc)?  If the latter, do we
> have the experience and remit to work on that space?
>
>        Chris
>
> >
> > It may be very helpful to bring one or more of these
> > proprietary VM migration approaches to IETF for consideration
> > for standardization, if that is appropriate.
> > Sure, we'll update the draft to articulate these requirements.
> >
> > Best.
> >
> > Bhumip
> >
> >
> > On Tue, Jan 17, 2012 at 10:40 AM, Thomas Narten <narten@us.ibm.com>
> wrote:
> >
> >> Bhumip,
> >>
> >> I skimmed this document and am having trouble figuring out what it is
> >> intended to do.
> >>
> >> The draft name itself has "problem" in it, but there is no single (or
> >> small set of) succinct problems listed. It's all very high level and
> >> hand wavy. I need help making the connection to an IETF action that
> >> could come out of this document.
> >>
> >> For example, it talks about VM migration.
> >>
> >> Is VM Migration a "problem" today? There are proprietary approaches
> >> that the market seems to like OK.
> >>
> >> What is wrong with the current approaches? What is "broken" that needs
> >> fixing? Why should the IETF get involved in this space? What value
> >> would the IETF bring?
> >>
> >> Do you want to be able to do VM migration from one vendor's hypervisor
> >> to another vendor's?  If so, please just say so. Then we can see
> >> whether others here think that is an area the IETF (or some other SDO)
> >> should get involved in.
> >>
> >> Thomas
> >>
> >>
> > _______________________________________________
> > dc mailing list
> > dc@ietf.org
> > https://www.ietf.org/mailman/listinfo/dc
>
> --
> 李柯睿
> Check my PGP key here: https://www.asgaard.org/~cdl/cdl.asc
> Current vCard here: https://www.asgaard.org/~cdl/cdl.vcf
>
>

--90e6ba6e84f4d20db904b7c9dd2c--

From liu.bin21@zte.com.cn  Mon Jan 30 19:20:06 2012
Return-Path: <liu.bin21@zte.com.cn>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id A9E0B21F873E for <dc@ietfa.amsl.com>; Mon, 30 Jan 2012 19:20:06 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -99.238
X-Spam-Level: 
X-Spam-Status: No, score=-99.238 tagged_above=-999 required=5 tests=[BAYES_50=0.001, HTML_MESSAGE=0.001, RCVD_DOUBLE_IP_LOOSE=0.76, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id zF8R-Jn1imL1 for <dc@ietfa.amsl.com>; Mon, 30 Jan 2012 19:20:05 -0800 (PST)
Received: from mx5.zte.com.cn (mx6.zte.com.cn [95.130.199.165]) by ietfa.amsl.com (Postfix) with ESMTP id 4C5DF21F873C for <dc@ietf.org>; Mon, 30 Jan 2012 19:20:04 -0800 (PST)
Received: from [10.30.17.100] by mx5.zte.com.cn with surfront esmtp id 56690122734555; Tue, 31 Jan 2012 10:54:18 +0800 (CST)
Received: from [10.30.3.21] by [192.168.168.16] with StormMail ESMTP id 18432.122734555; Tue, 31 Jan 2012 11:19:49 +0800 (CST)
Received: (from root@localhost) by mse02.zte.com.cn id q0V3JsoK047235 for <dc@ietf.org>; Tue, 31 Jan 2012 11:19:54 +0800 (GMT-8) (envelope-from liu.bin21@zte.com.cn)
Received: from notes_smtp.zte.com.cn ([10.30.1.239]) by mse02.zte.com.cn with ESMTP id q0V3HrVA045866 for <dc@ietf.org>; Tue, 31 Jan 2012 11:17:53 +0800 (GMT-8) (envelope-from liu.bin21@zte.com.cn)
Message-Id: <201201310319.q0V3JsoK047235@mse02.zte.com.cn>
To: dc@ietf.org
MIME-Version: 1.0
X-Mailer: Lotus Notes Release 7.0.1 January 17, 2006
From: liu.bin21@zte.com.cn
Date: Tue, 31 Jan 2012 11:17:52 +0800
X-MIMETrack: Serialize by Router on notes_smtp/zte_ltd(Release 8.5.1FP4|July 25, 2010) at 2012-01-31 11:17:55, Serialize complete at 2012-01-31 11:17:55
Content-Type: multipart/alternative; boundary="=_alternative 00121E7148257996_="
X-MAIL: mse02.zte.com.cn q0V3JsoK047235
X-MSS: AUDITRELEASE@mse02.zte.com.cn
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 31 Jan 2012 03:20:07 -0000

This is a multipart message in MIME format.
--=_alternative 00121E7148257996_=
Content-Type: text/plain; charset="US-ASCII"

Dear Mr. Thomas:

Thank you for your comments! The draft currently mixes requirements that
have already been addressed with potential open problems, without drawing a
clear distinction between them. I will follow your advice: in the next
version I will indicate which problems have been resolved and which are
new, and then compile requirements from the problems we have identified.

--------------------------------------------------------
ZTE Information Security Notice: The information contained in this mail is solely property of the sender's organization. This mail communication is confidential. Recipients named above are obligated to maintain secrecy and are not permitted to disclose the contents of this communication to others.
This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the originator of the message. Any views expressed in this message are those of the individual sender.
This message has been scanned for viruses and Spam by ZTE Anti-Spam system.

--=_alternative 00121E7148257996_=--


From vumip1@gmail.com  Tue Jan 31 09:09:35 2012
Return-Path: <vumip1@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 2087F21F84FC for <dc@ietfa.amsl.com>; Tue, 31 Jan 2012 09:09:35 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.465
X-Spam-Level: 
X-Spam-Status: No, score=-3.465 tagged_above=-999 required=5 tests=[AWL=0.133,  BAYES_00=-2.599, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id UQ49iq+vTeU1 for <dc@ietfa.amsl.com>; Tue, 31 Jan 2012 09:09:33 -0800 (PST)
Received: from mail-iy0-f172.google.com (mail-iy0-f172.google.com [209.85.210.172]) by ietfa.amsl.com (Postfix) with ESMTP id 686E221F84F2 for <dc@ietf.org>; Tue, 31 Jan 2012 09:09:33 -0800 (PST)
Received: by iagf6 with SMTP id f6so268345iag.31 for <dc@ietf.org>; Tue, 31 Jan 2012 09:09:32 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=QXnlGltuPWaSGs9iygH8DQkW0/PXYCvyZBiCfpVtv6Q=; b=WPTNySIFtzqwRz125UvZEwCseaNUY8DqKMYcKKksdB8uxSngJEqg0vH129Z28NSxZX Zdmt5P2fPR9v1wXJY/Az2SI9K+RyAq749REj8gmb2zCj7kwm6LyQZnD+HLF5j71Vb8Ud eY3Sz2LhGqIeFp1Uh57s1ZwuNMHCqG67BQZFI=
MIME-Version: 1.0
Received: by 10.50.214.38 with SMTP id nx6mr2987516igc.19.1328029771317; Tue, 31 Jan 2012 09:09:31 -0800 (PST)
Received: by 10.50.140.102 with HTTP; Tue, 31 Jan 2012 09:09:31 -0800 (PST)
In-Reply-To: <7C4DFCE962635144B8FAE8CA11D0BF1E05A7CF1290@MX14A.corp.emc.com>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com> <CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com> <201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com> <CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com> <201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com> <1326989277.2513.4.camel@ecliptic.extremenetworks.com> <201201191747.q0JHlS5J015128@cichlid.raleigh.ibm.com> <618BE8B40039924EB9AED233D4A09C5102CB2304@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A7BB90E7@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102CB2326@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A7CF1290@MX14A.corp.emc.com>
Date: Tue, 31 Jan 2012 12:09:31 -0500
Message-ID: <CANtnpwi8eHBBFDohoMQBi7bSJWdvvqn4yas8A3GXm5+JCAi-qw@mail.gmail.com>
From: Bhumip Khasnabish <vumip1@gmail.com>
To: david.black@emc.com
Content-Type: multipart/alternative; boundary=14dae93404edc53ff704b7d606b1
Cc: adalela@cisco.com, dc@ietf.org
Subject: Re: [dc] OVF "control plane" - Not a good idea
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 31 Jan 2012 17:09:35 -0000

--14dae93404edc53ff704b7d606b1
Content-Type: text/plain; charset=ISO-8859-1

A New Internet-Draft is available from the on-line Internet-Drafts directories.

	Title           : Network Virtualization Overlay Control Protocol Requirements
	Author(s)       : Lawrence Kreeger
                          Dinesh Dutt
                          Thomas Narten
                          David Black
                          Murari Sridharan
	Filename        : draft-kreeger-nvo3-overlay-cp-00.txt
	Pages           : 13
	Date            : 2012-01-30

   The document draft-narten-nvo3-overlay-problem-statement-01 discusses
   the needs for network virtualization using overlay networks in highly
   virtualized data centers.  The problem statement outlines a need for
   control protocols to facilitate running these overlay networks.  This
   document outlines the high level requirements to be fulfilled by the
   control protocols.


A URL for this Internet-Draft is:
http://www.ietf.org/internet-drafts/draft-kreeger-nvo3-overlay-cp-00.txt

Internet-Drafts are also available by anonymous FTP at:
ftp://ftp.ietf.org/internet-drafts/

This Internet-Draft can be retrieved at:
ftp://ftp.ietf.org/internet-drafts/draft-kreeger-nvo3-overlay-cp-00.txt

On Fri, Jan 20, 2012 at 8:22 PM, <david.black@emc.com> wrote:

> Ashish,
>
> Unfortunately, this is digging in the "wrong place" because it recreates
> the
> problem that OVF was designed to solve.  OVF is intended to be a
> self-contained
> packaging and distribution format that contains everything needed to
> instantiate
> one or more VMs.  As such, OVF can be moved by all of the protocols noted
> below,
> plus a variety of other means, such as sneaker-net.
>
> If OVF is insufficient for the portability use case, then I suggest going
> to DMTF
> to work on adding what's missing instead of inventing a "control plane"
> that is
> at odds with OVF's design intent.
>
> Thanks,
> --David
>
> > -----Original Message-----
> > From: Ashish Dalela (adalela) [mailto:adalela@cisco.com]
> > Sent: Thursday, January 19, 2012 11:10 PM
> > To: Black, David
> > Cc: dc@ietf.org
> > Subject: RE: [dc] draft-khasnabish-vmmi-problems-00.txt
> >
> >
> > >> Was that supposed to be a serious question?
> >
> > Yes, it is a serious question, because VM mobility goes beyond the VM.
> >
> > >> If it was, I suggest FTP or NFS, both of which are already used to
> > move VM
> > >> images in practice, and are already specified in RFCs ;-).  OVF is
> > fundamentally
> > >> a VM image format.
> >
> > That's one approach. Another approach is to use SOAP/REST APIs. Yet
> > another one is to define a cloud control plane that does more than just
> > move VMs. E.g. when you move a VM, you have to move the firewall rules,
> > the VLAN association, the bandwidth, VRF configuration, GRE tunnel
> > configuration, etc.
> >
> > Thanks, Ashish
> >
> >
> > -----Original Message-----
> > From: david.black@emc.com [mailto:david.black@emc.com]
> > Sent: Friday, January 20, 2012 8:18 AM
> > To: Ashish Dalela (adalela)
> > Cc: dc@ietf.org
> > Subject: RE: [dc] draft-khasnabish-vmmi-problems-00.txt
> >
> > > - Do we need a "control plane" to transfer OVF specification from
> > point
> > > A to B - the portability problem?
> >
> > Was that supposed to be a serious question?
> >
> > If it was, I suggest FTP or NFS, both of which are already used to move
> > VM
> > images in practice, and are already specified in RFCs ;-).  OVF is
> > fundamentally
> > a VM image format.
> >
> > Thanks,
> > --David
> > ----------------------------------------------------
> > David L. Black, Distinguished Engineer
> > EMC Corporation, 176 South St., Hopkinton, MA  01748
> > +1 (508) 293-7953             FAX: +1 (508) 293-7786
> > david.black@emc.com        Mobile: +1 (978) 394-7754
> > ----------------------------------------------------
> > ________________________________________
> > From: dc-bounces@ietf.org [dc-bounces@ietf.org] On Behalf Of Ashish
> > Dalela (adalela) [adalela@cisco.com]
> > Sent: Thursday, January 19, 2012 8:20 PM
> > To: Thomas Narten; Steven Blake
> > Cc: dc@ietf.org
> > Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
> >
> > I think it is fair to say that there is a difference between mobility
> > and portability. Mobility is live migration, but portability is
> > specifying a VM's properties, delete in one location and create in
> > another. The new location can be another hypervisor. In many cases, you
> > don't need mobility, just portability. E.g. if you have a disaster
> > recovery situation, then you aren't going to get mobility anyway.
> >
> > DMTF has specified a standard called OVF (Open Virtualization Format)
> > that addresses the "description" of the VM. This format is supported by
> > various hypervisor vendors. So, some level of VM migration
> > standardization has already happened (albeit portability and not
> > mobility).
> >
> > The questions are:
> >
> > - Do we need a "control plane" to transfer VM state from point A to B -
> > the mobility problem?
> > - Do we need a "control plane" to transfer OVF specification from point
> > A to B - the portability problem?
> >
> > The problem is relevant in the inter-datacenter, public-private, or
> > inter-cloud spaces, where there will be more than one hypervisor
> > controller by definition. Are we hitting the live migration issue today?
> > Maybe not. Is it conceivable that we will hit this issue? I think so.
> >
> > However, the question has to be asked of the providers/operators and
> > not of the vendors.
> >
> > Thanks, Ashish
> >
> >
> > -----Original Message-----
> > From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> > Thomas Narten
> > Sent: Thursday, January 19, 2012 11:17 PM
> > To: Steven Blake
> > Cc: dc@ietf.org
> > Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
> >
> > Steven,
> >
> > > Several system vendors (myself included) stood up in Taipei and said
> > > "one encapsulation, please".  If IETF can facilitate industry
> > > convergence on a small set of NVO3 encapsulations (preferably one),
> > that
> > > would be a big win for Ethernet switch vendors.
> >
> > I agree completely.
> >
> > But my questions were asking about the apparent lack of interest from
> > operators/implementers/market players regarding Bhumip's draft and the
> > apparent desire to have some sort of standards work related to the
> > general VM migration problem.
> >
> > Is there such interest?
> >
> > Thomas
> >
> > _______________________________________________
> > dc mailing list
> > dc@ietf.org
> > https://www.ietf.org/mailman/listinfo/dc
> > _______________________________________________
> > dc mailing list
> > dc@ietf.org
> > https://www.ietf.org/mailman/listinfo/dc
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc
>

--14dae93404edc53ff704b7d606b1--

From david.black@emc.com  Tue Jan 31 09:25:05 2012
Return-Path: <david.black@emc.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 24A3321F8600 for <dc@ietfa.amsl.com>; Tue, 31 Jan 2012 09:25:05 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -109.322
X-Spam-Level: 
X-Spam-Status: No, score=-109.322 tagged_above=-999 required=5 tests=[AWL=1.276, BAYES_00=-2.599, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_HI=-8, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id ZyfHP6ZxbSDV for <dc@ietfa.amsl.com>; Tue, 31 Jan 2012 09:25:01 -0800 (PST)
Received: from mexforward.lss.emc.com (mexforward.lss.emc.com [128.222.32.20]) by ietfa.amsl.com (Postfix) with ESMTP id 6823A21F85F0 for <dc@ietf.org>; Tue, 31 Jan 2012 09:25:01 -0800 (PST)
Received: from hop04-l1d11-si02.isus.emc.com (HOP04-L1D11-SI02.isus.emc.com [10.254.111.55]) by mexforward.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q0VHOuXf002165 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO); Tue, 31 Jan 2012 12:24:57 -0500
Received: from mailhub.lss.emc.com (mailhub.lss.emc.com [10.254.222.130]) by hop04-l1d11-si02.isus.emc.com (RSA Interceptor); Tue, 31 Jan 2012 12:24:41 -0500
Received: from mxhub03.corp.emc.com (mxhub03.corp.emc.com [10.254.141.105]) by mailhub.lss.emc.com (Switch-3.4.3/Switch-3.4.3) with ESMTP id q0VHOfwM006050; Tue, 31 Jan 2012 12:24:41 -0500
Received: from mx14a.corp.emc.com ([169.254.1.94]) by mxhub03.corp.emc.com ([10.254.141.105]) with mapi; Tue, 31 Jan 2012 12:24:40 -0500
From: <david.black@emc.com>
To: <vumip1@gmail.com>
Date: Tue, 31 Jan 2012 12:24:40 -0500
Thread-Topic: [dc] OVF "control plane" - Not a good idea
Thread-Index: AczgOzAWxjO01PWTQ6yUQnbpuZ2vwQAAcyhw
Message-ID: <7C4DFCE962635144B8FAE8CA11D0BF1E05AD13AF5A@MX14A.corp.emc.com>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com> <CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com> <201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com> <CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com> <201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com> <1326989277.2513.4.camel@ecliptic.extremenetworks.com> <201201191747.q0JHlS5J015128@cichlid.raleigh.ibm.com> <618BE8B40039924EB9AED233D4A09C5102CB2304@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A7BB90E7@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102CB2326@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A7CF1290@MX14A.corp.emc.com> <CANtnpwi8eHBBFDohoMQBi7bSJWdvvqn4yas8A3GXm5+JCAi-qw@mail.gmail.com>
In-Reply-To: <CANtnpwi8eHBBFDohoMQBi7bSJWdvvqn4yas8A3GXm5+JCAi-qw@mail.gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
acceptlanguage: en-US
Content-Type: multipart/alternative; boundary="_000_7C4DFCE962635144B8FAE8CA11D0BF1E05AD13AF5AMX14Acorpemcc_"
MIME-Version: 1.0
X-EMM-MHVC: 1
Cc: adalela@cisco.com, dc@ietf.org
Subject: Re: [dc] OVF "control plane" - Not a good idea
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 31 Jan 2012 17:25:05 -0000

--_000_7C4DFCE962635144B8FAE8CA11D0BF1E05AD13AF5AMX14Acorpemcc_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

OVF is not mentioned anywhere in that draft because:

"OVF is intended to be a self-contained packaging and distribution format"

OVF is not generally used as a runtime execution format for VMs.

Thanks,
--David (draft co-author)
----------------------------------------------------
David L. Black, Distinguished Engineer
EMC Corporation, 176 South St., Hopkinton, MA  01748
+1 (508) 293-7953             FAX: +1 (508) 293-7786
david.black@emc.com        Mobile: +1 (978) 394-7754
----------------------------------------------------

From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of Bhumip =
Khasnabish
Sent: Tuesday, January 31, 2012 12:10 PM
To: Black, David
Cc: adalela@cisco.com; dc@ietf.org
Subject: Re: [dc] OVF "control plane" - Not a good idea


A New Internet-Draft is available from the on-line Internet-Drafts director=
ies.



       Title           : Network Virtualization Overlay Control Protocol Re=
quirements

       Author(s)       : Lawrence Kreeger

                          Dinesh Dutt

                          Thomas Narten

                          David Black

                          Murari Sridharan

       Filename        : draft-kreeger-nvo3-overlay-cp-00.txt

       Pages           : 13

       Date            : 2012-01-30



   The document draft-narten-nvo3-overlay-problem-statement-01 discusses

   the needs for network virtualization using overlay networks in highly

   virtualized data centers.  The problem statement outlines a need for

   control protocols to facilitate running these overlay networks.  This

   document outlines the high level requirements to be fulfilled by the

   control protocols.





A URL for this Internet-Draft is:

http://www.ietf.org/internet-drafts/draft-kreeger-nvo3-overlay-cp-00.txt



Internet-Drafts are also available by anonymous FTP at:

ftp://ftp.ietf.org/internet-drafts/



This Internet-Draft can be retrieved at:

ftp://ftp.ietf.org/internet-drafts/draft-kreeger-nvo3-overlay-cp-00.txt



I-D Action: draft-kreeger-nvo3-overlay-cp-00.txt
________________________________

 *   To: i-d-announce at ietf.org<mailto:i-d-announce@DOMAIN.HIDDEN>
 *   Subject: I-D Action: draft-kreeger-nvo3-overlay-cp-00.txt
 *   From: internet-drafts at ietf.org<mailto:internet-drafts@DOMAIN.HIDDEN=
>
 *   Date: Mon, 30 Jan 2012 11:39:16 -0800
 *   Delivered-to: i-d-announce at ietfa.amsl.com<mailto:i-d-announce@DOMAI=
N.HIDDEN>
 *   List-archive: <http://www.ietf.org/mail-archive/web/i-d-announce>
 *   List-help: <mailto:i-d-announce-request@ietf.org?subject=3Dhelp>
 *   List-id: Internet Draft Announcements only <i-d-announce.ietf.org<http=
://i-d-announce.ietf.org>>
 *   List-post: <mailto:i-d-announce@ietf.org>
 *   List-subscribe: <https://www.ietf.org/mailman/listinfo/i-d-announce>, =
<mailto:i-d-announce-request@ietf.org?subject=3Dsubscribe>
 *   List-unsubscribe: <https://www.ietf.org/mailman/options/i-d-announce>,=
 <mailto:i-d-announce-request@ietf.org?subject=3Dunsubscribe>
 *   Reply-to: internet-drafts at ietf.org<mailto:internet-drafts@DOMAIN.HI=
DDEN>

________________________________

A New Internet-Draft is available from the on-line Internet-Drafts director=
ies.



       Title           : Network Virtualization Overlay Control Protocol Requirements

       Author(s)       : Lawrence Kreeger
                          Dinesh Dutt
                          Thomas Narten
                          David Black
                          Murari Sridharan
       Filename        : draft-kreeger-nvo3-overlay-cp-00.txt
       Pages           : 13
       Date            : 2012-01-30


On Fri, Jan 20, 2012 at 8:22 PM, <david.black@emc.com<mailto:david.black@emc.com>> wrote:
Ashish,

Unfortunately, this is digging in the "wrong place" because it recreates the
problem that OVF was designed to solve.  OVF is intended to be a self-contained
packaging and distribution format that contains everything needed to instantiate
one or more VMs.  As such, OVF can be moved by all of the protocols noted below,
plus a variety of other means, such as sneaker-net.

If OVF is insufficient for the portability use case, then I suggest going to DMTF
to work on adding what's missing instead of inventing a "control plane" that is
at odds with OVF's design intent.
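To make the "self-contained packaging" point concrete: an OVA is just a tar archive bundling the OVF descriptor (XML) with its disk images, which is why any file-transfer mechanism can move it. A minimal sketch using Python's standard library follows; the file names and contents are illustrative only, not taken from the thread.

```python
import io
import tarfile

# An OVA is simply a tar archive: one OVF descriptor plus disk images.
# Both payloads below are stand-ins (a stub Envelope and a zeroed "disk").
descriptor = b"<?xml version='1.0'?><Envelope xmlns='http://schemas.dmtf.org/ovf/envelope/1'/>"
disk = b"\x00" * 512  # stand-in for a real VMDK/QCOW2 image

with tarfile.open("appliance.ova", "w") as tar:
    for name, payload in [("appliance.ovf", descriptor), ("disk1.vmdk", disk)]:
        info = tarfile.TarInfo(name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))

# Anything that can copy "appliance.ova" can move the whole VM description.
with tarfile.open("appliance.ova") as tar:
    print(tar.getnames())  # ['appliance.ovf', 'disk1.vmdk']
```

Because the package is one ordinary file, FTP, NFS, or "sneaker-net" all work equally well, which is the crux of David's argument.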

Thanks,
--David

> -----Original Message-----
> From: Ashish Dalela (adalela) [mailto:adalela@cisco.com<mailto:adalela@cisco.com>]
> Sent: Thursday, January 19, 2012 11:10 PM
> To: Black, David
> Cc: dc@ietf.org<mailto:dc@ietf.org>
> Subject: RE: [dc] draft-khasnabish-vmmi-problems-00.txt
>
>
> >> Was that supposed to be a serious question?
>
> Yes, it is a serious question, because VM mobility goes beyond the VM.
>
> >> If it was, I suggest FTP or NFS, both of which are already used to
> move VM
> >> images in practice, and are already specified in RFCs ;-).  OVF is
> fundamentally
> >> a VM image format.
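The "just use FTP" suggestion above can be sketched in a few lines: since an OVF/OVA package is an ordinary file, moving it needs nothing beyond a standard client. A hedged illustration with Python's ftplib; the host, credentials, and file name are placeholders, not real endpoints.

```python
from ftplib import FTP

def upload_image(host, user, password, path="appliance.ova"):
    """Push a VM package to an FTP server; the package is just a file."""
    with FTP(host) as ftp:          # connect to the (placeholder) server
        ftp.login(user, password)   # authenticate
        with open(path, "rb") as f:
            ftp.storbinary(f"STOR {path}", f)  # binary-mode upload
```

The same point holds for NFS: a `cp` onto a mounted export moves the package with no new protocol work at all.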
>
> That's one approach. Another approach is to use SOAP/REST APIs. Yet
> another is to define a cloud control plane that does more than just
> move VMs. E.g., when you move a VM, you have to move the firewall rules,
> the VLAN association, the bandwidth, VRF configuration, GRE tunnel
> configuration, etc.
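The network state Ashish enumerates can be made concrete as a bundle that travels with the VM description. A hedged sketch follows; all field names are hypothetical and do not come from any standard.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only: a "portability bundle" pairing a VM description with
# the network state listed above (firewall, VLAN, bandwidth, VRF, GRE).
@dataclass
class NetworkState:
    firewall_rules: List[str] = field(default_factory=list)
    vlan_id: int = 0
    bandwidth_mbps: int = 0
    vrf_name: str = ""
    gre_tunnels: List[str] = field(default_factory=list)

@dataclass
class PortabilityBundle:
    ovf_descriptor: str   # path to (or inline text of) the OVF document
    net: NetworkState     # state the OVF format does not carry

bundle = PortabilityBundle(
    ovf_descriptor="appliance.ovf",
    net=NetworkState(firewall_rules=["allow tcp/443"], vlan_id=100,
                     bandwidth_mbps=1000, vrf_name="tenant-a",
                     gre_tunnels=["gre0 -> 192.0.2.1"]),
)
print(bundle.net.vlan_id)  # 100
```

The disagreement in the thread is precisely whether this extra state belongs in an extended OVF (DMTF's problem) or in a new control plane (the IETF proposal).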
>
> Thanks, Ashish
>
>
> -----Original Message-----
> From: david.black@emc.com<mailto:david.black@emc.com> [mailto:david.black@emc.com<mailto:david.black@emc.com>]
> Sent: Friday, January 20, 2012 8:18 AM
> To: Ashish Dalela (adalela)
> Cc: dc@ietf.org<mailto:dc@ietf.org>
> Subject: RE: [dc] draft-khasnabish-vmmi-problems-00.txt
>
> > - Do we need a "control plane" to transfer OVF specification from
> point
> > A to B - the portability problem?
>
> Was that supposed to be a serious question?
>
> If it was, I suggest FTP or NFS, both of which are already used to move
> VM
> images in practice, and are already specified in RFCs ;-).  OVF is
> fundamentally
> a VM image format.
>
> Thanks,
> --David
> ----------------------------------------------------
> David L. Black, Distinguished Engineer
> EMC Corporation, 176 South St., Hopkinton, MA  01748
> +1 (508) 293-7953<tel:%2B1%20%28508%29%20293-7953>             FAX: +1 (508) 293-7786<tel:%2B1%20%28508%29%20293-7786>
> david.black@emc.com<mailto:david.black@emc.com>        Mobile: +1 (978) 394-7754<tel:%2B1%20%28978%29%20394-7754>
> ----------------------------------------------------
> ________________________________________
> From: dc-bounces@ietf.org<mailto:dc-bounces@ietf.org> [dc-bounces@ietf.org<mailto:dc-bounces@ietf.org>] On Behalf Of Ashish
> Dalela (adalela) [adalela@cisco.com<mailto:adalela@cisco.com>]
> Sent: Thursday, January 19, 2012 8:20 PM
> To: Thomas Narten; Steven Blake
> Cc: dc@ietf.org<mailto:dc@ietf.org>
> Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
>
> I think it is fair to say that there is a difference between mobility
> and portability. Mobility is live migration, but portability is
> specifying a VM's properties, deleting it in one location, and creating
> it in another. The new location can be another hypervisor. In many
> cases, you don't need mobility, just portability. E.g., if you have a
> disaster recovery situation, then you aren't going to get mobility anyway.
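Portability as described here, i.e. describe, delete at the source, re-create at the destination, can be sketched as a three-step workflow. The hypervisor API below is entirely hypothetical; it only illustrates that the VM's description moves while its running state does not.

```python
# Hypothetical illustration of portability (not live migration).
def port_vm(name, source, destination):
    spec = source.export_description(name)     # e.g. an OVF document
    source.delete(name)                        # VM ceases to exist at A
    destination.create_from_description(spec)  # fresh instance at B
    return spec

class FakeHypervisor:
    """Toy stand-in for a hypervisor controller's inventory."""
    def __init__(self):
        self.vms = {}
    def export_description(self, name):
        return self.vms[name]
    def delete(self, name):
        del self.vms[name]
    def create_from_description(self, spec):
        self.vms[spec["name"]] = spec

a, b = FakeHypervisor(), FakeHypervisor()
a.vms["web01"] = {"name": "web01", "cpus": 2}
port_vm("web01", a, b)
print(sorted(a.vms), sorted(b.vms))  # [] ['web01']
```

Note what the sketch does not carry: memory and device state, which is exactly why portability suffices for disaster recovery but live migration does not reduce to it.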
>
> DMTF has specified a standard called OVF (Open Virtualization Format)
> that addresses the "description" of the VM. This format is supported by
> various hypervisor vendors. So, some level of VM migration
> standardization has already happened (albeit portability and not
> mobility).
>
> The questions are:
>
> - Do we need a "control plane" to transfer VM state from point A to B -
> the mobility problem?
> - Do we need a "control plane" to transfer OVF specification from point
> A to B - the portability problem?
>
> The problem is relevant in the inter-datacenter, public-private, or
> inter-cloud spaces, where there will be more than one hypervisor
> controller by definition. Are we hitting the live migration issue today?
> Maybe not. Is it conceivable that we will hit this issue? I think so.
>
> However, the question has to be asked to the provider/operators and not
> to the vendors.
>
> Thanks, Ashish
>
>
> -----Original Message-----
> From: dc-bounces@ietf.org<mailto:dc-bounces@ietf.org> [mailto:dc-bounces@ietf.org<mailto:dc-bounces@ietf.org>] On Behalf Of
> Thomas Narten
> Sent: Thursday, January 19, 2012 11:17 PM
> To: Steven Blake
> Cc: dc@ietf.org<mailto:dc@ietf.org>
> Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
>
> Steven,
>
> > Several system vendors (myself included) stood up in Taipei and said
> > "one encapsulation, please".  If IETF can facilitate industry
> > convergence on a small set of NVO3 encapsulations (preferably one),
> that
> > would be a big win for Ethernet switch vendors.
>
> I agree completely.
>
> But my questions were asking about the apparent lack of interest from
> operators/implementers/market players regarding Bhumip's draft and the
> apparent desire to have some sort of standards work related to the
> general VM migration problem.
>
> Is there such interest?
>
> Thomas
>
> _______________________________________________
> dc mailing list
> dc@ietf.org<mailto:dc@ietf.org>
> https://www.ietf.org/mailman/listinfo/dc

_______________________________________________
dc mailing list
dc@ietf.org<mailto:dc@ietf.org>
https://www.ietf.org/mailman/listinfo/dc






From vumip1@gmail.com  Tue Jan 31 09:28:02 2012
Return-Path: <vumip1@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 9E66A11E8080 for <dc@ietfa.amsl.com>; Tue, 31 Jan 2012 09:28:02 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -3.174
X-Spam-Level: 
X-Spam-Status: No, score=-3.174 tagged_above=-999 required=5 tests=[AWL=-0.176, BAYES_00=-2.599, HTML_MESSAGE=0.001, J_CHICKENPOX_32=0.6, RCVD_IN_DNSWL_LOW=-1]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id qZQBMqE2Vrr2 for <dc@ietfa.amsl.com>; Tue, 31 Jan 2012 09:28:00 -0800 (PST)
Received: from mail-iy0-f172.google.com (mail-iy0-f172.google.com [209.85.210.172]) by ietfa.amsl.com (Postfix) with ESMTP id 90AB011E8075 for <dc@ietf.org>; Tue, 31 Jan 2012 09:28:00 -0800 (PST)
Received: by iagf6 with SMTP id f6so291060iag.31 for <dc@ietf.org>; Tue, 31 Jan 2012 09:28:00 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; bh=5iCoRlHZeme0xaradZpbdUd4nF94cjXGVIOUAAAiEqA=; b=RefBMF8FwoT4MR0GUVoBLivrhS2BVbub/sdsPDVMIVB9APkRcKZwdJBbDNjsY1ODtg LK74yO3j265Yocm63ltYIsKXYbgq9t5Xgbn5s03w5FFvqp/8elQQv9w0zGs0xHtms1qh LLy7ikT5iZxU9bzeGKCBp0iwLrumxCdmpDL00=
MIME-Version: 1.0
Received: by 10.42.155.70 with SMTP id t6mr18409649icw.11.1328030880176; Tue, 31 Jan 2012 09:28:00 -0800 (PST)
Received: by 10.50.140.102 with HTTP; Tue, 31 Jan 2012 09:28:00 -0800 (PST)
In-Reply-To: <7C4DFCE962635144B8FAE8CA11D0BF1E05AD13AF5A@MX14A.corp.emc.com>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com> <CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com> <201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com> <CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com> <201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com> <1326989277.2513.4.camel@ecliptic.extremenetworks.com> <201201191747.q0JHlS5J015128@cichlid.raleigh.ibm.com> <618BE8B40039924EB9AED233D4A09C5102CB2304@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A7BB90E7@MX14A.corp.emc.com> <618BE8B40039924EB9AED233D4A09C5102CB2326@XMB-BGL-416.cisco.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05A7CF1290@MX14A.corp.emc.com> <CANtnpwi8eHBBFDohoMQBi7bSJWdvvqn4yas8A3GXm5+JCAi-qw@mail.gmail.com> <7C4DFCE962635144B8FAE8CA11D0BF1E05AD13AF5A@MX14A.corp.emc.com>
Date: Tue, 31 Jan 2012 12:28:00 -0500
Message-ID: <CANtnpwi=nsgWfy9ExjaovtUCH4Z0Qjq5q0rfNUyKKcvjW2edDw@mail.gmail.com>
From: Bhumip Khasnabish <vumip1@gmail.com>
To: david.black@emc.com
Content-Type: multipart/alternative; boundary=90e6ba1eff4add1c4704b7d64844
Cc: adalela@cisco.com, dc@ietf.org
Subject: Re: [dc] OVF "control plane" - Not a good idea
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 31 Jan 2012 17:28:02 -0000

--90e6ba1eff4add1c4704b7d64844
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: quoted-printable

From: Bhumip Khasnabish <vumip1@gmail.com>
To: Christopher LILJENSTOLPE <ietf@cdl.asgaard.org>
Cc: Thomas Narten <narten@us.ibm.com>, Ronald Bonica <rbonica@juniper.net>,
dc@ietf.org, "So, Ning" <ning.so@verizon.com>
Date: Fri, Dec 30, 2011 at 12:56 PM
Subject: Re: [dc] Elevator Pitch (was: Scoping the Interim meeting)

Hello Chris,

Thank you very much for your comments and suggestions.

If you do not mind, please provide write-ups, as appropriate, on the Amazon
API, ONF, OpenFlow, OpenStack, etc. that you mention below, for inclusion
in the next version of the SDO survey draft.

For the work item survey draft, the intention is to provide a list. We have
seen many follow-up drafts during the last two IETF meetings articulating
problems and possible solutions (VDCS, VPN4DC, VEPC, VPN-O-CS, VPN-O-DCS,
VRM, security framework for VDCS, etc.) from both service providers and
solution providers.

We'll be updating this draft soon, and can work on providing priority, if
that is helpful.

*If we want to work on one near-term high-priority work item, maybe
virtual machine (and virtual network element) mobility and interconnection
can be the focus (maybe we need to use distributed control, distributed
hash, etc.).*
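The "distributed hash" idea mentioned above could be sketched as a consistent-hash ring that maps a VM identifier to the controller node responsible for tracking its current location. This is purely illustrative and not from any draft: the `VmLocator` class, the controller names, and the `vm-1234` identifier are all invented for the sketch, and a real system would add virtual nodes, replication, and failure handling.

```python
import hashlib
from bisect import bisect_right

class VmLocator:
    """Toy consistent-hash ring: VM ID -> controller node owning its record."""

    def __init__(self, nodes):
        # Place each controller node on the hash ring, sorted by hash value.
        self._ring = sorted((self._hash(n), n) for n in nodes)

    @staticmethod
    def _hash(key):
        # Stable hash so every participant computes the same placement.
        return int(hashlib.sha256(key.encode()).hexdigest(), 16)

    def locate(self, vm_id):
        # The first node clockwise from the VM's hash owns its record.
        h = self._hash(vm_id)
        keys = [k for k, _ in self._ring]
        i = bisect_right(keys, h) % len(self._ring)
        return self._ring[i][1]

locator = VmLocator(["ctrl-a", "ctrl-b", "ctrl-c"])
print(locator.locate("vm-1234"))  # one of the three controller names
```

Because the lookup key is the VM's identity rather than its network location, a migrated VM's record stays on the same ring node; only the record's contents (the new host) change, which is what makes the lookup location-independent.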

Any others?

Thanks and best wishes.

Bhumip
==========================================================

On Tue, Jan 31, 2012 at 12:24 PM, <david.black@emc.com> wrote:

>  OVF is not mentioned anywhere in that draft because:
>
> "OVF is intended to be a self-contained packaging and distribution format"
>
> OVF is not generally used as a runtime execution format for VMs.
>
> Thanks,
> --David (draft co-author)
>
> ----------------------------------------------------
> David L. Black, Distinguished Engineer
> EMC Corporation, 176 South St., Hopkinton, MA  01748
> +1 (508) 293-7953             FAX: +1 (508) 293-7786
> david.black@emc.com        Mobile: +1 (978) 394-7754
> ----------------------------------------------------
>
> *From:* dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] *On Behalf Of* Bhumip
> Khasnabish
> *Sent:* Tuesday, January 31, 2012 12:10 PM
> *To:* Black, David
> *Cc:* adalela@cisco.com; dc@ietf.org
> *Subject:* Re: [dc] OVF "control plane" - Not a good idea
>
>
> A New Internet-Draft is available from the on-line Internet-Drafts directories.
>
>        Title           : Network Virtualization Overlay Control Protocol Requirements
>        Author(s)       : Lawrence Kreeger
>                          Dinesh Dutt
>                          Thomas Narten
>                          David Black
>                          Murari Sridharan
>        Filename        : draft-kreeger-nvo3-overlay-cp-00.txt
>        Pages           : 13
>        Date            : 2012-01-30
>
>    The document draft-narten-nvo3-overlay-problem-statement-01 discusses
>    the needs for network virtualization using overlay networks in highly
>    virtualized data centers.  The problem statement outlines a need for
>    control protocols to facilitate running these overlay networks.  This
>    document outlines the high level requirements to be fulfilled by the
>    control protocols.
>
> A URL for this Internet-Draft is:
> http://www.ietf.org/internet-drafts/draft-kreeger-nvo3-overlay-cp-00.txt
>
> Internet-Drafts are also available by anonymous FTP at:
> ftp://ftp.ietf.org/internet-drafts/
>
> This Internet-Draft can be retrieved at:
> ftp://ftp.ietf.org/internet-drafts/draft-kreeger-nvo3-overlay-cp-00.txt
>
> I-D Action: draft-kreeger-nvo3-overlay-cp-00.txt
> ------------------------------
>
>    - To: i-d-announce at ietf.org <i-d-announce@DOMAIN.HIDDEN>
>    - Subject: I-D Action: draft-kreeger-nvo3-overlay-cp-00.txt
>    - From: internet-drafts at ietf.org <internet-drafts@DOMAIN.HIDDEN>
>    - Date: Mon, 30 Jan 2012 11:39:16 -0800
>    - Delivered-to: i-d-announce at ietfa.amsl.com <i-d-announce@DOMAIN.HIDDEN>
>    - List-archive: <http://www.ietf.org/mail-archive/web/i-d-announce>
>    - List-help: <mailto:i-d-announce-request@ietf.org?subject=help>
>    - List-id: Internet Draft Announcements only <i-d-announce.ietf.org>
>    - List-post: <mailto:i-d-announce@ietf.org>
>    - List-subscribe: <https://www.ietf.org/mailman/listinfo/i-d-announce>, <mailto:i-d-announce-request@ietf.org?subject=subscribe>
>    - List-unsubscribe: <https://www.ietf.org/mailman/options/i-d-announce>, <mailto:i-d-announce-request@ietf.org?subject=unsubscribe>
>    - Reply-to: internet-drafts at ietf.org <internet-drafts@DOMAIN.HIDDEN>
>
> ------------------------------
>
>
>  On Fri, Jan 20, 2012 at 8:22 PM, <david.black@emc.com> wrote:
>
> Ashish,
>
> Unfortunately, this is digging in the "wrong place" because it recreates
> the
> problem that OVF was designed to solve.  OVF is intended to be a
> self-contained
> packaging and distribution format that contains everything needed to
> instantiate
> one or more VMs.  As such, OVF can be moved by all of the protocols noted
> below,
> plus a variety of other means, such as sneaker-net.
>
> If OVF is insufficient for the portability use case, then I suggest going
> to DMTF
> to work on adding what's missing instead of inventing a "control plane"
> that is
> at odds with OVF's design intent.
>
> Thanks,
> --David
>
> > -----Original Message-----
> > From: Ashish Dalela (adalela) [mailto:adalela@cisco.com]
> > Sent: Thursday, January 19, 2012 11:10 PM
> > To: Black, David
> > Cc: dc@ietf.org
> > Subject: RE: [dc] draft-khasnabish-vmmi-problems-00.txt
> >
> >
> > >> Was that supposed to be a serious question?
> >
> > Yes, it is a serious question, because VM mobility goes beyond the VM.
> >
> > >> If it was, I suggest FTP or NFS, both of which are already used to
> > move VM
> > >> images in practice, and are already specified in RFCs ;-).  OVF is
> > fundamentally
> > >> a VM image format.
> >
> > That's one approach. Another approach is to use SOAP/REST APIs. Yet
> > another one is to define a cloud control plane that does more than just
> > move VMs. E.g. when you move a VM, you have to move the firewall rules,
> > the VLAN association, the bandwidth, VRF configuration, GRE tunnel
> > configuration, etc.
> >
> > Thanks, Ashish
> >
> >
> > -----Original Message-----
> > From: david.black@emc.com [mailto:david.black@emc.com]
> > Sent: Friday, January 20, 2012 8:18 AM
> > To: Ashish Dalela (adalela)
> > Cc: dc@ietf.org
> > Subject: RE: [dc] draft-khasnabish-vmmi-problems-00.txt
> >
> > > - Do we need a "control plane" to transfer OVF specification from
> > point
> > > A to B - the portability problem?
> >
> > Was that supposed to be a serious question?
> >
> > If it was, I suggest FTP or NFS, both of which are already used to move
> > VM
> > images in practice, and are already specified in RFCs ;-).  OVF is
> > fundamentally
> > a VM image format.
> >
> > Thanks,
> > --David
> > ----------------------------------------------------
> > David L. Black, Distinguished Engineer
> > EMC Corporation, 176 South St., Hopkinton, MA  01748
> > +1 (508) 293-7953             FAX: +1 (508) 293-7786
> > david.black@emc.com        Mobile: +1 (978) 394-7754
> > ----------------------------------------------------
> > ________________________________________
> > From: dc-bounces@ietf.org [dc-bounces@ietf.org] On Behalf Of Ashish
> > Dalela (adalela) [adalela@cisco.com]
> > Sent: Thursday, January 19, 2012 8:20 PM
> > To: Thomas Narten; Steven Blake
> > Cc: dc@ietf.org
> > Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
> >
> > I think it is fair to say that there is a difference between mobility
> > and portability. Mobility is live migration, but portability is
> > specifying a VM's properties, delete in one location and create in
> > another. The new location can be another hypervisor. In many cases, you
> > don't need mobility, just portability. E.g. if you have a disaster
> > recovery situation, then you aren't going to get mobility anyway.
> >
> > DMTF has specified a standard called OVF (Open Virtualization Format)
> > that addresses the "description" of the VM. This format is supported by
> > various hypervisor vendors. So, some level of VM migration
> > standardization has already happened (albeit portability and not
> > mobility).
> >
> > The questions are:
> >
> > - Do we need a "control plane" to transfer VM state from point A to B -
> > the mobility problem?
> > - Do we need a "control plane" to transfer OVF specification from point
> > A to B - the portability problem?
> >
> > The problem is relevant in the inter-datacenter, public-private, or
> > inter-cloud spaces, where there will be more than one hypervisor
> > controller by definition. Are we hitting the live migration issue today?
> > Maybe not. Is it conceivable that we will hit this issue? I think so.
> >
> > However, the question has to be asked to the provider/operators and not
> > to the vendors.
> >
> > Thanks, Ashish
> >
> >
> > -----Original Message-----
> > From: dc-bounces@ietf.org [mailto:dc-bounces@ietf.org] On Behalf Of
> > Thomas Narten
> > Sent: Thursday, January 19, 2012 11:17 PM
> > To: Steven Blake
> > Cc: dc@ietf.org
> > Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
> >
> > Steven,
> >
> > > Several system vendors (myself included) stood up in Taipei and said
> > > "one encapsulation, please".  If IETF can facilitate industry
> > > convergence on a small set of NVO3 encapsulations (preferably one),
> > that
> > > would be a big win for Ethernet switch vendors.
> >
> > I agree completely.
> >
> > But my questions were asking about the apparent lack of  interest from
> > operators/implementers/market players regarding Bhumip's draft and the
> > apparent desire to have some sort of standards work related to the
> > general VM migration problem.
> >
> > Is there such interest?
> >
> > Thomas
> >
> > _______________________________________________
> > dc mailing list
> > dc@ietf.org
> > https://www.ietf.org/mailman/listinfo/dc
> > _______________________________________________
> > dc mailing list
> > dc@ietf.org
> > https://www.ietf.org/mailman/listinfo/dc
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc

dc mailing list<br><a href=3D"mailto:dc@ietf.org" target=3D"_blank">dc@ietf=
.org</a><br><a href=3D"https://www.ietf.org/mailman/listinfo/dc" target=3D"=
_blank">https://www.ietf.org/mailman/listinfo/dc</a><u></u><u></u></p></div=
>
<p class=3D"MsoNormal"><br><br clear=3D"all"><br>=A0 <u></u><u></u></p></di=
v></div></div></p></div></div></blockquote></div><br><br clear=3D"all"><br>=
=A0

--90e6ba1eff4add1c4704b7d64844--

From kreeger@cisco.com  Tue Jan 31 10:24:22 2012
Return-Path: <kreeger@cisco.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 3389711E8072 for <dc@ietfa.amsl.com>; Tue, 31 Jan 2012 10:24:22 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -4.002
X-Spam-Level: 
X-Spam-Status: No, score=-4.002 tagged_above=-999 required=5 tests=[BAYES_00=-2.599, HTML_MESSAGE=0.001, J_CHICKENPOX_13=0.6, J_CHICKENPOX_32=0.6, MIME_QP_LONG_LINE=1.396, RCVD_IN_DNSWL_MED=-4]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id etO47SyDyiDX for <dc@ietfa.amsl.com>; Tue, 31 Jan 2012 10:24:21 -0800 (PST)
Received: from mtv-iport-3.cisco.com (mtv-iport-3.cisco.com [173.36.130.14]) by ietfa.amsl.com (Postfix) with ESMTP id 73D9F21F84F8 for <dc@ietf.org>; Tue, 31 Jan 2012 10:24:21 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=cisco.com; i=kreeger@cisco.com; l=4391; q=dns/txt; s=iport; t=1328034261; x=1329243861; h=date:subject:from:to:message-id:in-reply-to:mime-version; bh=L6fw349fnPJBmyESJpm+xP4k0Hz2uPtQF9hTqX6dopQ=; b=ImEUrQSSN8dak8UTdtZmBQQ+LFERrReojKy1avTIntaGjmZPLaGkCeZ+ jYvyZ28UrNHxQ3r7PIprLY7XUh6EhBwvNtWohH67v3s/RZUz0j/t+SOCR aNzsRK4be3geymdNlINzSslqVU5A53Dbdod3JYG6Ehem5+HD2ufqbWGvP A=;
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: AhsFAKAxKE+rRDoG/2dsb2JhbABDgk2kBgGHJmaBBYFyAQEBBAEBAQ8BKjEdAQwNAwECVigIAQEEEwkSB4djmHWBJwGeUosJMhMBCAUDAwkHAQcHAoNaAgYqIQwBBxIBFYMcBIhAjGCSdQ
X-IronPort-AV: E=Sophos;i="4.71,597,1320624000"; d="scan'208,217";a="28056233"
Received: from mtv-core-1.cisco.com ([171.68.58.6]) by mtv-iport-3.cisco.com with ESMTP; 31 Jan 2012 18:24:10 +0000
Received: from xbh-sjc-221.amer.cisco.com (xbh-sjc-221.cisco.com [128.107.191.63]) by mtv-core-1.cisco.com (8.14.3/8.14.3) with ESMTP id q0VIO9FQ030250 for <dc@ietf.org>; Tue, 31 Jan 2012 18:24:09 GMT
Received: from xmb-sjc-21e.amer.cisco.com ([171.70.151.156]) by xbh-sjc-221.amer.cisco.com with Microsoft SMTPSVC(6.0.3790.4675);  Tue, 31 Jan 2012 10:24:09 -0800
Received: from 10.21.66.113 ([10.21.66.113]) by xmb-sjc-21e.amer.cisco.com ([171.70.151.156]) via Exchange Front-End Server email.cisco.com ([128.107.191.87]) with Microsoft Exchange Server HTTP-DAV ; Tue, 31 Jan 2012 18:24:09 +0000
User-Agent: Microsoft-Entourage/12.20.0.090605
Date: Tue, 31 Jan 2012 10:24:09 -0800
From: Larry Kreeger <kreeger@cisco.com>
To: <dc@ietf.org>
Message-ID: <CB4D71C9.5371E%kreeger@cisco.com>
Thread-Topic: Network Virtualization Overlay Control Plane Requirements posted
Thread-Index: AczfhwmCNjK0MgCDPkWJDbI3krxH/AAvnxaK
In-Reply-To: <CB4C3234.53546%kreeger@cisco.com>
Mime-version: 1.0
Content-type: multipart/alternative; boundary="B_3410850250_40286102"
X-OriginalArrivalTime: 31 Jan 2012 18:24:09.0616 (UTC) FILETIME=[86392900:01CCE045]
Subject: [dc] Network Virtualization Overlay Control Plane Requirements posted
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 31 Jan 2012 18:24:22 -0000

> This message is in MIME format. Since your mail reader does not understand
this format, some or all of this message may not be legible.

--B_3410850250_40286102
Content-type: text/plain;
	charset="US-ASCII"
Content-transfer-encoding: 7bit

This needed moderator approval for Bcc...so I am posting it directly now.  -
Larry

------ Forwarded Message
From: Larry Kreeger <kreeger@cisco.com>
Date: Mon, 30 Jan 2012 11:40:36 -0800
To: <nvo3@ietf.org>
Subject: [nvo3] Network Virtualization Overlay Control Plane Requirements
posted

To: nvo3 mailing list
Bcc: l2vpn, l3vpn, dc, armd, vpn4dc mailing lists

Hi everyone,

I have recently posted a first draft of the Control Plane requirements for
Network Virtualization Overlays.

http://www.ietf.org/id/draft-kreeger-nvo3-overlay-cp-00.txt

This draft is a follow-up to draft-narten-nvo3-overlay-problem-statement-01
( http://tools.ietf.org/html/draft-narten-nvo3-overlay-problem-statement-01
) which makes the case for overlay networks in data centers (not restricted
to L2 over L3) and the need for control plane solutions.
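[For readers new to the topic, a concrete (illustrative, editor-supplied) example of the kind of L2-over-L3 encapsulation being discussed is the VXLAN header: eight bytes carrying a 24-bit virtual network identifier (VNI). Neither draft mandates VXLAN; it is just one candidate NVO3 encapsulation. A minimal Python sketch of its layout:]

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # the "I" flag: the VNI field is valid

def encode_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header placed before the inner Ethernet frame:
    flags (1 byte), reserved (3 bytes), VNI (3 bytes), reserved (1 byte)."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI is a 24-bit value")
    # The VNI occupies the upper 24 bits of the final 32-bit word.
    return struct.pack("!B3xI", VXLAN_FLAG_VNI_VALID, vni << 8)

def decode_vxlan_vni(header: bytes) -> int:
    """Extract the VNI, checking that the VNI-valid flag is set."""
    flags, word = struct.unpack("!B3xI", header)
    if not flags & VXLAN_FLAG_VNI_VALID:
        raise ValueError("VNI-valid flag not set")
    return word >> 8
```

[The control-plane question in the draft is then, roughly: how does an encapsulating endpoint learn which VNI and which remote L3 address to use for a given inner destination MAC?]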

If you have not read draft-narten-nvo3-overlay-problem-statement-01, I
highly suggest reading it first because the posted control plane reqs
document picks up where it left off.

Since the posted draft is a followup to
draft-narten-nvo3-overlay-problem-statement-01, and there is already an
NVO3 mailing list, please join and post any discussion and/or comments to
nvo3@ietf.org
https://www.ietf.org/mailman/listinfo/nvo3

Thank you,
Larry Kreeger

_______________________________________________
nvo3 mailing list
nvo3@ietf.org
https://www.ietf.org/mailman/listinfo/nvo3

------ End of Forwarded Message




--B_3410850250_40286102--


From d3e3e3@gmail.com  Tue Jan 31 11:13:44 2012
Return-Path: <d3e3e3@gmail.com>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 20CA811E8080 for <dc@ietfa.amsl.com>; Tue, 31 Jan 2012 11:13:44 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -104.242
X-Spam-Level: 
X-Spam-Status: No, score=-104.242 tagged_above=-999 required=5 tests=[AWL=-0.643, BAYES_00=-2.599, RCVD_IN_DNSWL_LOW=-1, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id pZg0ENNHV-Lf for <dc@ietfa.amsl.com>; Tue, 31 Jan 2012 11:13:41 -0800 (PST)
Received: from mail-lpp01m010-f44.google.com (mail-lpp01m010-f44.google.com [209.85.215.44]) by ietfa.amsl.com (Postfix) with ESMTP id 476D111E8072 for <dc@ietf.org>; Tue, 31 Jan 2012 11:13:41 -0800 (PST)
Received: by lahl5 with SMTP id l5so230361lah.31 for <dc@ietf.org>; Tue, 31 Jan 2012 11:13:40 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:from:date:message-id:subject:to :cc:content-type:content-transfer-encoding; bh=WCcDxNTFZHB9j0qd8XlTffdOM+w0RWwOm6iyAHh9GY8=; b=coJnVL2WPb9E007JBCOCB1qYnf2lVEqq+5Xd9FO+e43Amdd6c6fySSym+J+TePtAli ClbTKcJrTGwe7+GmD9Cgku9UybVF82st9fvH/0JsAyVNZNtiFhnJ+nt4xq7flEGISkwV 0VodGv07Ter4TcaVoDpDfaIDSCkoBfor53RjM=
Received: by 10.152.133.229 with SMTP id pf5mr11834185lab.18.1328037220269; Tue, 31 Jan 2012 11:13:40 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.40.231 with HTTP; Tue, 31 Jan 2012 11:13:20 -0800 (PST)
In-Reply-To: <618BE8B40039924EB9AED233D4A09C5102CB22FF@XMB-BGL-416.cisco.com>
References: <CAH==cJxfmae0u0bSF4cn_haLgY1T-vnw2102PApzYtj5Aty=GQ@mail.gmail.com> <CANtnpwhFJ746ooi9GUCxfBqsOXu14hDka0D9inhh5pPq3U_ZTA@mail.gmail.com> <201201171540.q0HFeNan008591@cichlid.raleigh.ibm.com> <CANtnpwjexDPazOXLYHHjn3+JDi-o49Bv5ptDExAZHAA8Ra2m-A@mail.gmail.com> <201201191419.q0JEJTLF010649@cichlid.raleigh.ibm.com> <1326989277.2513.4.camel@ecliptic.extremenetworks.com> <618BE8B40039924EB9AED233D4A09C5102CB2291@XMB-BGL-416.cisco.com> <1326992094.2513.10.camel@ecliptic.extremenetworks.com> <618BE8B40039924EB9AED233D4A09C5102CB22FF@XMB-BGL-416.cisco.com>
From: Donald Eastlake <d3e3e3@gmail.com>
Date: Tue, 31 Jan 2012 14:13:20 -0500
Message-ID: <CAF4+nEEZYsvX3AsLUP8ZrapOeKrqX5U8Lp45dNxsv+TZtW28sA@mail.gmail.com>
To: "Ashish Dalela (adalela)" <adalela@cisco.com>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable
Cc: Thomas Narten <narten@us.ibm.com>, Steven Blake <sblake@extremenetworks.com>, dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Tue, 31 Jan 2012 19:13:44 -0000

See below

On Thu, Jan 19, 2012 at 8:07 PM, Ashish Dalela (adalela)
<adalela@cisco.com> wrote:
>> - Hypervisor based encapsulations
>> - Network based encapsulations
>
>> I don't see any reason why these need to differ.
>
> The fact is that they do differ. A hypervisor based solution will not
> run a routing protocol (control plane), or we haven't seen that yet.
> That means many types of information will be pulled through
> configuration / mgmt control from a hypervisor controller rather than
> the network control plane.
>
>> - L2 in L2 encapsulation
>
>> IEEE owns this space.
>
> Then there is TRILL in IETF.

TRILL is not L2 in L2. TRILL as specified defines a new layer that is
above all Layer 2 bridging but below Layer 3 routing. TRILL switches
appear to be end stations to bridges but are as transparent to L3
routers as bridges are.
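[To make the layering concrete (editor's illustration, based on RFC 6325, not part of the original mail): the base TRILL header is just six octets inserted between the outer Ethernet encapsulation and the inner frame. A minimal sketch of packing it, assuming version 0 and no options:]

```python
import struct

def encode_trill_header(egress: int, ingress: int, hop_count: int,
                        multi_dest: bool = False) -> bytes:
    """Pack the 6-octet base TRILL header (RFC 6325, no options):
    V(2) R(2) M(1) Op-Length(5) Hop(6) | egress nickname(16) | ingress nickname(16)."""
    if not all(0 <= n < 2 ** 16 for n in (egress, ingress)):
        raise ValueError("RBridge nicknames are 16-bit values")
    if not 0 <= hop_count < 64:
        raise ValueError("hop count is a 6-bit value")
    # First 16 bits: V=0, R=0, M (multi-destination) at bit 11, Op-Length=0,
    # hop count in the low 6 bits.
    first16 = (int(multi_dest) << 11) | hop_count
    return struct.pack("!HHH", first16, egress, ingress)
```

[The 16-bit RBridge nicknames, rather than MAC or IP addresses, are what get routed on, which is why TRILL is neither L2-in-L2 nor L2-in-L3 in the sense used above.]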

Thanks,
Donald
==============================
 Donald E. Eastlake 3rd   +1-508-333-2270 (cell)
 155 Beaver Street, Milford, MA 01757 USA
 d3e3e3@gmail.com

>> - L2 in L3 encapsulation
>
>> This is NVO3.  I would prefer to have to support the minimum number of
>> NVO3 encapsulations in HW.
>
> Minimum is not quantified :-) I thought you said you wanted only one.
>
>> - L3 in L3 encapsulation
>
>> There are already plenty to choose from.
>
> We agree.
>
> Thanks, Ashish
>
> -----Original Message-----
> From: Steven Blake [mailto:sblake@extremenetworks.com]
> Sent: Thursday, January 19, 2012 10:25 PM
> To: Ashish Dalela (adalela)
> Cc: Thomas Narten; dc@ietf.org
> Subject: RE: [dc] draft-khasnabish-vmmi-problems-00.txt
>
> On Thu, 2012-01-19 at 08:23 -0800, Ashish Dalela (adalela) wrote:
>
>> >> "one encapsulation, please".
>>
>> How do you propose we reconcile the present scenario:
>>
>> - Hypervisor based encapsulations
>> - Network based encapsulations
>
> I don't see any reason why these need to differ.
>
>> - L2 in L2 encapsulation
>
> IEEE owns this space.
>
>> - L2 in L3 encapsulation
>
> This is NVO3.  I would prefer to have to support the minimum number of
> NVO3 encapsulations in HW.
>
>> - L3 in L3 encapsulation
>
> There are already plenty to choose from.
>
>
> Regards,
>
> /////////////////////////////////////////////
> Steven Blake       sblake@extremenetworks.com
> Extreme Networks              +1 919-884-3211
>
> _______________________________________________
> dc mailing list
> dc@ietf.org
> https://www.ietf.org/mailman/listinfo/dc

From liu.bin21@zte.com.cn  Tue Jan 31 20:35:00 2012
Return-Path: <liu.bin21@zte.com.cn>
X-Original-To: dc@ietfa.amsl.com
Delivered-To: dc@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 78BD711E80A5 for <dc@ietfa.amsl.com>; Tue, 31 Jan 2012 20:35:00 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -95.714
X-Spam-Level: 
X-Spam-Status: No, score=-95.714 tagged_above=-999 required=5 tests=[AWL=5.524, BAYES_00=-2.599, HTML_MESSAGE=0.001, J_CHICKENPOX_62=0.6, RCVD_DOUBLE_IP_LOOSE=0.76, USER_IN_WHITELIST=-100]
Received: from mail.ietf.org ([12.22.58.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id NqmyrHwYrJOJ for <dc@ietfa.amsl.com>; Tue, 31 Jan 2012 20:34:59 -0800 (PST)
Received: from mx5.zte.com.cn (mx5.zte.com.cn [63.217.80.70]) by ietfa.amsl.com (Postfix) with ESMTP id 09F4A11E808D for <dc@ietf.org>; Tue, 31 Jan 2012 20:34:58 -0800 (PST)
Received: from [10.30.17.99] by mx5.zte.com.cn with surfront esmtp id 53829122734555; Wed, 1 Feb 2012 12:30:28 +0800 (CST)
Received: from [10.30.3.20] by [192.168.168.15] with StormMail ESMTP id 5467.122734555; Wed, 1 Feb 2012 12:34:50 +0800 (CST)
Received: (from root@localhost) by mse01.zte.com.cn id q114YnPX010960 for <dc@ietf.org>; Wed, 1 Feb 2012 12:34:49 +0800 (GMT-8) (envelope-from liu.bin21@zte.com.cn)
Received: from notes_smtp.zte.com.cn ([10.30.1.239]) by mse01.zte.com.cn with ESMTP id q113unSV082136; Wed, 1 Feb 2012 11:56:50 +0800 (GMT-8) (envelope-from liu.bin21@zte.com.cn)
Message-Id: <201202010434.q114YnPX010960@mse01.zte.com.cn>
To: narten@us.ibm.com, vumip1@gmail.com
MIME-Version: 1.0
X-Mailer: Lotus Notes Release 7.0.1 January 17, 2006
From: liu.bin21@zte.com.cn
Date: Wed, 1 Feb 2012 11:56:47 +0800
X-MIMETrack: Serialize by Router on notes_smtp/zte_ltd(Release 8.5.1FP4|July 25, 2010) at 2012-02-01 11:56:51, Serialize complete at 2012-02-01 11:56:51
Content-Type: multipart/alternative; boundary="=_alternative 0015AECF48257997_="
X-MAIL: mse01.zte.com.cn q114YnPX010960
X-MSS: AUDITRELEASE@mse01.zte.com.cn
Cc: dc@ietf.org
Subject: Re: [dc] draft-khasnabish-vmmi-problems-00.txt
X-BeenThere: dc@ietf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: IETF Data Center Mailing List <dc.ietf.org>
List-Unsubscribe: <https://www.ietf.org/mailman/options/dc>, <mailto:dc-request@ietf.org?subject=unsubscribe>
List-Archive: <http://www.ietf.org/mail-archive/web/dc>
List-Post: <mailto:dc@ietf.org>
List-Help: <mailto:dc-request@ietf.org?subject=help>
List-Subscribe: <https://www.ietf.org/mailman/listinfo/dc>, <mailto:dc-request@ietf.org?subject=subscribe>
X-List-Received-Date: Wed, 01 Feb 2012 04:35:00 -0000

This is a multipart message in MIME format.
--=_alternative 0015AECF48257997_=
Content-Type: text/plain; charset="US-ASCII"

Thomas, thank you for your comments! 

About VM migration in the draft: we do not insist on migration between 
different vendors (or SPs), so your concerns may not apply. As you say, 
"let's be realistic"; our intention is to improve the flexibility of VM 
migration and to broaden its applications, on the premise that the 
market heavyweights are not opposed to it. To achieve these goals we 
need to analyse the possible problems, then discuss and resolve them. 
For instance, a VM migration may be driven by an operator's internal 
energy-efficiency needs rather than by the need for uninterrupted 
public access, or by the business need to migrate a user's desktop. 
Such demand may exist within a single service provider, or, as you say, 
within a single vendor's mixed network. For example:

With the promotion of IPv6, existing IPv4 networks will host more and 
more IPv6 nodes, and these applications have driven a series of tunnel 
technologies, such as 6to4 and ISATAP. Virtual machine migration will 
also have to work in these network environments. In a transition 
network that uses tunneling, the connections between the subnets and 
the backbone network are made through tunnel gateways. During the 
IPv4/IPv6 transition period a variety of tunnels coexist, and tunnel 
establishment varies across gateways: a traditional tunnel gateway only 
establishes tunnels with gateways of the same type, so different types 
of traditional tunnel gateways cannot communicate with each other, 
which does not meet the requirements of VPN communication in the 
transition period. A multi-tunnel VPN gateway can be used to solve the 
problem of establishing tunnels between heterogeneous gateways.
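[As an illustrative sketch, supplied by the editor rather than taken from the original message, of how one such tunnel technology works: a 6to4 gateway (RFC 3056) derives its site's IPv6 /48 prefix mechanically by embedding its public IPv4 address after the 2002::/16 prefix, so no explicit tunnel configuration between gateways is needed:]

```python
import ipaddress

def six_to_four_prefix(ipv4: str) -> ipaddress.IPv6Network:
    """Derive the 6to4 /48 prefix (RFC 3056) for a public IPv4 address.

    The prefix is 2002:VVVV:WWWW::/48, where VVVV:WWWW is the gateway's
    32-bit IPv4 address written as two 16-bit hextets.
    """
    v4 = int(ipaddress.IPv4Address(ipv4))
    # Bits 16..47 of the IPv6 prefix hold the IPv4 address.
    prefix_int = (0x2002 << 112) | (v4 << 80)
    return ipaddress.IPv6Network((prefix_int, 48))
```

[For example, a gateway whose public address is 192.0.2.1 would advertise 2002:c000:201::/48 to its site, and any 6to4 relay can recover the IPv4 tunnel endpoint from that prefix alone.]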

Many thanks for the guidance.

Regards,

Bin Liu

liu.bin21@zte.com.cn
Richard.BoHan.liu@gmail.com


--------------------------------------------------------
ZTE Information Security Notice: The information contained in this mail is solely property of the sender's organization. This mail communication is confidential. Recipients named above are obligated to maintain secrecy and are not permitted to disclose the contents of this communication to others.
This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the originator of the message. Any views expressed in this message are those of the individual sender.
This message has been scanned for viruses and Spam by ZTE Anti-Spam system.

--=_alternative 0015AECF48257997_=--

