From owner-ietf-ldup@mail.imc.org  Sat Dec  1 13:37:01 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id NAA19048
	for <ldup-archive@odin.ietf.org>; Sat, 1 Dec 2001 13:37:00 -0500 (EST)
Received: by above.proper.com (8.11.6/8.11.3) id fB1I6vi10434
	for ietf-ldup-bks; Sat, 1 Dec 2001 10:06:57 -0800 (PST)
Received: from pretender.boolean.net (root@router.boolean.net [198.144.206.49])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fB1I6u810427
	for <ietf-ldup@imc.org>; Sat, 1 Dec 2001 10:06:56 -0800 (PST)
Received: from nomad.OpenLDAP.org (root@localhost [127.0.0.1])
	by pretender.boolean.net (8.11.3/8.11.1/Boolean/Hub) with ESMTP id fB1IDKC26820
	for <ietf-ldup@imc.org>; Sat, 1 Dec 2001 18:13:20 GMT
	(envelope-from Kurt@OpenLDAP.org)
Message-Id: <5.1.0.14.0.20011201093245.017d0278@127.0.0.1>
X-Sender: kurt@127.0.0.1
X-Mailer: QUALCOMM Windows Eudora Version 5.1
Date: Sat, 01 Dec 2001 10:06:28 -0800
To: ietf-ldup@imc.org
From: "Kurt D. Zeilenga" <Kurt@OpenLDAP.org>
Subject: LCUP overview comments
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


The I-D states:
   The problem areas not being considered:
       - directory server to directory server synchronization.  The
	replication protocol that is being defined by the LDUP IETF
	working group should be used for this purpose. 

I would suggest the last sentence be stricken or replaced with:
	The IETF is developing an LDAP replication protocol, called
	[LDUP], specifically designed to address this problem area.

The [LDUP] reference, of course, would be informative.

	Several features of the protocol distinguish it from
	LDUP replication.

Given that LDUP is a "work in progress", I'm not sure why it
is useful to detail how LCUP distinguishes itself from LDUP.
The information provided is useful as a design overview, but I would
suggest it be framed in the context of the problem areas that LCUP
does or does not address.  That is, I would strike the above sentence
and replace:
	First, the server does not maintain any state information
	on behalf of its clients.

with:
	LCUP is designed such that the server does not need to
	maintain state information on behalf of the client.

I assume that the "no predefined agreements" in:
	Second, no predefined agreements exist between the clients
	and the servers.

refers to LCUP-specific agreements.  There are obviously other
predefined agreements which may (and likely should) exist.

I suggest replacing this with:
	The LCUP design avoids the need for LCUP-specific update
	agreements between client and server prior to LCUP use.

Lastly, I suggest:
	Finally, the server never pushes the data to the client;
	the client always initiates the update session during which
	it pulls the changes from the server.

be replaced with:
	The LCUP design requires clients to initiate the update
	session and "pull" the changes from the server.


The overview continues with:
	The set of clients that are allowed to synchronize with
	an LDAP server is determined by the server defined policy. 

I find this statement confusing in the face of "no predefined
agreements".  I assume here the I-D means:
	LCUP operations are subject to administrative and access
	control policies enforced by the server.

The paragraph starting with:
	There are currently several protocols 

seems out of place.  Likely should be moved towards the top of
the overview.

The paragraph starting with:
	A server can define 

is defining either a convention or part of the normative specification
of the protocol (the overview appears informative).

The sentence:
	The LCUP context may be coincident with the replicated area,
	depending on the server's implementation.
(and likely the following sentence) should be stricken, as the
sentence requires that "replicated area" be defined.




From owner-ietf-ldup@mail.imc.org  Sat Dec  1 16:25:30 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id QAA22104
	for <ldup-archive@lists.ietf.org>; Sat, 1 Dec 2001 16:25:30 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fB1Kvw419666
	for ietf-ldup-bks; Sat, 1 Dec 2001 12:57:58 -0800 (PST)
Received: from pretender.boolean.net (root@router.boolean.net [198.144.206.49])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fB1Kvu819662
	for <ietf-ldup@imc.org>; Sat, 1 Dec 2001 12:57:56 -0800 (PST)
Received: from nomad.OpenLDAP.org (root@localhost [127.0.0.1])
	by pretender.boolean.net (8.11.3/8.11.1/Boolean/Hub) with ESMTP id fB1L4MC27157
	for <ietf-ldup@imc.org>; Sat, 1 Dec 2001 21:04:22 GMT
	(envelope-from Kurt@OpenLDAP.org)
Message-Id: <5.1.0.14.0.20011201124952.017b4600@127.0.0.1>
X-Sender: kurt@127.0.0.1
X-Mailer: QUALCOMM Windows Eudora Version 5.1
Date: Sat, 01 Dec 2001 12:57:29 -0800
To: ietf-ldup@imc.org
From: "Kurt D. Zeilenga" <Kurt@OpenLDAP.org>
Subject: unique identifiers in LCUP/LDUP I-Ds
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


I suggest that the term 'unique identifier' not be used,
especially capitalized as 'Unique Identifier' or as
'UniqueIdentifier', as this will be confused with pre-existing
LDAP Unique Identifiers and X.500 Unique Identifiers attribute
types.  Instead, I suggest you refer to the identifier as a
'universally unique identifier' or 'UUID'.

Kurt



From owner-ietf-ldup@mail.imc.org  Sun Dec  2 18:52:17 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id SAA00447
	for <ldup-archive@odin.ietf.org>; Sun, 2 Dec 2001 18:52:17 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fB2NXD326423
	for ietf-ldup-bks; Sun, 2 Dec 2001 15:33:13 -0800 (PST)
Received: from nexus.adacel.com (shelob.adacel.com.au [203.36.26.146] (may be forged))
	by above.proper.com (8.11.6/8.11.3) with SMTP id fB2NXA226419
	for <ietf-ldup@imc.org>; Sun, 2 Dec 2001 15:33:11 -0800 (PST)
Received: (qmail 19567 invoked from network); 2 Dec 2001 23:28:43 -0000
Received: from unknown (HELO osmium) (10.32.24.165)
  by nexus.adacel.com with SMTP; 2 Dec 2001 23:28:43 -0000
Reply-To: <steven.legg@adacel.com.au>
From: "Steven Legg" <steven.legg@adacel.com.au>
To: "'Kurt D. Zeilenga'" <Kurt@OpenLDAP.org>, <ietf-ldup@imc.org>
Subject: RE: unique identifiers in LCUP/LDUP I-Ds
Date: Mon, 3 Dec 2001 10:33:30 +1100
Message-ID: <00a001c17b89$c0675ce0$a518200a@osmium.mtwav.adacel.com.au>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="iso-8859-1"
Content-Transfer-Encoding: 7bit
X-Priority: 3 (Normal)
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook 8.5, Build 4.71.2377.0
In-Reply-To: <5.1.0.14.0.20011201124952.017b4600@127.0.0.1>
X-MimeOLE: Produced By Microsoft MimeOLE V4.72.2120.0
Importance: Normal
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>
Content-Transfer-Encoding: 7bit



Kurt,

Kurt D. Zeilenga wrote:
> I suggest that the term 'unique identifier' not be used,
> especially capitalized as 'Unique Identifier' or as
> 'UniqueIdentifier', as this will be confused with pre-existing
> LDAP Unique Identifiers and X.500 Unique Identifiers attribute
> types.  Instead, I suggest you refer to the identifier as a
> 'universally unique identifier' or 'UUID'.

I have no objection to this change. I'll make it in the next revision
of the URP draft if the other document authors agree.

Regards,
Steven



From owner-ietf-ldup@mail.imc.org  Wed Dec  5 22:09:04 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id WAA27701
	for <ldup-archive@odin.ietf.org>; Wed, 5 Dec 2001 22:09:04 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fB62kIu15512
	for ietf-ldup-bks; Wed, 5 Dec 2001 18:46:18 -0800 (PST)
Received: from out006pub.verizon.net (out006pub.verizon.net [206.46.170.106])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fB62kH215508
	for <ietf-ldup@imc.org>; Wed, 5 Dec 2001 18:46:17 -0800 (PST)
Received: from D7ST2111 (cc543453-a.frmnt1.pa.home.com [65.14.159.170])
	by out006pub.verizon.net  with ESMTP
	for <ietf-ldup@imc.org>; id fB62i0Z07146
	Wed, 5 Dec 2001 20:44:00 -0600 (CST)
Reply-To: <christopher.apple@verizon.net>
From: "Chris Apple" <christopher.apple@verizon.net>
To: <ietf-ldup@imc.org>
Subject: New e-mail address
Date: Wed, 5 Dec 2001 21:45:39 -0600
Message-ID: <000801c17e08$7cb1f4c0$0300a8c0@D7ST2111>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Priority: 3 (Normal)
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook, Build 10.0.3311
Importance: Normal
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2600.0000
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>
Content-Transfer-Encoding: 7bit


I have a new e-mail address: christopher.apple@verizon.net.

Please excuse if you receive this posting multiple times. I have sent
it a few times in the past few days and have not noticed it in the
mailing list archives.

Chris Apple




From owner-ietf-ldup@mail.imc.org  Wed Dec  5 23:05:39 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id XAA29838
	for <ldup-archive@odin.ietf.org>; Wed, 5 Dec 2001 23:05:39 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fB63ihV17688
	for ietf-ldup-bks; Wed, 5 Dec 2001 19:44:43 -0800 (PST)
Received: from nexus.adacel.com (shelob.adacel.com.au [203.36.26.146] (may be forged))
	by above.proper.com (8.11.6/8.11.3) with SMTP id fB63ie217679
	for <ietf-ldup@imc.org>; Wed, 5 Dec 2001 19:44:40 -0800 (PST)
Received: (qmail 20339 invoked from network); 6 Dec 2001 03:40:12 -0000
Received: from unknown (HELO osmium) (10.32.24.165)
  by nexus.adacel.com with SMTP; 6 Dec 2001 03:40:12 -0000
Reply-To: <steven.legg@adacel.com.au>
From: "Steven Legg" <steven.legg@adacel.com.au>
To: <ietf-ldup@imc.org>
Subject: Supporting Partial Replication
Date: Thu, 6 Dec 2001 14:45:14 +1100
Message-ID: <000c01c17e08$6a867780$a518200a@osmium.mtwav.adacel.com.au>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="iso-8859-1"
Content-Transfer-Encoding: 7bit
X-Priority: 3 (Normal)
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook 8.5, Build 4.71.2377.0
X-MimeOLE: Produced By Microsoft MimeOLE V4.72.2120.0
Importance: Normal
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>
Content-Transfer-Encoding: 7bit



Folks,

A while ago I promised to write up my thoughts on changes to the LDUP
architecture to support partial replication. Well this is part one of
that write up, which discusses changes to the architecture to make it
more amenable to replication topologies involving partial replicas.


Consider the following replication topology:

  S1 ====== S2
    \      /
     \    /
      \  /
       S3

Servers S1 & S2 hold full copies of replication area R1. S3 holds
replication area R2, a subset of R1. R2 could be a subtree of R1, a
sparse replica or a fractional replica. The exact details don't matter
at this stage. It is enough to recognize that R2 is a subset of the
information in R1.

Suppose that there are two successive update operations, U1 & U2, performed
at S2, where U1 affects information in R1 but wholly outside of R2 and U2
is wholly within R2 (and thus also within R1). The CSN allotted to U1 is
less than the CSN allotted to U2.

Suppose S3 and S2 establish replication sessions to exchange updates.
S3 has no changes to send. S2 will send U2 because it is within the scope
of the replication agreement S3 has with S2, but will not send U1.

S3 and S1 then establish replication sessions. S1 has no changes to send.
S3 sends U2 since the CSN for U2 is more recent than the CSN corresponding
to S2 in S1's update vector. S1 will set the CSN corresponding to S2 in
its update vector to be the CSN for U2.

Now, if S2 establishes a replication session with S1 it will send no
updates. In particular, it won't send U1 because the CSN corresponding to
S2 in S1's update vector is already greater than the CSN for U1. In fact,
S1 will never receive U1, so the requirement for all replicas to converge
will not be satisfied. In general, the current LDUP architecture only
works if the replication topology has no cycles, or where there are
cycles, if the replicas in each cycle have replication agreements for
exactly the same area of replication.
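The divergence described above can be reproduced with a small simulation of the current single-update-vector scheme. Everything here (integer CSNs, the Update/Server types, the session model) is an illustrative simplification invented for this sketch, not anything defined by the LDUP drafts:

```python
# A toy model of the current scheme: one update vector per server.
# It reproduces the lost update U1 from the S1/S2/S3 example above.
from dataclasses import dataclass, field

@dataclass
class Update:
    csn: int       # change sequence number (totally ordered)
    origin: str    # server that generated the change
    in_r2: bool    # whether the change falls inside the subset area R2

@dataclass
class Server:
    name: str
    holds_r2_only: bool = False
    uv: dict = field(default_factory=dict)    # origin -> highest CSN seen
    applied: set = field(default_factory=set)

    def accepts(self, u):
        # S3's only agreement covers the subset area R2
        return u.in_r2 or not self.holds_r2_only

def session(sender, receiver, updates):
    """Sender sends each in-scope update newer than the receiver's vector."""
    for u in updates:
        if receiver.accepts(u) and u.csn > receiver.uv.get(u.origin, 0):
            receiver.applied.add(u.csn)
            receiver.uv[u.origin] = u.csn

u1 = Update(csn=1, origin="S2", in_r2=False)  # within R1, outside R2
u2 = Update(csn=2, origin="S2", in_r2=True)   # within R2 (and thus R1)

s1 = Server("S1")
s2 = Server("S2", uv={"S2": 2}, applied={1, 2})
s3 = Server("S3", holds_r2_only=True)

session(s2, s3, [u1, u2])  # S3 receives U2 only; U1 is out of scope
session(s3, s1, [u2])      # S1 receives U2, advancing its S2 entry to 2
session(s2, s1, [u1, u2])  # U1's CSN (1) is already "covered": never sent

print(s1.applied)  # {2} -- S1 never receives U1; replicas diverge
```

After the S3-to-S1 session, S1's vector entry for S2 claims coverage of every CSN up to U2's, even though U1 was never seen, which is exactly the convergence failure.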

However we can get around this restriction by maintaining an update vector
and replica ID per replication area for which a server has a replication
agreement, instead of a single update vector and single replica ID per
server. Let UV(S,R) be a reference to the update vector maintained by
server S for replication area R. It becomes convenient at this point to
have globally unique identifiers for replication areas, e.g. R.
ASIDE: It also makes sense to have replication area descriptions as distinct
managed objects, and for replication agreement objects to just reference
a replication area by its unique identifier, instead of itself describing
the information to be replicated.

Suppose there is a server, S, with replication agreements for replication
area, R. We require a replica ID to uniquely identify the copy of the
information in R maintained by S. It is convenient for the purposes of
this discussion to use the notation S.R for that replica ID.

Let T be some other server and let Q be a replication area maintained by T.
An element in UV(S,R) for replica T.Q with the CSN value, C, is an assertion
that S has received from T.Q all updates to R with CSNs less than or equal
to C.

If all such updates have been received then it is also true that S has
received from T.Q all updates (with CSNs less than or equal to C) to
every replication area P, where P is a subset of R.

All the client updates processed by T.Q must be within replication area Q,
so if Q is a subset of, or the same as, R then S has received from T.Q all
updates (with CSNs less than or equal to C) to every replication area P,
where P is a superset of R.

We can use these results to obtain the following rule for maintaining
multiple update vectors in the one server, which for the sake of argument
I will call the update vector cascade rule:

  Given that S is receiving updates for replication area R, when S
  receives an update with a CSN containing a replica ID of T.Q it shall
  revise the CSN corresponding to T.Q in UV(S,R) and in every UV(S,P)
  where P is a subset of R. If Q is a subset of, or the same as, R then
  S shall revise the CSN corresponding to T.Q in every UV(S,P) where P
  is a superset of R.
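The cascade rule might be implemented along the following lines. The per-area vector layout, the (server, area) tuples used as replica IDs, and the precomputed `subsets` map are assumptions made for illustration:

```python
# A sketch of the proposed update vector cascade rule: one update
# vector per (server, replication area), revised across subset and
# superset areas when an update is received.

def cascade(uvs, subsets, area_r, origin_server, area_q, csn):
    """Revise server S's per-area update vectors after receiving,
    for area R, an update whose CSN carries the replica ID T.Q.

    uvs:     dict area -> {replica_id: highest CSN}  (the UV(S, *) set)
    subsets: dict area -> set of areas that are subsets of that area
    """
    tq = (origin_server, area_q)   # the replica ID T.Q

    def bump(area):
        uv = uvs[area]
        uv[tq] = max(uv.get(tq, 0), csn)

    # Always revise UV(S, R) and every UV(S, P) where P is a subset of R.
    bump(area_r)
    for p in subsets[area_r]:
        bump(p)

    # If Q is a subset of, or the same as, R, also revise every
    # UV(S, P) where P is a superset of R.
    if area_q == area_r or area_q in subsets[area_r]:
        for area, subs in subsets.items():
            if area_r in subs:     # area is a superset of R
                bump(area)

# S2 applies U2 (CSN 2) within R2; R2 is a subset of R1, so both
# UV(S2,R2) and UV(S2,R1) advance -- the "cascade to itself" case
# (R = Q = R2, S = T = S2) used later in the walkthrough.
uvs = {"R1": {}, "R2": {}}
subsets = {"R1": {"R2"}, "R2": set()}
cascade(uvs, subsets, area_r="R2", origin_server="S2", area_q="R2", csn=2)
print(uvs)  # both areas now map ('S2', 'R2') to 2
```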

For each update we need to be able to determine the replication area to
which it has been applied. Provided the replica and replication area
administrative objects are available, then a lookup using the replica ID
in the CSN associated with the update can give us the replication area. We also
need to be able to determine the supersets and subsets of the replication
area. This can be precalculated and cached from examination of the
replication area objects.
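One plausible way to precalculate the subset/superset relations, modelling each replication area as the set of entries (or subtrees) it covers. The area descriptions below are invented for illustration:

```python
# Precalculate and cache which replication areas are subsets or
# supersets of which, from the replication area descriptions.

def precalculate(areas):
    """areas: dict area_id -> frozenset of covered entry names.
    Returns (subsets, supersets): area_id -> set of related area_ids."""
    subsets = {a: set() for a in areas}
    supersets = {a: set() for a in areas}
    for a, cov_a in areas.items():
        for b, cov_b in areas.items():
            if a != b and cov_b < cov_a:    # b strictly inside a
                subsets[a].add(b)
                supersets[b].add(a)
    return subsets, supersets

areas = {
    "R1": frozenset({"ou=a", "ou=b", "ou=c"}),
    "R2": frozenset({"ou=b", "ou=c"}),      # subset of R1
}
subsets, supersets = precalculate(areas)
print(subsets["R1"], supersets["R2"])  # {'R2'} {'R1'}
```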

Now I'll show how the new architecture supports the topology of the
original example. Conceptually, S1 will now hold two replicas S1.R1 and
S1.R2, and two update vectors UV(S1,R1) and UV(S1,R2). S2 will now hold
two replicas S2.R1 and S2.R2, and two update vectors UV(S2,R1) and
UV(S2,R2). S3 holds only one replica S3.R2 and one update vector UV(S3,R2).

The update U1 is within R1 but outside R2 so this update is necessarily
applied to the replica S2.R1. The replica ID in the CSN for U1 will be
S2.R1.

In applying the update U2, S2 has a choice between replicas S2.R1 and
S2.R2 since U2 is within both R1 and R2. The detailed steps following
each choice are different, but the final outcome is always the same.
Note that S2 doesn't hold duplicates of all the entries and attributes
in R2. U2 acts on the same instance of the target entry and its attributes
regardless of the selected replica. The only material difference is the
replica ID that goes into the CSN generated for U2.

Firstly, I'll run through what happens if S2 chooses to apply U2 within R2.
The replica ID in the CSN for U2 will be S2.R2. S2 will set the CSN
corresponding to S2.R2 in UV(S2,R2) to be the CSN for U2. It will also
set the CSN corresponding to S2.R2 in UV(S2,R1) to be the CSN for U2.
This is the result of S2 applying the cascade rule to itself (R = Q = R2,
S = T = S2).

S3 and S2 establish replication sessions to exchange updates to replication
area R2. As before, S3 has no changes to send, and S2 will send U2 but
will not send U1. S3 sets the CSN corresponding to S2.R2 in UV(S3,R2) to
the CSN for U2.

S3 and S1 establish replication sessions to exchange updates to replication
area R2. S1 has no changes to send, as before. S3 sends U2 since the CSN
on U2 is more recent than the CSN corresponding to S2.R2 in UV(S1,R2).
S1 will set the CSN corresponding to S2.R2 in UV(S1,R2) to be the CSN for
U2. S1 will also set the CSN corresponding to S2.R2 in UV(S1,R1) by
application of the cascade rule (S = S1, T = S2, R = Q = R2).

If S2 establishes a replication session with S1 to send updates to
replication area R1 it will obtain UV(S1,R1). S2 will send U1 to S1 since
the CSN for U1 is greater than the CSN corresponding to S2.R1 in UV(S1,R1).
It won't send U2 since the CSN corresponding to S2.R2 in UV(S1,R1) already
has the value of the CSN for U2.

So S3 gets U2 and S1 gets both U1 and U2, exactly as it should be.
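This R2-choice walkthrough can be checked mechanically. The sketch below combines per-area update vectors with the cascade rule; the server names, tuple replica IDs, and session model are illustrative simplifications, not the drafts' actual machinery:

```python
# Simulation of the R2-choice walkthrough: per-area update vectors
# plus the cascade rule let both U1 and U2 reach S1 exactly once each.

class Server:
    def __init__(self, name, areas):
        self.name = name
        self.applied = set()
        self.uv = {a: {} for a in areas}   # area -> {replica_id: csn}

SUBSET = {"R1": {"R2"}, "R2": set()}       # R2 is a subset of R1

def bump(server, area, rid, csn):
    server.uv[area][rid] = max(server.uv[area].get(rid, 0), csn)

def apply_update(server, area, rid, csn):
    """Apply an update (replica ID rid, CSN csn) within `area`,
    then apply the cascade rule to the server's other vectors."""
    server.applied.add(csn)
    bump(server, area, rid, csn)
    for p in SUBSET[area]:                 # cascade to subsets of R
        if p in server.uv:
            bump(server, p, rid, csn)
    q = rid[1]
    if q == area or q in SUBSET[area]:     # cascade to supersets of R
        for a in server.uv:
            if area in SUBSET.get(a, set()):
                bump(server, a, rid, csn)

def session(sender, receiver, area, updates):
    """Sender sends each in-scope update newer than the receiver's
    vector entry for this replication area."""
    for rid, csn, in_r2 in updates:
        if area == "R2" and not in_r2:
            continue                       # outside the agreement scope
        if csn > receiver.uv[area].get(rid, 0):
            apply_update(receiver, area, rid, csn)

U1 = (("S2", "R1"), 1, False)              # within R1, outside R2
U2 = (("S2", "R2"), 2, True)               # applied by S2 within R2

s1 = Server("S1", ["R1", "R2"])
s2 = Server("S2", ["R1", "R2"])
s3 = Server("S3", ["R2"])
for rid, csn, _ in (U1, U2):
    apply_update(s2, rid[1], rid, csn)     # S2 originates both updates

session(s2, s3, "R2", [U1, U2])            # S3 receives U2 only
session(s3, s1, "R2", [U2])                # S1 receives U2 via S3
session(s2, s1, "R1", [U1, U2])            # U1 now flows; U2 not resent

print(s1.applied, s3.applied)  # {1, 2} {2} -- all replicas converge
```

In the final session, U2 is suppressed because the cascade rule already advanced the S2.R2 entry of UV(S1,R1) when U2 arrived via S3, while the S2.R1 entry still permits U1 through.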

Now, I'll run through what happens if S2 chooses to apply U2 within R1.
The replica ID in the CSN for U2 will be S2.R1. S2 will set the CSN
corresponding to S2.R1 in UV(S2,R1) to be the CSN for U2. It will also
set the CSN corresponding to S2.R1 in UV(S2,R2) to be the CSN for U2.
This is the result of S2 applying the cascade rule to itself (R = Q = R1,
S = T = S2).

S3 and S2 establish replication sessions to exchange updates to replication
area R2. As before, S3 has no changes to send, and S2 will send U2 but
will not send U1. S3 sets the CSN corresponding to S2.R1 in UV(S3,R2) to
the CSN for U2.

S3 and S1 establish replication sessions to exchange updates to replication
area R2. S1 has no changes to send, as before. S3 sends U2 since the CSN
on U2 is more recent than the CSN corresponding to S2.R1 in UV(S1,R2).
S1 will set the CSN corresponding to S2.R1 in UV(S1,R2) to be the CSN for
U2. Application of the cascade rule (S = S1, T = S2, R = R2, Q = R1)
results in NO changes to UV(S1,R1). We now have the situation that U2 is
notionally present in the replica S1.R2 but not in the replica S1.R1. As
I indicated earlier, a server doesn't hold duplicates of entries and
attributes that are in multiple replication areas. If S1 engages in any
replication sessions with other servers for the replication area R1 it
must exclude any changes with CSNs greater than the relevant CSN in
UV(S1,R1). This includes U2 when it has been originally applied within R1
and so far only received via S3.

If S2 establishes a replication session with S1 to send updates to
replication area R1 it will obtain UV(S1,R1). S2 will send both U1 and U2
to S1 since the CSNs for U1 and U2 are greater than the CSN corresponding
to S2.R1 in UV(S1,R1). S1 will receive U2 twice but URP will quickly
ignore the duplicate. Importantly, S1 will set the CSN corresponding to
S2.R1 in UV(S1,R1) to be the CSN for U2.

We can avoid the duplication if we arrange for S2 to obtain both UV(S1,R1)
and UV(S1,R2) (in general, any UV(S1,P) where P is a subset of R1) at the
start of the replication session, but we must still make some provision
for UV(S1,R1) to be revised correctly. It will be easier to just accept
that there may be some harmless duplication.

This example is too simple and narrow to show it but in general, applying
an update to the smallest subset replication area (e.g. R2 instead of R1)
allows it to propagate through more paths, more quickly and with less
duplication.

The following topology is more interesting but I'll leave example
walkthroughs as an exercise for the reader.

       S0
      /  \
     /    \
    /      \
  S1        S2
    \      /
     \    /
      \  /
       S3

Servers S0, S1 and S2 hold full copies of replication area R1.
S3 holds replication area R2, a subset of R1.

In this situation S0 only has replication agreements for R1 and therefore
only needs to maintain one update vector UV(S0,R1). It doesn't need to
bother with R2. This is a particularly useful result because it means that
a server, having entered into a replication agreement with some peer
server, isn't significantly affected by the replication agreements the peer
server might make with yet other servers.


The extended architecture described here also provides a mechanism for
supporting expedited changes, which aren't possible in the current
architecture. The information whose changes are to be expedited is
set up as a subset replication area. Additional replication agreements
are established with the peer servers for this subset replication area,
presumably with on-change replication schedules. Updates to the subset
area get propagated immediately, while other updates propagate less
frequently, but eventually all updates get through.


Regards,
Steven




From owner-ietf-ldup@mail.imc.org  Thu Dec  6 00:42:12 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id AAA01949
	for <ldup-archive@odin.ietf.org>; Thu, 6 Dec 2001 00:42:12 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fB65Oa722070
	for ietf-ldup-bks; Wed, 5 Dec 2001 21:24:36 -0800 (PST)
Received: from pretender.boolean.net (root@router.boolean.net [198.144.206.49])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fB65OZ222066
	for <ietf-ldup@imc.org>; Wed, 5 Dec 2001 21:24:35 -0800 (PST)
Received: from nomad.OpenLDAP.org (root@localhost [127.0.0.1])
	by pretender.boolean.net (8.11.3/8.11.1/Boolean/Hub) with ESMTP id fB65VLC47219
	for <ietf-ldup@imc.org>; Thu, 6 Dec 2001 05:31:21 GMT
	(envelope-from Kurt@OpenLDAP.org)
Message-Id: <5.1.0.14.0.20011205202851.016e2e88@127.0.0.1>
X-Sender: kurt@127.0.0.1
X-Mailer: QUALCOMM Windows Eudora Version 5.1
Date: Wed, 05 Dec 2001 21:24:04 -0800
To: ietf-ldup@imc.org
From: "Kurt D. Zeilenga" <Kurt@OpenLDAP.org>
Subject: naming contexts
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


It seems to me that the definition of a "naming context"
in LDAP/X.500 makes little sense in the face of multi-master
replication.  An LDAP/X.500 naming context is a subtree of
entries held in a single master DSA.

Consider three DSAs A, B, C where
	A masters subtree X
	B masters subtree Y
	C masters subtrees X and Y, and
	Subtrees X and Y are adjacent.

If A and B are masters and C is a shadow of each,
LDAP/X.500 says that
	A holds context X, B holds context Y, and
	C holds contexts X and Y.

If C masters X and Y and A and B are shadows,
LDAP/X.500 says that:
	A, B, C holds context X

If A, B, and C master the subtrees they hold, which
contexts does LDUP say they hold?

I can only find one reasonable way to answer this question:
it requires each entry to be held by one and only one
"primary" master DSA and defines the LDUP naming context
as a subtree of entries held in a single "primary" master
DSA.  I believe this same solution can be used to define
other terms and to detail directory models for multi-master
replication which map reasonably well onto the X.500 models.

I recommend the WG consider defining the LDUP multi-master
replication directory models such that, for any particular
entry, there is only one "primary" master DSA and zero or
more "secondary" master DSAs.   Otherwise, defining the
models consistent with the LDAP/X.500 models will be
extremely difficult.

Kurt



From owner-ietf-ldup@mail.imc.org  Thu Dec  6 02:50:50 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id CAA18501
	for <ldup-archive@odin.ietf.org>; Thu, 6 Dec 2001 02:50:50 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fB67Yb829981
	for ietf-ldup-bks; Wed, 5 Dec 2001 23:34:37 -0800 (PST)
Received: from nexus.adacel.com (shelob.adacel.com.au [203.36.26.146] (may be forged))
	by above.proper.com (8.11.6/8.11.3) with SMTP id fB67YZ229971
	for <ietf-ldup@imc.org>; Wed, 5 Dec 2001 23:34:35 -0800 (PST)
Received: (qmail 31944 invoked from network); 6 Dec 2001 07:30:02 -0000
Received: from unknown (HELO osmium) (10.32.24.165)
  by nexus.adacel.com with SMTP; 6 Dec 2001 07:30:02 -0000
Reply-To: <steven.legg@adacel.com.au>
From: "Steven Legg" <steven.legg@adacel.com.au>
To: "'Kurt D. Zeilenga'" <Kurt@OpenLDAP.org>, <ietf-ldup@imc.org>
Subject: RE: naming contexts
Date: Thu, 6 Dec 2001 18:35:06 +1100
Message-ID: <000d01c17e28$87099bb0$a518200a@osmium.mtwav.adacel.com.au>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="iso-8859-1"
Content-Transfer-Encoding: 7bit
X-Priority: 3 (Normal)
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook 8.5, Build 4.71.2377.0
X-MimeOLE: Produced By Microsoft MimeOLE V4.72.2120.0
Importance: Normal
In-Reply-To: <5.1.0.14.0.20011205202851.016e2e88@127.0.0.1>
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>
Content-Transfer-Encoding: 7bit



Kurt,

> It seems to me that the definition of a "naming context"
> in LDAP/X.500 makes little sense in the face of multi-master
> replication.  An LDAP/X.500 naming context is a subtree of
> entries held in a single master DSA.
>
> Consider three DSAs A, B, C where
> 	A masters subtree X
> 	B masters subtree Y
> 	C masters subtrees X and Y, and
> 	Subtrees X and Y are adjacent.

From what follows, I assume that Y is subordinate to X.

> If A and B are masters and C is a shadow of each,
> LDAP/X.500 says that
> 	A holds context X, B holds context Y, and
> 	C holds contexts X and Y.

C holds a shadow copy of contexts X and Y.

> If C masters X and Y and A and B are shadows,
> LDAP/X.500 says that:
> 	A, B, C holds context X

C holds context X, which is now the union of the subtrees X and Y,
i.e. there is no longer a context Y. A holds a shadow copy of a
portion (subtree X) of context X. B holds a shadow copy of a different
portion (subtree Y) of context X.
>
> If A, B, and C master the subtrees they hold, which
> contexts does LDUP say they hold?

Having an entry mastered by more than one master DSA doesn't invalidate
the definition of naming context as far as I can see, but we do need
to be a bit more careful how we phrase things.

A holds a naming context with the context prefix being the root of the
subtree X. B holds a naming context with the context prefix being the
root of the subtree Y. C holds a naming context with the context prefix
being the root of subtree X, the same root as A but with a superset of the
entries. Off the top of my head I can't think of anything that is broken
because A and C have the same context prefix but unequal sets of entries
in their naming contexts.

For LDUP we're okay if we say we are replicating "replication contexts"
rather than "naming contexts". C can be said to hold two adjacent
replication contexts (for subtree X and subtree Y).

>
> I can only find one reasonable way to answer this question:
> it requires each entry to be held by one and only one
> "primary" master DSA and defines the LDUP naming context
> as a subtree of entries held in a single "primary" master
> DSA.  I believe this same solution can be used to define
> other terms and to detail directory models for multi-master
> replication which map reasonably well onto the X.500 models.

I don't think we need impose this restriction for
what is only a definitional problem.

>
> I recommend the WG consider defining the LDUP multi-master
> replication directory models such that, for any particular
> entry, there is only one "primary" master DSA and zero or
> more "secondary" master DSAs.   Otherwise, defining the
> models consistent with the LDAP/X.500 models will be
> extremely difficult.

Regards,
Steven



From owner-ietf-ldup@mail.imc.org  Thu Dec  6 09:01:15 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id JAA25380
	for <ldup-archive@odin.ietf.org>; Thu, 6 Dec 2001 09:01:15 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fB6DjsY09369
	for ietf-ldup-bks; Thu, 6 Dec 2001 05:45:54 -0800 (PST)
Received: from pretender.boolean.net (root@router.boolean.net [198.144.206.49])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fB6Djr209365
	for <ietf-ldup@imc.org>; Thu, 6 Dec 2001 05:45:53 -0800 (PST)
Received: from nomad.OpenLDAP.org (root@localhost [127.0.0.1])
	by pretender.boolean.net (8.11.3/8.11.1/Boolean/Hub) with ESMTP id fB6DqTC48290;
	Thu, 6 Dec 2001 13:52:30 GMT
	(envelope-from Kurt@OpenLDAP.org)
Message-Id: <5.1.0.14.0.20011206051734.01792150@127.0.0.1>
X-Sender: kurt@127.0.0.1
X-Mailer: QUALCOMM Windows Eudora Version 5.1
Date: Thu, 06 Dec 2001 05:45:10 -0800
To: <steven.legg@adacel.com.au>
From: "Kurt D. Zeilenga" <Kurt@OpenLDAP.org>
Subject: RE: naming contexts
Cc: <ietf-ldup@imc.org>
In-Reply-To: <000d01c17e28$87099bb0$a518200a@osmium.mtwav.adacel.com.au>
References: <5.1.0.14.0.20011205202851.016e2e88@127.0.0.1>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


At 11:35 PM 2001-12-05, Steven Legg wrote:


>Kurt,
>
>> It seems to me that the definition of a "naming context"
>> in LDAP/X.500 makes little sense in the face of multi-master
>> replication.  An LDAP/X.500 naming context is a subtree of
>> entries held in a single master DSA.
>>
>> Consider three DSAs A, B, C where
>>       A masters subtree X
>>       B masters subtree Y
>>       C masters subtrees X and Y, and
>>       Subtrees X and Y are adjacent.
>
>From what follows, I assume that Y is subordinate to X.
>
>> If A and B are masters and C is a shadow of each,
>> LDAP/X.500 says that
>>       A holds context X, B holds context Y, and
>>       C holds contexts X and Y.
>
>C holds a shadow copy of contexts X and Y.

Yes, but the distinction here is what values go into
the root DSE namingContexts attribute of each server.
  A publishes the name of the vertex of X.
  B publishes the name of the vertex of Y.
  C publishes the names of the vertex of X and Y.


>> If C masters X and Y and A and B are shadows,
>> LDAP/X.500 says that:
>>       A, B, C holds context X
>
>C holds context X, which is now the union of the subtrees X and Y,
>i.e. there is no longer a context Y. A holds a shadow copy of a
>portion (subtree X) of context X. B holds a shadow copy of a different
>portion (subtree Y) of context X.

Yes,
  A, B, and C each publish the name of the vertex of X.

>>
>> If A, B, and C master the subtrees they hold, which
>> contexts does LDUP say they hold?
>
>Having an entry mastered by more than one master DSA doesn't invalidate
>the definition of naming context as far as I can see, but we do need
>to be a bit more careful how we phrase things.
>
>A holds a naming context with the context prefix being the root of the
>subtree X. B holds a naming context with the context prefix being the
>root of the subtree Y. C holds a naming context with the context prefix
>being the root of subtree X, the same root as A but with a superset of the
>entries.

You imply that context prefix information is not replicated
between masters and the definition of a context is local to
each master.

There are three basic ways one could define naming contexts
in the face of multi-master replication.
        a) contexts are determined at each master
        b) contexts are determined across all masters
        c) contexts are determined at one master

I note that in single-master replication, naming contexts are
defined consistent with all three.  In multi-master, you need
to choose one.  You appear to choose a), yes?

>For LDUP we're okay if we say we are replicating "replication contexts"
>rather than "naming contexts".

Yes, one could consider the X and Y subtrees as "replication contexts".

>C can be said to hold two adjacent replication contexts (for subtree X and subtree Y).

Yes.  One could rephrase the question in terms of replication
contexts, not subtrees.

I think you are saying that where a server masters multiple
adjacent replication contexts, these replication contexts
comprise one naming context on that server.  Yes?
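That reading, adjacent replication contexts mastered by one server collapsing into a single naming context, could be sketched roughly as follows. DNs are treated as plain comma-separated strings (a deliberate simplification) and the names are hypothetical:

```python
def naming_contexts(replication_contexts):
    """Collapse adjacent replication contexts into naming contexts.

    A context whose vertex is subordinate to another context held on
    the same server merges into the superior one, so only the topmost
    vertices remain.  DNs are compared as simple comma-separated
    strings, which is a simplification of real DN matching.
    """
    def is_subordinate(dn, other):
        return dn != other and dn.endswith("," + other)

    return [dn for dn in replication_contexts
            if not any(is_subordinate(dn, other)
                       for other in replication_contexts)]

# Server C masters two adjacent replication contexts, X and Y,
# where Y's vertex is subordinate to X (hypothetical DNs):
held = ["ou=X,o=example", "ou=Y,ou=X,o=example"]
print(naming_contexts(held))   # only the vertex of X survives
```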



From owner-ietf-ldup@mail.imc.org  Thu Dec  6 09:04:19 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id JAA25492
	for <ldup-archive@odin.ietf.org>; Thu, 6 Dec 2001 09:04:19 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fB6Dj5f09355
	for ietf-ldup-bks; Thu, 6 Dec 2001 05:45:05 -0800 (PST)
Received: from smtp.oncalldba.com (roc-24-169-98-153.rochester.rr.com [24.169.98.153])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fB6Dj4209350
	for <ietf-ldup@imc.org>; Thu, 6 Dec 2001 05:45:04 -0800 (PST)
Received: from RMINC_DOM-MTA by smtp.oncalldba.com
	with Novell_GroupWise; Thu, 06 Dec 2001 08:36:05 -0700
Message-Id: <sc0f2df5.001@smtp.oncalldba.com>
X-Mailer: Novell GroupWise Internet Agent 6.0
Date: Thu, 06 Dec 2001 08:35:51 -0700
From: "Ed Reed" <eer@OnCallDBA.COM>
To: <steven.legg@adacel.com.au>, <ietf-ldup@imc.org>, <Kurt@OpenLDAP.org>
Subject: RE: naming contexts
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
X-MIME-Autoconverted: from quoted-printable to 8bit by above.proper.com id fB6Dj5209351
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


1) I'm pleased to see Kurt referring to a "Primary Master", as I think such a concept is needed to represent the single master of replica topology information about a replication context.  However,

2) I agree with Steven that what becomes interesting in a multi-master environment is the replication context, not the naming context, held by a server.  Frankly, in the mostly-distributed directories I've worked with, replication contexts were all that were available, since they're more informative about what is actually held, allow clients to piece together a minimal set of servers needed to fully traverse a distinguished name when searching for a named entry, etc.

But here is the interesting point - there IS a place for a "Primary Master" when you ABSOLUTELY NEED some data in the DIT to be single-mastered, even if the rest of the data can be multi-mastered.  In my example 1 above, those data are the operational attributes and subentries associated with the definition, description and management of a replication area.

Thus, the Model (Architecture) document and, I think, the Information Model, still retain the Primary master replica type, even if the requirements document has dropped it.

One fine point, though - I don't know how to represent or manage a scenario where DIFFERENT DATA are "owned" by different PRIMARY MASTER replicas in the same replication area...I know how to deal with and manage having a single replica designated as the primary of all the masters, but not how to tag ownership of specific schema elements.  

So, the information model places the replica type designation on the replicaSubentry, and only one replica of the replication area is deemed to be the "primary master" of the replication context.

Ed

=================
Ed Reed
Reed-Matthews, Inc.
+1 585 624 2402
http://www.Reed-Matthews.COM
Note:  Area code is 585

>>> "Steven Legg" <steven.legg@adacel.com.au> 12/06/01 02:35AM >>>


Kurt,

> It seems to me that the definition of a "naming context"
> in LDAP/X.500 makes little sense in the face of multi-master
> replication.  A LDAP/X.500 naming context is a subtree of
> entries held in a single master DSA.
>
> Consider three DSAs A, B, C where
> 	A masters subtree X
> 	B masters subtree Y
> 	C masters subtrees X and Y, and
> 	Subtrees X and Y are adjacent.

From what follows, I assume that Y is subordinate to X.

> If A and B are masters and C is a shadow of each,
> LDAP/X.500 says that
> 	A holds context X, B holds context Y, and
> 	C holds contexts X and Y.

C holds a shadow copy of contexts X and Y.

> If C masters X and Y and A and B are shadows,
> LDAP/X.500 says that:
> 	A, B, C holds context X

C holds context X, which is now the union of the subtrees X and Y,
i.e. there is no longer a context Y. A holds a shadow copy of a
portion (subtree X) of context X. B holds a shadow copy of a different
portion (subtree Y) of context X.
>
> If A, B, and C master the subtrees they hold, which
> contexts does LDUP say they hold?

Having an entry mastered by more than one master DSA doesn't invalidate
the definition of naming context as far as I can see, but we do need
to be a bit more careful how we phrase things.

A holds a naming context with the context prefix being the root of the
subtree X. B holds a naming context with the context prefix being the
root of the subtree Y. C holds a naming context with the context prefix
being the root of subtree X, the same root as A but with a superset of the
entries. Off the top of my head I can't think of anything that is broken
because A and C have the same context prefix but unequal sets of entries
in their naming contexts.

For LDUP we're okay if we say we are replicating "replication contexts"
rather than "naming contexts". C can be said to hold two adjacent
replication
contexts (for subtree X and subtree Y).

>
> I can only find one reasonable way to answer this question,
> it requires each entry to be held by one and only one
> "primary" master DSA and defining the LDUP naming context
> as a subtree of entries held in a single "primary" master
> DSA.  I believe this same solution can be used to define
> other terms and to detail directory models for multi-master
> replication which map reasonably well onto the X.500 models.

I don't think we need impose this restriction for
what is only a definitional problem.

>
> I recommend the WG consider defining the LDUP multi-master
> replication directory models such that, for any particular
> entry, there is only one "primary" master DSA and zero or
> more "secondary" master DSAs.   Otherwise, defining the
> models consistent with the LDAP/X.500 models will be
> extremely difficult.

Regards,
Steven



From owner-ietf-ldup@mail.imc.org  Thu Dec  6 09:57:12 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id JAA26717
	for <ldup-archive@odin.ietf.org>; Thu, 6 Dec 2001 09:57:11 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fB6Egj312350
	for ietf-ldup-bks; Thu, 6 Dec 2001 06:42:45 -0800 (PST)
Received: from e1.ny.us.ibm.com (e1.ny.us.ibm.com [32.97.182.101])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fB6Egi212345
	for <ietf-ldup@imc.org>; Thu, 6 Dec 2001 06:42:44 -0800 (PST)
Received: from northrelay01.pok.ibm.com (northrelay01.pok.ibm.com [9.117.200.21])
	by e1.ny.us.ibm.com (8.9.3/8.9.3) with ESMTP id JAA324016
	for <ietf-ldup@imc.org>; Thu, 6 Dec 2001 09:39:52 -0500
Received: from d01mlc96.pok.ibm.com (d01mlc96.pok.ibm.com [9.117.250.33])
	by northrelay01.pok.ibm.com (8.11.1m3/NCO v5.01) with ESMTP id fB6Egbj135768
	for <ietf-ldup@imc.org>; Thu, 6 Dec 2001 09:42:37 -0500
To: ietf-ldup@imc.org
MIME-Version: 1.0
Subject: RE: naming contexts
X-Mailer: Lotus Notes Release 5.0.7  March 21, 2001
From: "Timothy Hahn" <hahnt@us.ibm.com>
Message-ID: <OF1120D83C.C3EE954B-ON85256B1A.004EBC8F@pok.ibm.com>
Date: Thu, 6 Dec 2001 09:42:36 -0500
X-MIMETrack: Serialize by Router on D01MLC96/01/M/IBM(Release 5.0.9 |November 26, 2001) at
 12/06/2001 09:42:38 AM,
	Serialize complete at 12/06/2001 09:42:38 AM
Content-Type: multipart/alternative; boundary="=_alternative 004F835685256B1A_="
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


This is a multipart message in MIME format.
--=_alternative 004F835685256B1A_=
Content-Type: text/plain; charset="us-ascii"

Ed,

In its current state, the information model does not yet contain 
indicators of "primary master".  I believe we are, currently, requiring 
that IF replicaSubentry information is entered at different server 
instances, it WILL be entered consistently with respect to what was 
entered on the other server instances.  If not, then results are 
indeterminate.  If so, then the two servers will "synchronize".

Either that or we were thinking that the "initial set of replicaSubEntry" 
information would be entered on ONE OF the servers and then replicated to 
(and so be consistent with) the OTHER servers that were referenced.  The 
idea being that no SINGLE server is noted as "primary master" - just that 
ONE server is used to enter the "initial settings".

This is what I recall anyway,
Tim Hahn

Internet: hahnt@us.ibm.com
Internal: Timothy Hahn/Endicott/IBM@IBMUS or IBMUSM00(HAHNT)
phone: 607.752.6388     tie-line: 8/852.6388
fax: 607.752.3681





"Ed Reed" <eer@OnCallDBA.COM>
Sent by: owner-ietf-ldup@mail.imc.org
12/06/2001 10:35 AM

 
        To:     <steven.legg@adacel.com.au>, <ietf-ldup@imc.org>, <Kurt@OpenLDAP.org>
        cc: 
        Subject:        RE: naming contexts

 


1) I'm pleased to see Kurt referring to a "Primary Master", as I think 
such a concept is needed to represent the single-master of replica 
topology information about a replication context.  However,

2) I agree with Steven that what becomes interesting in a multi-master 
environment is the replication context, not the naming context, held by a 
server.  Frankly, in the mostly-distributed directories I've worked with, 
replication contexts were all that were available, since they're more 
informative about what is actually held, allow clients to piece together a 
minimal set of servers needed to fully traverse a distinguished name when 
searching for a named entry, etc.

But here is the interesting point - there IS a place for a "Primary Master" 
when you ABSOLUTELY NEED some data in the DIT to be single-mastered, even 
if the rest of the data can be multi-mastered.  In my example 1 above, 
those data are the operational attributes and subentries associated with 
the definition, description and management of a replication area.

Thus, the Model (Architecture) document and, I think, the Information 
Model, still retain the Primary master replica type, even if the 
requirements document has dropped it.

One fine point, though - I don't know how to represent or manage a 
scenario where DIFFERENT DATA are "owned" by different PRIMARY MASTER 
replicas in the same replication area...I know how to deal with and manage 
having a single replica designated as the primary of all the masters, but 
not how to tag ownership of specific schema elements. 

So, the information model places the replica type designation on the 
replicaSubentry, and only one replica of the replication area is deemed to 
be the "primary master" of the replication context.

Ed

=================
Ed Reed
Reed-Matthews, Inc.
+1 585 624 2402
http://www.Reed-Matthews.COM
Note:  Area code is 585

>>> "Steven Legg" <steven.legg@adacel.com.au> 12/06/01 02:35AM >>>


Kurt,

> It seems to me that the definition of a "naming context"
> in LDAP/X.500 makes little sense in the face of multi-master
> replication.  A LDAP/X.500 naming context is a subtree of
> entries held in a single master DSA.
>
> Consider three DSAs A, B, C where
>                A masters subtree X
>                B masters subtree Y
>                C masters subtrees X and Y, and
>                Subtrees X and Y are adjacent.

From what follows, I assume that Y is subordinate to X.

> If A and B are masters and C is a shadow of each,
> LDAP/X.500 says that
>                A holds context X, B holds context Y, and
>                C holds contexts X and Y.

C holds a shadow copy of contexts X and Y.

> If C masters X and Y and A and B are shadows,
> LDAP/X.500 says that:
>                A, B, C holds context X

C holds context X, which is now the union of the subtrees X and Y,
i.e. there is no longer a context Y. A holds a shadow copy of a
portion (subtree X) of context X. B holds a shadow copy of a different
portion (subtree Y) of context X.
>
> If A, B, and C master the subtrees they hold, which
> contexts does LDUP say they hold?

Having an entry mastered by more than one master DSA doesn't invalidate
the definition of naming context as far as I can see, but we do need
to be a bit more careful how we phrase things.

A holds a naming context with the context prefix being the root of the
subtree X. B holds a naming context with the context prefix being the
root of the subtree Y. C holds a naming context with the context prefix
being the root of subtree X, the same root as A but with a superset of the
entries. Off the top of my head I can't think of anything that is broken
because A and C have the same context prefix but unequal sets of entries
in their naming contexts.

For LDUP we're okay if we say we are replicating "replication contexts"
rather than "naming contexts". C can be said to hold two adjacent
replication
contexts (for subtree X and subtree Y).

>
> I can only find one reasonable way to answer this question,
> it requires each entry to be held by one and only one
> "primary" master DSA and defining the LDUP naming context
> as a subtree of entries held in a single "primary" master
> DSA.  I believe this same solution can be used to define
> other terms and to detail directory models for multi-master
> replication which map reasonably well onto the X.500 models.

I don't think we need impose this restriction for
what is only a definitional problem.

>
> I recommend the WG consider defining the LDUP multi-master
> replication directory models such that, for any particular
> entry, there is only one "primary" master DSA and zero or
> more "secondary" master DSAs.   Otherwise, defining the
> models consistent with the LDAP/X.500 models will be
> extremely difficult.

Regards,
Steven




--=_alternative 004F835685256B1A_=--


From owner-ietf-ldup@mail.imc.org  Thu Dec  6 10:23:38 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id KAA27358
	for <ldup-archive@lists.ietf.org>; Thu, 6 Dec 2001 10:23:38 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fB6F0Fp13694
	for ietf-ldup-bks; Thu, 6 Dec 2001 07:00:15 -0800 (PST)
Received: from smtp.oncalldba.com (roc-24-169-98-153.rochester.rr.com [24.169.98.153])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fB6F0D213689
	for <ietf-ldup@imc.org>; Thu, 6 Dec 2001 07:00:13 -0800 (PST)
Received: from RMINC_DOM-MTA by smtp.oncalldba.com
	with Novell_GroupWise; Thu, 06 Dec 2001 09:51:13 -0700
Message-Id: <sc0f3f91.004@smtp.oncalldba.com>
X-Mailer: Novell GroupWise Internet Agent 6.0
Date: Thu, 06 Dec 2001 09:51:07 -0700
From: "Ed Reed" <eer@OnCallDBA.COM>
To: <steven.legg@adacel.com.au>, <Kurt@OpenLDAP.org>
Cc: <ietf-ldup@imc.org>
Subject: RE: naming contexts
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
X-MIME-Autoconverted: from quoted-printable to 8bit by above.proper.com id fB6F0E213691
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


<eer> comments </eer>

=================
Ed Reed
Reed-Matthews, Inc.
+1 585 624 2402
http://www.Reed-Matthews.COM
Note:  Area code is 585

>>> "Kurt D. Zeilenga" <Kurt@OpenLDAP.org> 12/06/01 08:45AM >>>

At 11:35 PM 2001-12-05, Steven Legg wrote:


>Kurt,
>
...

>> If C masters X and Y and A and B are shadows,
>> LDAP/X.500 says that:
>>       A, B, C holds context X
>
>C holds context X, which is now the union of the subtrees X and Y,
>i.e. there is no longer a context Y. A holds a shadow copy of a
>portion (subtree X) of context X. B holds a shadow copy of a different
>portion (subtree Y) of context X.

Yes,
  A, B, and C each publish the name of the vertex of X.

<eer> my interpretation is that yes, A, B, and C all publish X as a 
naming context on their rootDSE.  They will also publish X as a 
replication context on their rootDSE.  B and C will also publish Y as a
replication context on their rootDSE.  

This example is why I was finally persuaded that naming contexts
are different from replication contexts, and that a different rootDSE
attribute was needed to hold the names of vertices of replication
contexts separately from the names of naming contexts.
</eer>

>>
>> If A, B, and C master the subtrees they hold, which
>> contexts does LDUP say they hold?
>
>Having an entry mastered by more that one master DSA doesn't invalidate
>the definition of naming context as far as I can see, but we do need
>to be a bit more careful how we phrase things.
>
>A holds a naming context with the context prefix being the root of the
>subtree X. B holds a naming context with the context prefix being the
>root of the subtree Y. C holds a naming context with the context prefix
>being the root of subtree X, the same root as A but with a superset of the
>entries.

You imply that context prefix information is not replicated
between masters and the definition of a context is local to
each master.

There are three basic ways one could define naming contexts
in the face of multi-master replication.
        a) contexts are determined at each master
        b) contexts are determined across all masters
        c) contexts are determined at one master

I note that in single-master replication, naming contexts are
defined consistent with all three.  In multi-master, you need
to choose one.  You appear to choose a), yes?

<eer> My own personal preference is that naming contexts, as I understand
them, are determined for each server, describing the vertices of subtrees
held by that server.

Clearly, at that point, the naming context doesn't describe a vertex in the
DIT, but rather a vertex in a server's DIB.

Knowledge about subordinate subtrees not held locally is defined, in my
world view, in subordinate entries held by servers holding the immediately
superior namespace to the absent subtree.  

In my world view, those subordinate entries are a special type of replica
of the subordinate namespace - they hold only the name of the vertex
of the subordinate namespace and the replicaSubentries for the replication
area that begins at that vertex.  The replicaSubentries contain accessPoint
and replica type attributes for the server that holds each corresponding replica, 
so that a client can pick a replica of the appropriate type.  Alternatively,
if a server conducting a subtree search for a client encounters one of the
subordinate entries, it can then refer the client to one of the other servers to 
continue the search.
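A rough sketch of that client-side selection follows. The dictionary layout is a made-up illustration; only the replicaType numbering (1-Primary, 2-Updateable, 3-ReadOnly) comes from the draft:

```python
# Hypothetical replicaSubentry data for one replication area.  The
# keys mirror the draft's replicaType and accessPoint attributes, but
# this flat structure is an illustration, not the draft's schema.
replica_subentries = [
    {"accessPoint": "ldap://a.example.com", "replicaType": 2},
    {"accessPoint": "ldap://b.example.com", "replicaType": 3},
    {"accessPoint": "ldap://c.example.com", "replicaType": 1},
]

def pick_replica(subentries, wanted_type):
    """Return the access point of the first replica of the wanted type,
    or None if the replication area has no replica of that type."""
    for entry in subentries:
        if entry["replicaType"] == wanted_type:
            return entry["accessPoint"]
    return None

# A client that must write single-mastered data picks type 1 (Primary):
print(pick_replica(replica_subentries, 1))   # ldap://c.example.com
```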

I think this is the "normal" X.500 use of subordinate entries, though I'm fuzzy as
to whether they're simply "glue" entries or "real" entries in the superior
namespace.  I recall being told at one point that the scheme I'm describing
is similar to the one QUIPU used.  Certainly, it's a style of specific subordinate
entries, not nonspecific subordinate entries (NSSE).

The use of replica topology information seems useful, to me, as a natural
result of choosing to store the replica topology information in the directory.

I've not tried to be rigorous in my use of terms, here, but rather to convey
a broad impression of what's desirable.  Distributed operations based on
using this in-the-tree replica topology information need to be rigorously
specified, but alas, I know of no LDAP design group chartered to undertake
such an endeavor.

</eer>

>For LDUP we're okay if we say we are replicating "replication contexts"
>rather than "naming contexts".

Yes, one could consider the X and Y subtrees as "replication contexts".

>C can be said to hold two adjacent replication contexts (for subtree X and subtree Y).

Yes.  One could rephrase the question in terms of replication
contexts, not subtrees.

I think you are saying that where a server masters multiple
adjacent replication contexts, these replication contexts
comprise one naming context on that server.  Yes?

<eer>
yes, because that's our interpretation of what people expect to be told by a server when it is asked about the naming contexts that it holds.  And thus, the additional rootDSE attribute to hold information about the replication contexts it holds, which may well list more vertices than the list of naming contexts it holds.
</eer>


From owner-ietf-ldup@mail.imc.org  Thu Dec  6 10:39:40 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id KAA27725
	for <ldup-archive@lists.ietf.org>; Thu, 6 Dec 2001 10:39:39 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fB6FOwY14267
	for ietf-ldup-bks; Thu, 6 Dec 2001 07:24:58 -0800 (PST)
Received: from smtp.oncalldba.com (roc-24-169-98-153.rochester.rr.com [24.169.98.153])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fB6FOu214260
	for <ietf-ldup@imc.org>; Thu, 6 Dec 2001 07:24:56 -0800 (PST)
Received: from RMINC_DOM-MTA by smtp.oncalldba.com
	with Novell_GroupWise; Thu, 06 Dec 2001 10:15:58 -0700
Message-Id: <sc0f455e.006@smtp.oncalldba.com>
X-Mailer: Novell GroupWise Internet Agent 6.0
Date: Thu, 06 Dec 2001 10:15:41 -0700
From: "Ed Reed" <eer@OnCallDBA.COM>
To: <ietf-ldup@imc.org>, <hahnt@us.ibm.com>
Subject: RE: naming contexts
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
X-MIME-Autoconverted: from quoted-printable to 8bit by above.proper.com id fB6FOv214264
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>
Content-Transfer-Encoding: 8bit


Tim - from the currently posted draft...

8.2.8. replicaType 
    
   (2.16.840.1.113719.1.142.4.4 NAME 'replicaType' 
      DESC 'Enum: 0-reserved, 1-Primary, 2-Updateable, 
            3-ReadOnly, all others reserved' 
      SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 
      EQUALITY integerMatch 
      SINGLE-VALUE 
      NO-USER-MODIFICATION 
      USAGE dSAOperation ) 
    
   ReplicaType is a simple enumeration, used to identify what kind of 
   replica is being described in a Replica object entry.

It is replicaType 1-Primary that I'm speaking of.
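A minimal, hypothetical sketch (the enum class and helper are illustrative only, not part of the draft) of the enumeration defined above:

```python
# Hypothetical sketch of the replicaType enumeration, mirroring the
# DESC string in the attribute definition above.
from enum import IntEnum

class ReplicaType(IntEnum):
    RESERVED = 0    # 0 - reserved
    PRIMARY = 1     # 1 - Primary (the value being discussed)
    UPDATEABLE = 2  # 2 - Updateable
    READ_ONLY = 3   # 3 - ReadOnly; all other values are reserved

def parse_replica_type(value: int) -> ReplicaType:
    """Map a raw INTEGER attribute value to the enumeration,
    treating anything outside 0-3 as reserved."""
    try:
        return ReplicaType(value)
    except ValueError:
        return ReplicaType.RESERVED
```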

Ed

=================
Ed Reed
Reed-Matthews, Inc.
+1 585 624 2402
http://www.Reed-Matthews.COM
Note:  Area code is 585

>>> "Timothy Hahn" <hahnt@us.ibm.com> 12/06/01 09:42AM >>>
Ed,

In its current state, the information model does not yet contain 
indicators of "primary master".  I believe we are, currently, requiring 
that IF replicaSubentry information is entered at different server 
instances, that it WILL be entered consistently with respect to what was 
entered on the other server instances.  If not, then results are 
indeterminate.  If so, then the two servers will "synchronize".

Either that or we were thinking that the "initial set of replicaSubEntry" 
information would be entered on ONE OF the servers and then replicated to 
(and so be consistent) with the OTHER servers that were referenced.  The 
idea being that no SINGLE server is noted as "primary master" - just that 
ONE server is used to enter the "initial settings".

This is what I recall anyway,
Tim Hahn

Internet: hahnt@us.ibm.com 
Internal: Timothy Hahn/Endicott/IBM@IBMUS or IBMUSM00(HAHNT)
phone: 607.752.6388     tie-line: 8/852.6388
fax: 607.752.3681





"Ed Reed" <eer@OnCallDBA.COM>
Sent by: owner-ietf-ldup@mail.imc.org 
12/06/2001 10:35 AM

 
        To:     <steven.legg@adacel.com.au>, <ietf-ldup@imc.org>, <Kurt@OpenLDAP.org>
        cc: 
        Subject:        RE: naming contexts

 


1) I'm pleased to see Kurt referring to a "Primary Master", as I think 
such a concept is needed to represent the single-master of replica 
topology information about a replication context.  However,

2) I agree with Steven that what becomes interesting in a multi-master 
environment is the replication context, not the naming context, held by a 
server.  Frankly, in the mostly-distributed directories I've worked with, 
replication contexts were all that were available, since they're more 
informative about what is actually held, allow clients to piece together a 
minimal set of servers needed to fully traverse a distinguished name when 
searching for a named entry, etc.

But here's the interesting point - there IS a place for a "Primary Master" 
when you ABSOLUTELY NEED some data in the DIT to be single-mastered, even 
if the rest of the data can be multi-mastered.  In my example 1 above, 
those data are the operational attributes and subentries associated with 
the definition, description and management of a replication area.

Thus, the Model (Architecture) document and, I think, the Information 
Model, still retain the Primary master replica type, even if the 
requirements document has dropped it.

One fine point, though - I don't know how to represent or manage a 
scenario where DIFFERENT DATA are "owned" by different PRIMARY MASTER 
replicas in the same replication area...I know how to deal with and 
manage having a single replica designated as the primary of all the 
masters, but not how to tag ownership of specific schema elements. 

So, the information model places the replica type designation on the 
replicaSubentry, and only one replica of the replication area is deemed to 
be the "primary master" of the replication context.

Ed

=================
Ed Reed
Reed-Matthews, Inc.
+1 585 624 2402
http://www.Reed-Matthews.COM 
Note:  Area code is 585

>>> "Steven Legg" <steven.legg@adacel.com.au> 12/06/01 02:35AM >>>


Kurt,

> It seems to me that the definition of a "naming context"
> in LDAP/X.500 makes little sense in the face of multi-master
> replication.  An LDAP/X.500 naming context is a subtree of
> entries held in a single master DSA.
>
> Consider three DSAs A, B, C where
>                A masters subtree X
>                B masters subtree Y
>                C masters subtrees X and Y, and
>                Subtrees X and Y are adjacent.

From what follows, I assume that Y is subordinate to X.

> If A and B are masters and C is a shadow of each,
> LDAP/X.500 says that
>                A holds context X, B holds context Y, and
>                C holds contexts X and Y.

C holds a shadow copy of contexts X and Y.

> If C masters X and Y and A and B are shadows,
> LDAP/X.500 says that:
>                A, B, C holds context X

C holds context X, which is now the union of the subtrees X and Y,
i.e. there is no longer a context Y. A holds a shadow copy of a
portion (subtree X) of context X. B holds a shadow copy of a different
portion (subtree Y) of context X.
>
> If A, B, and C master the subtrees they hold, which
> contexts does LDUP say they hold?

Having an entry mastered by more than one master DSA doesn't invalidate
the definition of naming context as far as I can see, but we do need
to be a bit more careful how we phrase things.

A holds a naming context with the context prefix being the root of the
subtree X. B holds a naming context with the context prefix being the
root of the subtree Y. C holds a naming context with the context prefix
being the root of subtree X, the same root as A but with a superset of the
entries. Off the top of my head I can't think of anything that is broken
because A and C have the same context prefix but unequal sets of entries
in their naming contexts.

For LDUP we're okay if we say we are replicating "replication contexts"
rather than "naming contexts". C can be said to hold two adjacent
replication contexts (for subtree X and subtree Y).
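As a hedged illustration of the distinction (all names and structures here are hypothetical, not LDUP schema), the A/B/C example can be modeled as:

```python
# Hypothetical model of the A/B/C example: each server holds a set of
# replication contexts identified by the root of the subtree it masters.
servers = {
    "A": {"X"},        # A masters subtree X
    "B": {"Y"},        # B masters subtree Y
    "C": {"X", "Y"},   # C masters two adjacent replication contexts
}

SUPERIOR = {"Y": "X"}  # Y is subordinate to X (the assumption above)

def replication_contexts(server: str) -> set[str]:
    return servers[server]

def naming_contexts(server: str) -> set[str]:
    # A context whose superior is held on the same server merges into
    # its superior's naming context, so C reports only prefix X.
    held = servers[server]
    return {c for c in held if SUPERIOR.get(c) not in held}
```

With this model C advertises two replication contexts but only one naming context (prefix X), matching the rootDSE discussion earlier in the thread.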

>
> I can only find one reasonable way to answer this question,
> it requires each entry to be held by one and only one
> "primary" master DSA and defining the LDUP naming context
> as a subtree of entries held in a single "primary" master
> DSA.  I believe this same solution can be used to define
> other terms and to detail directory models for multi-master
> replication which maps reasonably well onto the X.500 models.

I don't think we need to impose this restriction for
what is only a definitional problem.

>
> I recommend the WG consider defining the LDUP multi-master
> replication directory models such that, for any particular
> entry, there is only one "primary" master DSA and zero or
> more "secondary" master DSAs.   Otherwise, defining the
> models consistent with the LDAP/X.500 models will be
> extremely difficult.

Regards,
Steven





From owner-ietf-ldup@mail.imc.org  Thu Dec  6 11:15:16 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id LAA29360
	for <ldup-archive@lists.ietf.org>; Thu, 6 Dec 2001 11:15:16 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fB6FwJb17841
	for ietf-ldup-bks; Thu, 6 Dec 2001 07:58:19 -0800 (PST)
Received: from e1.ny.us.ibm.com (e1.ny.us.ibm.com [32.97.182.101])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fB6FwG217830
	for <ietf-ldup@imc.org>; Thu, 6 Dec 2001 07:58:16 -0800 (PST)
Received: from northrelay03.pok.ibm.com (northrelay03.pok.ibm.com [9.117.200.23])
	by e1.ny.us.ibm.com (8.9.3/8.9.3) with ESMTP id KAA396292
	for <ietf-ldup@imc.org>; Thu, 6 Dec 2001 10:55:25 -0500
Received: from d01mlc96.pok.ibm.com (d01mlc96.pok.ibm.com [9.117.250.33])
	by northrelay03.pok.ibm.com (8.11.1m3/NCO v5.01) with ESMTP id fB6FwAD150680
	for <ietf-ldup@imc.org>; Thu, 6 Dec 2001 10:58:10 -0500
To: ietf-ldup@imc.org
MIME-Version: 1.0
Subject: RE: naming contexts
X-Mailer: Lotus Notes Release 5.0.7  March 21, 2001
From: "Timothy Hahn" <hahnt@us.ibm.com>
Message-ID: <OFDA6A5260.88AE22A8-ON85256B1A.00571B62@pok.ibm.com>
Date: Thu, 6 Dec 2001 10:58:10 -0500
X-MIMETrack: Serialize by Router on D01MLC96/01/M/IBM(Release 5.0.9 |November 26, 2001) at
 12/06/2001 10:58:11 AM,
	Serialize complete at 12/06/2001 10:58:11 AM
Content-Type: multipart/alternative; boundary="=_alternative 00575A5885256B1A_="
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


This is a multipart message in MIME format.
--=_alternative 00575A5885256B1A_=
Content-Type: text/plain; charset="us-ascii"

Ed,

Yes, but the explanation in the DRAFT for this attribute is MUCH LESS 
restrictive than what you have been implying.

There are indications that NO Primary replica need exist, it need NOT be 
the same for different replication contexts, and even that MULTIPLE 
primary replicas might exist.  Thus, I have not interpreted the meaning of 
replicaType=Primary to be as "strong" as was implied in earlier postings 
on this thread.

Regards,
Tim Hahn

Internet: hahnt@us.ibm.com
Internal: Timothy Hahn/Endicott/IBM@IBMUS or IBMUSM00(HAHNT)
phone: 607.752.6388     tie-line: 8/852.6388
fax: 607.752.3681





"Ed Reed" <eer@OnCallDBA.COM>
12/06/2001 12:15 PM

 
        To:     <ietf-ldup@imc.org>, Timothy Hahn/Endicott/IBM@IBMUS
        cc: 
        Subject:        RE: naming contexts

 

Tim - from the currently posted draft...

8.2.8. replicaType 
 
   (2.16.840.1.113719.1.142.4.4 NAME 'replicaType' 
      DESC 'Enum: 0-reserved, 1-Primary, 2-Updateable, 
            3-ReadOnly, all others reserved' 
      SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 
      EQUALITY integerMatch 
      SINGLE-VALUE 
      NO-USER-MODIFICATION 
      USAGE dSAOperation ) 
 
   ReplicaType is a simple enumeration, used to identify what kind of 
   replica is being described in a Replica object entry.

It is replicaType 1-Primary that I'm speaking of.

Ed

=================
Ed Reed
Reed-Matthews, Inc.
+1 585 624 2402
http://www.Reed-Matthews.COM
Note:  Area code is 585

>>> "Timothy Hahn" <hahnt@us.ibm.com> 12/06/01 09:42AM >>>
Ed,

In its current state, the information model does not yet contain 
indicators of "primary master".  I believe we are, currently, requiring 
that IF replicaSubentry information is entered at different server 
instances, that it WILL be entered consistently with respect to what was 
entered on the other server instances.  If not, then results are 
indeterminate.  If so, then the two servers will "synchronize".

Either that or we were thinking that the "initial set of replicaSubEntry" 
information would be entered on ONE OF the servers and then replicated to 
(and so be consistent) with the OTHER servers that were referenced.  The 
idea being that no SINGLE server is noted as "primary master" - just that 
ONE server is used to enter the "initial settings".

This is what I recall anyway,
Tim Hahn

Internet: hahnt@us.ibm.com 
Internal: Timothy Hahn/Endicott/IBM@IBMUS or IBMUSM00(HAHNT)
phone: 607.752.6388     tie-line: 8/852.6388
fax: 607.752.3681





"Ed Reed" <eer@OnCallDBA.COM>
Sent by: owner-ietf-ldup@mail.imc.org 
12/06/2001 10:35 AM

 
        To:     <steven.legg@adacel.com.au>, <ietf-ldup@imc.org>, 
<Kurt@OpenLDAP.org>
        cc: 
        Subject:        RE: naming contexts

 


1) I'm pleased to see Kurt referring to a "Primary Master", as I think 
such a concept is needed to represent the single-master of replica 
topology information about a replication context.  However,

2) I agree with Steven that what becomes interesting in a multi-master 
environment is the replication context, not the naming context, held by a 
server.  Frankly, in the mostly-distributed directories I've worked with, 
replication contexts were all that were available, since they're more 
informative about what is actually held, allow clients to piece together a 
minimal set of servers needed to fully traverse a distinguished name when 
searching for a named entry, etc.

But here's the interesting point - there IS a place for a "Primary Master" 
when you ABSOLUTELY NEED some data in the DIT to be single-mastered, even 
if the rest of the data can be multi-mastered.  In my example 1 above, 
those data are the operational attributes and subentries associated with 
the definition, description and management of a replication area.

Thus, the Model (Architecture) document and, I think, the Information 
Model, still retain the Primary master replica type, even if the 
requirements document has dropped it.

One fine point, though - I don't know how to represent or manage a 
scenario where DIFFERENT DATA are "owned" by different PRIMARY MASTER 
replicas in the same replication area...I know how to deal with and 
manage having a single replica designated as the primary of all the 
masters, but not how to tag ownership of specific schema elements. 

So, the information model places the replica type designation on the 
replicaSubentry, and only one replica of the replication area is deemed to 
be the "primary master" of the replication context.

Ed

=================
Ed Reed
Reed-Matthews, Inc.
+1 585 624 2402
http://www.Reed-Matthews.COM 
Note:  Area code is 585

>>> "Steven Legg" <steven.legg@adacel.com.au> 12/06/01 02:35AM >>>


Kurt,

> It seems to me that the definition of a "naming context"
> in LDAP/X.500 makes little sense in the face of multi-master
> replication.  An LDAP/X.500 naming context is a subtree of
> entries held in a single master DSA.
>
> Consider three DSAs A, B, C where
>                A masters subtree X
>                B masters subtree Y
>                C masters subtrees X and Y, and
>                Subtrees X and Y are adjacent.

From what follows, I assume that Y is subordinate to X.

> If A and B are masters and C is a shadow of each,
> LDAP/X.500 says that
>                A holds context X, B holds context Y, and
>                C holds contexts X and Y.

C holds a shadow copy of contexts X and Y.

> If C masters X and Y and A and B are shadows,
> LDAP/X.500 says that:
>                A, B, C holds context X

C holds context X, which is now the union of the subtrees X and Y,
i.e. there is no longer a context Y. A holds a shadow copy of a
portion (subtree X) of context X. B holds a shadow copy of a different
portion (subtree Y) of context X.
>
> If A, B, and C master the subtrees they hold, which
> contexts does LDUP say they hold?

Having an entry mastered by more than one master DSA doesn't invalidate
the definition of naming context as far as I can see, but we do need
to be a bit more careful how we phrase things.

A holds a naming context with the context prefix being the root of the
subtree X. B holds a naming context with the context prefix being the
root of the subtree Y. C holds a naming context with the context prefix
being the root of subtree X, the same root as A but with a superset of the
entries. Off the top of my head I can't think of anything that is broken
because A and C have the same context prefix but unequal sets of entries
in their naming contexts.

For LDUP we're okay if we say we are replicating "replication contexts"
rather than "naming contexts". C can be said to hold two adjacent
replication contexts (for subtree X and subtree Y).

>
> I can only find one reasonable way to answer this question,
> it requires each entry to be held by one and only one
> "primary" master DSA and defining the LDUP naming context
> as a subtree of entries held in a single "primary" master
> DSA.  I believe this same solution can be used to define
> other terms and to detail directory models for multi-master
> replication which maps reasonably well onto the X.500 models.

I don't think we need to impose this restriction for
what is only a definitional problem.

>
> I recommend the WG consider defining the LDUP multi-master
> replication directory models such that, for any particular
> entry, there is only one "primary" master DSA and zero or
> more "secondary" master DSAs.   Otherwise, defining the
> models consistent with the LDAP/X.500 models will be
> extremely difficult.

Regards,
Steven






--=_alternative 00575A5885256B1A_=--


From owner-ietf-ldup@mail.imc.org  Thu Dec  6 12:47:31 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id MAA05615
	for <ldup-archive@lists.ietf.org>; Thu, 6 Dec 2001 12:47:31 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fB6HCAs24206
	for ietf-ldup-bks; Thu, 6 Dec 2001 09:12:10 -0800 (PST)
Received: from smtp.oncalldba.com (roc-24-169-98-153.rochester.rr.com [24.169.98.153])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fB6HC8224202
	for <ietf-ldup@imc.org>; Thu, 6 Dec 2001 09:12:08 -0800 (PST)
Received: from RMINC_DOM-MTA by smtp.oncalldba.com
	with Novell_GroupWise; Thu, 06 Dec 2001 12:03:09 -0700
Message-Id: <sc0f5e7d.010@smtp.oncalldba.com>
X-Mailer: Novell GroupWise Internet Agent 6.0
Date: Thu, 06 Dec 2001 12:02:46 -0700
From: "Ed Reed" <eer@OnCallDBA.COM>
To: <ietf-ldup@imc.org>, <hahnt@us.ibm.com>
Subject: RE: naming contexts
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
X-MIME-Autoconverted: from quoted-printable to 8bit by above.proper.com id fB6HC9224203
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>
Content-Transfer-Encoding: 8bit


Yeah - there are those who believe they can manage the replication topology
information without resorting to using the Primary as the single-master for
replica topology information - I'm not one of them.  The draft verbiage
invites those who think they can to actively participate in the management
operations draft work that Ryan and others are working on...

So, I'm trying not to impose the limits of my understanding on others...and
to leave the door open to other approaches...I simply don't have a clue as
to how to make those other approaches work.

The one I understand how to do is to designate one of the replicas as the
Primary replica, to REQUIRE all replica topology information changes to
be made AT THE PRIMARY, and to then allow the Primary to shadow that
replica topology information to those servers that have a replicaSubentry.

This approach admittedly presumes that the primary replica OWNS the
replica topology information, and in turn might be considered to be the
owner of all the data in the replication area of the DIT.  That latter
point is subject to much more discussion, but it's a useful simplification.
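A minimal sketch of the rule just described, assuming hypothetical class and method names (nothing here is taken from the drafts):

```python
# Hypothetical sketch: replica topology changes are accepted only at
# the Primary replica; any other replica refuses them.
class TopologyError(Exception):
    pass

PRIMARY, UPDATEABLE = 1, 2  # illustrative replicaType values

class Replica:
    def __init__(self, name: str, replica_type: int):
        self.name = name
        self.replica_type = replica_type
        self.topology: set[str] = set()  # replicas known in this area

    def add_replica(self, name: str) -> None:
        if self.replica_type != PRIMARY:
            raise TopologyError(
                "replica topology changes must be made at the Primary")
        self.topology.add(name)
        # The Primary would then shadow the updated topology to every
        # server holding a replicaSubentry for this area.
```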

That notion of ownership returns the X.500 notion of Master as Owner
of data to the multi-master model (and is why I've always used the
term "updateable" where others insist on using "master").  If there is
a "master" in the X.500 sense of the word...owning the data...that
would be the "primary" replica.  Other updateable replicas hold copies
that may be changed by users, but the data owner could still be viewed
as the primary replica.

What's nice about this approach is that it explicitly identifies who gets
to decide whether a new replica will be allowed to be added to the topology -
the primary replica does.  I think that ability to constrain who may
begin replicating an area is an important defense against trawling for
data, and against denial-of-service attacks that create replicas and then
destroy them willy-nilly (attack:  while (true) {add replica; replicate
knowledge of the new replica to everyone else so they add the replica
to their update vector and purge vector arrays; take the replica off line
without removing its information from the replica topology on all the other
servers so they can never purge themselves};)
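The attack can be made concrete with a small hypothetical model of a purge vector (the structures and names are illustrative only):

```python
# Hypothetical model of the attack: a purge vector tracks the highest
# change each known replica has acknowledged, and changes can only be
# purged once *every* replica has seen them.
purge_vector = {"A": 100, "B": 100}

def can_purge_up_to(vector: dict[str, int]) -> int:
    # The purge horizon is the minimum acknowledged change number.
    return min(vector.values())

# A rogue replica joins, is advertised to every server, then is taken
# off line without ever acknowledging a change and without being
# removed from the topology...
purge_vector["rogue"] = 0

# ...so the purge horizon is pinned at 0 and no server can ever purge.
```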

This is an archetypal case where, even with a multi-master system, there
are still appropriate places where individual applications can and should
be able to choose to cooperate by using the multi-master directory in
a single-master fashion for their own data - in this case, the application is
LDUP, and the data are the replica topology information.  Other applications
may similarly choose to do so, and the primary replica, used by LDUP
in this instance to designate the "chosen" replica that is to be used to
coordinate (strictly serialize) updates to its data, may be similarly used
by those other applications, though they could also use some other
mechanism to select their application-specific "single master".

Ed

=================
Ed Reed
Reed-Matthews, Inc.
+1 585 624 2402
http://www.Reed-Matthews.COM
Note:  Area code is 585

>>> "Timothy Hahn" <hahnt@us.ibm.com> 12/06/01 10:58AM >>>
Ed,

Yes, but the explanation in the DRAFT for this attribute is MUCH LESS 
restrictive than what you have been implying.

There are indications that NO Primary replica need exist, it need NOT be 
the same for different replication contexts, and even that MULTIPLE 
primary replicas might exist.  Thus, I have not interpreted the meaning of 
replicaType=Primary to be as "strong" as was implied in earlier postings 
on this thread.

Regards,
Tim Hahn

Internet: hahnt@us.ibm.com 
Internal: Timothy Hahn/Endicott/IBM@IBMUS or IBMUSM00(HAHNT)
phone: 607.752.6388     tie-line: 8/852.6388
fax: 607.752.3681





"Ed Reed" <eer@OnCallDBA.COM>
12/06/2001 12:15 PM

 
        To:     <ietf-ldup@imc.org>, Timothy Hahn/Endicott/IBM@IBMUS
        cc: 
        Subject:        RE: naming contexts

 

Tim - from the currently posted draft...

8.2.8. replicaType 
 
   (2.16.840.1.113719.1.142.4.4 NAME 'replicaType' 
      DESC 'Enum: 0-reserved, 1-Primary, 2-Updateable, 
            3-ReadOnly, all others reserved' 
      SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 
      EQUALITY integerMatch 
      SINGLE-VALUE 
      NO-USER-MODIFICATION 
      USAGE dSAOperation ) 
 
   ReplicaType is a simple enumeration, used to identify what kind of 
   replica is being described in a Replica object entry.

It is replicaType 1-Primary that I'm speaking of.
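
As a hypothetical illustration (the helper name is mine; only the enumeration values and names come from the draft excerpt above), the replicaType values decode as:

```python
# Mapping of the replicaType enumeration from the draft excerpt above.
# The function name is hypothetical; values/names are from section 8.2.8.
REPLICA_TYPES = {0: "reserved", 1: "Primary", 2: "Updateable", 3: "ReadOnly"}

def replica_type_name(value):
    """Decode a replicaType integer; all unlisted values are reserved."""
    return REPLICA_TYPES.get(value, "reserved")
```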

Ed

=================
Ed Reed
Reed-Matthews, Inc.
+1 585 624 2402
http://www.Reed-Matthews.COM 
Note:  Area code is 585

>>> "Timothy Hahn" <hahnt@us.ibm.com> 12/06/01 09:42AM >>>
Ed,

In its current state, the information model does not yet contain 
indicators of "primary master".  I believe we are, currently, requiring 
that IF replicaSubentry information is entered at different server 
instances, that it WILL be entered consistently with respect to what was 
entered on the other server instances.  If not, then results are
indeterminate.  If so, then the two servers will "synchronize".

Either that or we were thinking that the "initial set of replicaSubEntry"
information would be entered on ONE OF the servers and then replicated to
(and so be consistent with) the OTHER servers that were referenced.  The
idea being that no SINGLE server is noted as "primary master" - just that
ONE server is used to enter the "initial settings".

This is what I recall anyway,
Tim Hahn

Internet: hahnt@us.ibm.com 
Internal: Timothy Hahn/Endicott/IBM@IBMUS or IBMUSM00(HAHNT)
phone: 607.752.6388     tie-line: 8/852.6388
fax: 607.752.3681





"Ed Reed" <eer@OnCallDBA.COM>
Sent by: owner-ietf-ldup@mail.imc.org 
12/06/2001 10:35 AM

 
        To:     <steven.legg@adacel.com.au>, <ietf-ldup@imc.org>, 
<Kurt@OpenLDAP.org>
        cc: 
        Subject:        RE: naming contexts

 


1) I'm pleased to see Kurt referring to a "Primary Master", as I think 
such a concept is needed to represent the single-master of replica 
topology information about a replication context.  However,

2) I agree with Steven that what becomes interesting in a multi-master 
environment is the replication context, not the naming context, held by a 
server.  Frankly, in the mostly-distributed directories I've worked with, 
replication contexts were all that were available, since they're more 
informative about what is actually held, allow clients to piece together a
minimal set of servers needed to fully traverse a distinguished name when
searching for a named entry, etc.

But here is the interesting point - there IS a place for a "Primary Master" 
when you ABSOLUTELY NEED some data in the DIT to be single-mastered, even 
if the rest of the data can be multi-mastered.  In my example 1 above, 
those data are the operational attributes and subentries associated with 
the definition, description and management of a replication area.

Thus, the Model (Architecture) document and, I think, the Information 
Model, still retain the Primary master replica type, even if the 
requirements document has dropped it.

One fine point, though - I don't know how to represent or manage a
scenario where DIFFERENT DATA are "owned" by different PRIMARY MASTER
replicas in the same replication area...I know how to deal with and manage
having a single replica designated as the primary of all the masters, but
not how to tag ownership of specific schema elements.

So, the information model places the replica type designation on the 
replicaSubentry, and only one replica of the replication area is deemed to
be the "primary master" of the replication context.

Ed

=================
Ed Reed
Reed-Matthews, Inc.
+1 585 624 2402
http://www.Reed-Matthews.COM 
Note:  Area code is 585

>>> "Steven Legg" <steven.legg@adacel.com.au> 12/06/01 02:35AM >>>


Kurt,

> It seems to me that the definition of a "naming context"
> in LDAP/X.500 makes little sense in the face of multi-master
> replication.  An LDAP/X.500 naming context is a subtree of
> entries held in a single master DSA.
>
> Consider three DSAs A, B, C where
>                A masters subtree X
>                B masters subtree Y
>                C masters subtrees X and Y, and
>                Subtrees X and Y are adjacent.

From what follows, I assume that Y is subordinate to X.

> If A and B are masters and C is a shadow of each,
> LDAP/X.500 says that
>                A holds context X, B holds context Y, and
>                C holds contexts X and Y.

C holds a shadow copy of contexts X and Y.

> If C masters X and Y and A and B are shadows,
> LDAP/X.500 says that:
>                A, B, C holds context X

C holds context X, which is now the union of the subtrees X and Y,
i.e. there is no longer a context Y. A holds a shadow copy of a
portion (subtree X) of context X. B holds a shadow copy of a different
portion (subtree Y) of context X.
>
> If A, B, and C master the subtrees they hold, which
> contexts does LDUP say they hold?

Having an entry mastered by more than one master DSA doesn't invalidate
the definition of naming context as far as I can see, but we do need
to be a bit more careful how we phrase things.

A holds a naming context with the context prefix being the root of the
subtree X. B holds a naming context with the context prefix being the
root of the subtree Y. C holds a naming context with the context prefix
being the root of subtree X, the same root as A but with a superset of the
entries. Off the top of my head I can't think of anything that is broken
because A and C have the same context prefix but unequal sets of entries
in their naming contexts.

For LDUP we're okay if we say we are replicating "replication contexts"
rather than "naming contexts". C can be said to hold two adjacent
replication contexts (for subtree X and subtree Y).
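
The A/B/C case can be sketched as follows (my own illustration of the definitions above, assuming Y is immediately subordinate to X): a DSA's naming-context prefixes are the roots of the mastered subtrees whose immediate superior it does not also master, while every mastered subtree is a replication context of its own.

```python
# Sketch of the A/B/C example (illustration mine, assuming Y is
# immediately subordinate to X): the subtrees each DSA masters, and the
# immediate superior subtree of each subtree's root.
masters = {"A": {"X"}, "B": {"Y"}, "C": {"X", "Y"}}
superior = {"Y": "X"}  # X's superior lies outside these subtrees

def naming_context_prefixes(dsa):
    """Roots of mastered subtrees whose immediate superior subtree is not
    also mastered by this DSA (adjacent mastered subtrees merge into one
    naming context)."""
    held = masters[dsa]
    return {s for s in held if superior.get(s) not in held}

def replication_contexts(dsa):
    """Each mastered subtree is a separate replication context."""
    return masters[dsa]
```

This reproduces the text above: C holds a single naming context rooted at X (a superset of A's), yet two replication contexts.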

>
> I can only find one reasonable way to answer this question,
> it requires each entry to be held by one and only one
> "primary" master DSA and defining the LDUP naming context
> as a subtree of entries held in a single "primary" master
> DSA.  I believe this same solution can be used to define
> other terms and to detail directory models for multi-master
> replication which maps reasonably well onto the X.500 models.

I don't think we need to impose this restriction for
what is only a definitional problem.

>
> I recommend the WG consider defining the LDUP multi-master
> replication directory models such that, for any particular
> entry, there is only one "primary" master DSA and zero or
> more "secondary" master DSAs.   Otherwise, defining the
> models consistent with the LDAP/X.500 models will be
> extremely difficult.

Regards,
Steven







From owner-ietf-ldup@mail.imc.org  Thu Dec  6 16:37:31 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id QAA13350
	for <ldup-archive@odin.ietf.org>; Thu, 6 Dec 2001 16:37:30 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fB6LFaM07834
	for ietf-ldup-bks; Thu, 6 Dec 2001 13:15:36 -0800 (PST)
Received: from out003pub.verizon.net (out003pub.verizon.net [206.46.170.103])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fB6LFY207824
	for <ietf-ldup@imc.org>; Thu, 6 Dec 2001 13:15:34 -0800 (PST)
Received: from D7ST2111 ([141.151.17.54])
	by out003pub.verizon.net  with ESMTP
	; id fB6LAHP18709
	Thu, 6 Dec 2001 15:10:23 -0600 (CST)
Reply-To: <christopher.apple@verizon.net>
From: "Chris Apple" <christopher.apple@verizon.net>
To: <agenda@ietf.org>
Cc: <dinaras@ietf.org>, "'John Strassner'" <strazzie@earthlink.net>,
        "Patrik Faltstrom" <paf@cisco.com>, <ietf-ldup@imc.org>
Subject: LDUP WG Agenda
Date: Thu, 6 Dec 2001 16:13:52 -0600
Message-ID: <000001c17ea3$5c3c4fb0$0200a8c0@D7ST2111>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="US-ASCII"
Content-Transfer-Encoding: 7bit
X-Priority: 3 (Normal)
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook, Build 10.0.3311
Importance: Normal
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2600.0000
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>
Content-Transfer-Encoding: 7bit


I and my service provider have been severely affected by the "goner"
virus over the past few days.

This and many other postings I have sent to the list and other
recipients have not made it into the archives, nor have I received any
bounced/rejected messages. If you have been trying to reach me via
e-mail and have gotten no response, please resend.

The WG agenda for LDUP may or may not actually be published on the web
site because I was unable to get it through before the deadline. This is
what we will use as an agenda. Note that there is an item called agenda
bashing to accommodate not being able to post to the list for WG review
prior to submission.

LDAP Duplication/Replication/Update Protocols WG (ldup)
Thursday, December 13 at 1530-1730 
===================================
CHAIRS: Chris Apple <christopher.apple@verizon.net>
	    John Strassner <john.strassner@intelliden.com>
AGENDA:

0) Agenda Bashing

1) LDUP Update Reconciliation Procedures

    http://www.ietf.org/internet-drafts/draft-ietf-ldup-urp-05.txt

2) LDAPv3 Replication Requirements

 
http://www.ietf.org/internet-drafts/draft-ietf-ldup-replica-req-10.txt

3) LDAP Replication Architecture

    http://www.ietf.org/internet-drafts/draft-ietf-ldup-model-06.txt

4) LDUP Replication Information Model

    http://www.ietf.org/internet-drafts/draft-ietf-ldup-infomod-04.txt

5) LDAP Subentry Schema

    http://www.ietf.org/internet-drafts/draft-ietf-ldup-subentry-08.txt

6) The LDUP Replication Update Protocol

    http://www.ietf.org/internet-drafts/draft-ietf-ldup-protocol-03.txt

7) General Usage Profile for LDAPv3 Replication

 
http://www.ietf.org/internet-drafts/draft-ietf-ldup-usage-profile-02.txt

8) LDAP Client Update Protocol

    http://www.ietf.org/internet-drafts/draft-ietf-ldup-lcup-02.txt

9) Profile for Framing LDAPv3 Operations

 
http://www.ietf.org/internet-drafts/draft-ietf-ldup-framing-profile-00.txt

10) Mandatory LDAP Replica Management

      http://www.ietf.org/internet-drafts/draft-ietf-ldup-mrm-00.txt

11) LDAPv3 Access Control - Options to Consider

	a) Adding it to LDUP?
	b) Forming a WG Solely to Address Access Control for LDAPv3?
	c) Handling the Access Control problem by (potentially
	   competing) individual contributions?
	d) Do nothing and let the work go on outside of the IETF?
	e) Other options?

12) Broader WG Charter Discussion




From owner-ietf-ldup@mail.imc.org  Thu Dec  6 17:29:41 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id RAA14841
	for <ldup-archive@odin.ietf.org>; Thu, 6 Dec 2001 17:29:40 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fB6MCDs11281
	for ietf-ldup-bks; Thu, 6 Dec 2001 14:12:13 -0800 (PST)
Received: from smtp.oncalldba.com (roc-24-169-98-153.rochester.rr.com [24.169.98.153])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fB6MCB211273
	for <ietf-ldup@imc.org>; Thu, 6 Dec 2001 14:12:11 -0800 (PST)
Received: from RMINC_DOM-MTA by smtp.oncalldba.com
	with Novell_GroupWise; Thu, 06 Dec 2001 17:03:10 -0700
Message-Id: <sc0fa4ce.019@smtp.oncalldba.com>
X-Mailer: Novell GroupWise Internet Agent 6.0
Date: Thu, 06 Dec 2001 17:02:54 -0700
From: "Ed Reed" <eer@OnCallDBA.COM>
To: <ietf-ldup@imc.org>, <hahnt@us.ibm.com>, <donh@windows.microsoft.com>
Subject: RE: naming contexts
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
X-MIME-Autoconverted: from quoted-printable to 8bit by above.proper.com id fB6MCC211277
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>
Content-Transfer-Encoding: 8bit


Actually, I accept that example as support of at least part of my position - as you said, AD does single master for the creation and deletion of naming/replication contexts.  After that you may well be able to treat the topology information as multimaster...inspection (and time) will tell.  It's certainly worth trying.

Ed

>>> "Don Hacherl" <donh@windows.microsoft.com> 12/06/01 03:04PM >>>
I'd offer a counter example.  Active Directory in Windows 2000 manages the replication topology in a distributed manner, with each DSA independently computing a fraction of the total replication topology such that the (overlapping) fractions together form a single connected replication network.  AD does single master the creation and deletion of naming/replication contexts, but once an NC exists on one DSA its replication topology (as well as all the data in the NC itself) is fully multi-master.

Don Hacherl

-----Original Message-----
From: Ed Reed [mailto:eer@OnCallDBA.COM] 
Sent: Thursday, December 06, 2001 11:03 AM
To: ietf-ldup@imc.org; hahnt@us.ibm.com 
Subject: RE: naming contexts



Yeah - there are those that believe they can manage the replication topology
information without resorting to using the Primary as the single-master for
replica topology information - I'm not one of them.  The draft verbiage
invites those who think they can to actively participate in the management
operations draft work that Ryan and others are working on...

So, I'm trying not to impose the limits of my understanding on others...and
to leave the door open to other approaches...I simply don't have a clue as
to how to make those other approaches work.

The one I understand how to do is to designate one of the replicas as the
Primary replica, and to REQUIRE all replica topology information changes to
be made AT THE PRIMARY and to then allow the Primary to shadow that
replica topology information to those servers that have a replicaSubentry.
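
A minimal sketch of that rule (entirely my own illustration, with invented names): topology changes are accepted only at the Primary, which then shadows the updated topology to every server holding a replicaSubentry.

```python
# Hypothetical sketch (names invented) of the rule described above:
# replica-topology changes are accepted only AT THE PRIMARY, which then
# shadows the topology to every server with a replicaSubentry.

class ReplicationArea:
    def __init__(self, primary):
        self.primary = primary
        self.topology = {primary}          # replicas known to the area
        self.shadows = {primary: set()}    # each server's view of topology

    def add_replica(self, at_server, new_replica):
        if at_server != self.primary:
            # Non-primary servers must refer the change to the primary.
            raise PermissionError("topology changes only at the primary")
        self.topology.add(new_replica)
        self.shadows[new_replica] = set()
        # The primary shadows the updated topology to all holders.
        for server in self.topology:
            self.shadows[server] = set(self.topology)
```

Because only the Primary can admit a replica, it is also the natural gatekeeper against the add-and-vanish attack described earlier in the thread.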

This approach admittedly presumes that the primary replica OWNS the
replica topology information, and in turn might be considered to be the
owner of all the data in the replication area of the DIT.  That latter is
subject to much more discussion, but it's a useful simplification.

That notion of ownership returns the X.500 notion of Master as Owner
of data to the multi-master model (and is why I've always used the
term "updateable" where others insist on using "master").  If there is
a "master" in the X.500 sense of the word...owning the data...that
would be the "primary" replica.  Other updateable replicas hold copies
that may be changed by users, but the data owner could still be viewed
as the primary replica.

What's nice about this approach is that it explicitly identifies who gets
to decide whether a new replica will be allowed to be added to the topology -
the primary replica does.  I think that ability to constrain who may
begin replicating an area is an important defense against trawling for
data, and from denial of service attacks that create replicas and then
destroy them willy-nilly (attack:  while (true) {add replica; replicate
knowledge of the new replica to everyone else so they add the replica
to their update vector and purge vector arrays; take the replica off line
without removing its information from the replica topology on all the other
servers so they can never purge themselves};)

This is an archetypal case where even with a multi-master system, there
are still appropriate places where individual applications can and should
be able to choose to cooperate by using the multi-master directory in
a single-master fashion for their own data - in this case, the application is
LDUP, and the data are the replica topology information.  Other applications
may similarly choose to do so, and the primary replica, used by LDUP
in this instance to designate the "chosen" replica that is to be used to
coordinate (strictly serialize) updates to its data, may be similarly used
by those other applications, though they could also use some other
mechanism to select their application-specific "single master".

Ed

=================
Ed Reed
Reed-Matthews, Inc.
+1 585 624 2402
http://www.Reed-Matthews.COM 
Note:  Area code is 585

>>> "Timothy Hahn" <hahnt@us.ibm.com> 12/06/01 10:58AM >>>
Ed,

Yes, but the explanation in the DRAFT for this attribute is MUCH LESS 
restrictive than what you have been implying.

There are indications that NO Primary replica need exist, it need NOT be 
the same for different replication contexts, and even that MULTIPLE 
primary replicas might exist.  Thus, I have not interpreted the meaning of 
replicaType=Primary to be as "strong" as was implied in earlier postings 
on this thread.

Regards,
Tim Hahn

Internet: hahnt@us.ibm.com 
Internal: Timothy Hahn/Endicott/IBM@IBMUS or IBMUSM00(HAHNT)
phone: 607.752.6388     tie-line: 8/852.6388
fax: 607.752.3681





"Ed Reed" <eer@OnCallDBA.COM>
12/06/2001 12:15 PM

 
        To:     <ietf-ldup@imc.org>, Timothy Hahn/Endicott/IBM@IBMUS
        cc: 
        Subject:        RE: naming contexts

 

Tim - from the currently posted draft...

8.2.8. replicaType 
 
   (2.16.840.1.113719.1.142.4.4 NAME 'replicaType' 
      DESC 'Enum: 0-reserved, 1-Primary, 2-Updateable, 
            3-ReadOnly, all others reserved' 
      SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 
      EQUALITY integerMatch 
      SINGLE-VALUE 
      NO-USER-MODIFICATION 
      USAGE dSAOperation ) 
 
   ReplicaType is a simple enumeration, used to identify what kind of 
   replica is being described in a Replica object entry.

It is replicaType 1-Primary that I'm speaking of.

Ed

=================
Ed Reed
Reed-Matthews, Inc.
+1 585 624 2402
http://www.Reed-Matthews.COM 
Note:  Area code is 585

>>> "Timothy Hahn" <hahnt@us.ibm.com> 12/06/01 09:42AM >>>
Ed,

In its current state, the information model does not yet contain 
indicators of "primary master".  I believe we are, currently, requiring 
that IF replicaSubentry information is entered at different server 
instances, that it WILL be entered consistently with respect to what was 
entered on the other server instances.  If not, then results are
indeterminate.  If so, then the two servers will "synchronize".

Either that or we were thinking that the "initial set of replicaSubEntry"
information would be entered on ONE OF the servers and then replicated to
(and so be consistent with) the OTHER servers that were referenced.  The
idea being that no SINGLE server is noted as "primary master" - just that
ONE server is used to enter the "initial settings".

This is what I recall anyway,
Tim Hahn

Internet: hahnt@us.ibm.com 
Internal: Timothy Hahn/Endicott/IBM@IBMUS or IBMUSM00(HAHNT)
phone: 607.752.6388     tie-line: 8/852.6388
fax: 607.752.3681





"Ed Reed" <eer@OnCallDBA.COM>
Sent by: owner-ietf-ldup@mail.imc.org 
12/06/2001 10:35 AM

 
        To:     <steven.legg@adacel.com.au>, <ietf-ldup@imc.org>, 
<Kurt@OpenLDAP.org>
        cc: 
        Subject:        RE: naming contexts

 


1) I'm pleased to see Kurt referring to a "Primary Master", as I think 
such a concept is needed to represent the single-master of replica 
topology information about a replication context.  However,

2) I agree with Steven that what becomes interesting in a multi-master 
environment is the replication context, not the naming context, held by a 
server.  Frankly, in the mostly-distributed directories I've worked with, 
replication contexts were all that were available, since they're more 
informative about what is actually held, allow clients to piece together a
minimal set of servers needed to fully traverse a distinguished name when
searching for a named entry, etc.

But here is the interesting point - there IS a place for a "Primary Master" 
when you ABSOLUTELY NEED some data in the DIT to be single-mastered, even 
if the rest of the data can be multi-mastered.  In my example 1 above, 
those data are the operational attributes and subentries associated with 
the definition, description and management of a replication area.

Thus, the Model (Architecture) document and, I think, the Information 
Model, still retain the Primary master replica type, even if the 
requirements document has dropped it.

One fine point, though - I don't know how to represent or manage a
scenario where DIFFERENT DATA are "owned" by different PRIMARY MASTER
replicas in the same replication area...I know how to deal with and manage
having a single replica designated as the primary of all the masters, but
not how to tag ownership of specific schema elements.

So, the information model places the replica type designation on the 
replicaSubentry, and only one replica of the replication area is deemed to
be the "primary master" of the replication context.

Ed

=================
Ed Reed
Reed-Matthews, Inc.
+1 585 624 2402
http://www.Reed-Matthews.COM 
Note:  Area code is 585

>>> "Steven Legg" <steven.legg@adacel.com.au> 12/06/01 02:35AM >>>


Kurt,

> It seems to me that the definition of a "naming context"
> in LDAP/X.500 makes little sense in the face of multi-master
> replication.  An LDAP/X.500 naming context is a subtree of
> entries held in a single master DSA.
>
> Consider three DSAs A, B, C where
>                A masters subtree X
>                B masters subtree Y
>                C masters subtrees X and Y, and
>                Subtrees X and Y are adjacent.

From what follows, I assume that Y is subordinate to X.

> If A and B are masters and C is a shadow of each,
> LDAP/X.500 says that
>                A holds context X, B holds context Y, and
>                C holds contexts X and Y.

C holds a shadow copy of contexts X and Y.

> If C masters X and Y and A and B are shadows,
> LDAP/X.500 says that:
>                A, B, C holds context X

C holds context X, which is now the union of the subtrees X and Y,
i.e. there is no longer a context Y. A holds a shadow copy of a
portion (subtree X) of context X. B holds a shadow copy of a different
portion (subtree Y) of context X.
>
> If A, B, and C master the subtrees they hold, which
> contexts does LDUP say they hold?

Having an entry mastered by more than one master DSA doesn't invalidate
the definition of naming context as far as I can see, but we do need
to be a bit more careful how we phrase things.

A holds a naming context with the context prefix being the root of the
subtree X. B holds a naming context with the context prefix being the
root of the subtree Y. C holds a naming context with the context prefix
being the root of subtree X, the same root as A but with a superset of the
entries. Off the top of my head I can't think of anything that is broken
because A and C have the same context prefix but unequal sets of entries
in their naming contexts.

For LDUP we're okay if we say we are replicating "replication contexts"
rather than "naming contexts". C can be said to hold two adjacent
replication contexts (for subtree X and subtree Y).

>
> I can only find one reasonable way to answer this question,
> it requires each entry to be held by one and only one
> "primary" master DSA and defining the LDUP naming context
> as a subtree of entries held in a single "primary" master
> DSA.  I believe this same solution can be used to define
> other terms and to detail directory models for multi-master
> replication which maps reasonably well onto the X.500 models.

I don't think we need to impose this restriction for
what is only a definitional problem.

>
> I recommend the WG consider defining the LDUP multi-master
> replication directory models such that, for any particular
> entry, there is only one "primary" master DSA and zero or
> more "secondary" master DSAs.   Otherwise, defining the
> models consistent with the LDAP/X.500 models will be
> extremely difficult.

Regards,
Steven







From owner-ietf-ldup@mail.imc.org  Thu Dec  6 19:47:15 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id TAA19444
	for <ldup-archive@odin.ietf.org>; Thu, 6 Dec 2001 19:47:15 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fB70ULp18498
	for ietf-ldup-bks; Thu, 6 Dec 2001 16:30:21 -0800 (PST)
Received: from nexus.adacel.com (shelob.adacel.com.au [203.36.26.146] (may be forged))
	by above.proper.com (8.11.6/8.11.3) with SMTP id fB70UI218489
	for <ietf-ldup@imc.org>; Thu, 6 Dec 2001 16:30:18 -0800 (PST)
Received: (qmail 1312 invoked from network); 7 Dec 2001 00:25:44 -0000
Received: from unknown (HELO osmium) (10.32.24.165)
  by nexus.adacel.com with SMTP; 7 Dec 2001 00:25:44 -0000
Reply-To: <steven.legg@adacel.com.au>
From: "Steven Legg" <steven.legg@adacel.com.au>
To: "'Kurt D. Zeilenga'" <Kurt@OpenLDAP.org>
Cc: <ietf-ldup@imc.org>
Subject: RE: naming contexts
Date: Fri, 7 Dec 2001 11:30:51 +1100
Message-ID: <000d01c17eb6$6d178490$a518200a@osmium.mtwav.adacel.com.au>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="iso-8859-1"
Content-Transfer-Encoding: 7bit
X-Priority: 3 (Normal)
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook 8.5, Build 4.71.2377.0
X-MimeOLE: Produced By Microsoft MimeOLE V4.72.2120.0
Importance: Normal
In-Reply-To: <5.1.0.14.0.20011206051734.01792150@127.0.0.1>
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>
Content-Transfer-Encoding: 7bit



Kurt,

Kurt D. Zeilenga wrote:
> At 11:35 PM 2001-12-05, Steven Legg wrote:
>
>
> >Kurt,
> >
> >> It seems to me that the definition of a "naming context"
> >> in LDAP/X.500 makes little sense in the face of multi-master
> >> replication.  An LDAP/X.500 naming context is a subtree of
> >> entries held in a single master DSA.
> >>
> >> Consider three DSAs A, B, C where
> >>       A masters subtree X
> >>       B masters subtree Y
> >>       C masters subtrees X and Y, and
> >>       Subtrees X and Y are adjacent.
> >
> >>From what follows, I assume that Y is subordinate to X.
> >
> >> If A and B are masters and C is a shadow of each,
> >> LDAP/X.500 says that
> >>       A holds context X, B holds context Y, and
> >>       C holds contexts X and Y.
> >
> >C holds a shadow copy of contexts X and Y.
>
> Yes, but the distinction here is what values go into
> the root DSE namingContexts attribute of each server.
>   A publishes the name of the vertex of X.
>   B publishes the name of the vertex of Y.
>   C publishes the names of the vertex of X and Y.

Strictly speaking, C holds no naming contexts because it only
has shadow entries. Its namingContexts attribute should be absent.

Note that X.501 and X.525 aren't entirely consistent in how they
refer to shadow copies of naming contexts. More often than not
the implication is that a naming context is necessarily made up
of master entries.

> >> If C masters X and Y and A and B are shadows,
> >> LDAP/X.500 says that:
> >>       A, B, C holds context X
> >
> >C holds context X, which is now the union of the subtrees X and Y,
> >i.e. there is no longer a context Y. A holds a shadow copy of a
> >portion (subtree X) of context X. B holds a shadow copy of a
> different
> >portion (subtree Y) of context X.
>
> Yes,
>   A,B,C publishes the name of the vertex of X.

A and B are shadows so they don't hold any naming contexts.
That's why I wrote "a shadow copy of ... context X".

>
> >>
> >> If A, B, and C master the subtrees they hold, which
> >> contexts does LDUP say they hold?
> >
> >Having an entry mastered by more than one master DSA doesn't
> invalidate
> >the definition of naming context as far as I can see, but we do need
> >to be a bit more careful how we phrase things.
> >
> >A holds a naming context with the context prefix being the
> root of the
> >subtree X. B holds a naming context with the context prefix being the
> >root of the subtree Y. C holds a naming context with the
> context prefix
> >being the root of subtree X, the same root as A but with a
> superset of the
> >entries.
>
> You imply that context prefix information is not replicated
> between masters

I don't see how you arrived at that conclusion. I expect A and C will
receive context prefix information from the DSA(s) mastering the immediate
superior of the root of subtree X, possibly via each other. B will receive
context prefix information from A and C, since they both master the
immediate superior of the root of subtree Y.

A, B and C effectively have a read-only replica of their respective
prefix information.

BTW, URP was designed to handle the situation where context prefix
information comes from multiple sources.

> and the definition of a context is local to
> each master.

Pretty much.

>
> There are three basic ways one could define naming contexts
> in face of multi-master replication.
>         a) contexts are determined at each master
>         b) contexts are determined across all masters
>         c) contexts are determined at one master
>
> I note that in single-master replication, naming contexts are
> defined consistent with all three.  In multi-master, you need
> to choose one.  You appear to choose a), yes?

Yes, and for b) I use replication contexts.

>
> >For LDUP we're okay if we say we are replicating
> "replication contexts"
> >rather than "naming contexts".
>
> Yes, one could consider the X and Y subtrees as "replication
> contexts".
>
> >C can be said to hold two adjacent replication contexts (for
> subtree X and subtree Y).
>
> Yes.  One could rephrase the question in terms of replication
> contexts, not subtrees.
>
> I think you are saying that where a server masters multiple
> adjacent replication contexts, these replication contexts
> comprise one naming context on that server.  Yes?

Yes.

Regards,
Steven



From owner-ietf-ldup@mail.imc.org  Thu Dec  6 20:01:38 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id UAA20055
	for <ldup-archive@odin.ietf.org>; Thu, 6 Dec 2001 20:01:38 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fB70ktp19790
	for ietf-ldup-bks; Thu, 6 Dec 2001 16:46:55 -0800 (PST)
Received: from pretender.boolean.net (root@router.boolean.net [198.144.206.49])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fB70ks219786
	for <ietf-ldup@imc.org>; Thu, 6 Dec 2001 16:46:54 -0800 (PST)
Received: from nomad.OpenLDAP.org (root@localhost [127.0.0.1])
	by pretender.boolean.net (8.11.3/8.11.1/Boolean/Hub) with ESMTP id fB70rbC51190;
	Fri, 7 Dec 2001 00:53:37 GMT
	(envelope-from Kurt@OpenLDAP.org)
Message-Id: <5.1.0.14.0.20011206164015.016c06e8@127.0.0.1>
X-Sender: kurt@127.0.0.1
X-Mailer: QUALCOMM Windows Eudora Version 5.1
Date: Thu, 06 Dec 2001 16:46:23 -0800
To: <steven.legg@adacel.com.au>
From: "Kurt D. Zeilenga" <Kurt@OpenLDAP.org>
Subject: RE: naming contexts
Cc: <ietf-ldup@imc.org>
In-Reply-To: <000d01c17eb6$6d178490$a518200a@osmium.mtwav.adacel.com.au>
References: <5.1.0.14.0.20011206051734.01792150@127.0.0.1>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


At 04:30 PM 2001-12-06, Steven Legg wrote:
>Kurt D. Zeilenga wrote:
>> At 11:35 PM 2001-12-05, Steven Legg wrote:
>>
>>
>> >Kurt,
>> >
>> >> It seems to me that the definition of a "naming context"
>> >> in LDAP/X.500 makes little sense in the face of multi-master
>> >> replication.  An LDAP/X.500 naming context is a subtree of
>> >> entries held in a single master DSA.
>> >>
>> >> Consider three DSAs A, B, C where
>> >>       A masters subtree X
>> >>       B masters subtree Y
>> >>       C masters subtrees X and Y, and
>> >>       Subtrees X and Y are adjacent.
>> >
>> >From what follows, I assume that Y is subordinate to X.
>> >
>> >> If A and B are masters and C is a shadow of each,
>> >> LDAP/X.500 says that
>> >>       A holds context X, B holds context Y, and
>> >>       C holds contexts X and Y.
>> >
>> >C holds a shadow copy of contexts X and Y.
>>
>> Yes, but the distinction here is what values go into
>> the root DSE namingContexts attribute of each server.
>>   A publishes the name of the vertex of X.
>>   B publishes the name of the vertex of Y.
>>   C publishes the names of the vertex of X and Y.
>
>Strictly speaking, C holds no naming contexts because it only
>has shadow entries. Its namingContexts attribute should be absent.

RFC 2256, 5.2.1. namingContexts
   The values of this attribute correspond to naming contexts
   which this server masters or shadows. 
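To make the disputed semantics concrete, here is a minimal sketch (hypothetical server names and DNs) of what each server in the A/B/C example publishes under the RFC 2252 wording, where namingContexts covers everything a server masters or shadows:

```python
# Hypothetical servers from the example above: A masters X, B masters Y,
# and C shadows both.  Y is subordinate to X.
holds = {
    "A": {"ou=X,o=Org": "master"},
    "B": {"ou=Y,ou=X,o=Org": "master"},
    "C": {"ou=X,o=Org": "shadow", "ou=Y,ou=X,o=Org": "shadow"},
}

def naming_contexts(server):
    """Values the server publishes in its root DSE namingContexts
    attribute: every context it masters or shadows (RFC 2252 wording)."""
    return sorted(holds[server])

print(naming_contexts("C"))  # C publishes the vertices of both X and Y
```

On the strict reading above (shadows only), C would publish nothing; the RFC 2252 text quoted here admits shadowed contexts as well.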



From owner-ietf-ldup@mail.imc.org  Thu Dec  6 20:24:56 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id UAA20646
	for <ldup-archive@odin.ietf.org>; Thu, 6 Dec 2001 20:24:56 -0500 (EST)
Received: by above.proper.com (8.11.6/8.11.3) id fB71AJk21272
	for ietf-ldup-bks; Thu, 6 Dec 2001 17:10:19 -0800 (PST)
Received: from pretender.boolean.net (root@router.boolean.net [198.144.206.49])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fB71AI221268
	for <ietf-ldup@imc.org>; Thu, 6 Dec 2001 17:10:18 -0800 (PST)
Received: from nomad.OpenLDAP.org (root@localhost [127.0.0.1])
	by pretender.boolean.net (8.11.3/8.11.1/Boolean/Hub) with ESMTP id fB71H4C51286;
	Fri, 7 Dec 2001 01:17:04 GMT
	(envelope-from Kurt@OpenLDAP.org)
Message-Id: <5.1.0.14.0.20011206170033.016c1730@127.0.0.1>
X-Sender: kurt@127.0.0.1
X-Mailer: QUALCOMM Windows Eudora Version 5.1
Date: Thu, 06 Dec 2001 17:09:50 -0800
To: <steven.legg@adacel.com.au>
From: "Kurt D. Zeilenga" <Kurt@OpenLDAP.org>
Subject: RE: naming contexts
Cc: <ietf-ldup@imc.org>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


For additional clarity...

At 04:30 PM 2001-12-06, Steven Legg wrote:
>Strictly speaking, C holds no naming contexts because it only
>has shadow entries. Its namingContexts attribute should be absent.

RFC 2256, 5.2.1. namingContexts
   The values of this attribute correspond to naming contexts
   which this server masters or shadows. 

I should note that the next sentence could be taken to
mean that a LDAP server mastering no entries may have
no namingContexts attribute in the root DSE.
   If the server does not master any information
   (e.g. it is an LDAP gateway to a public X.500 directory) 
   this attribute will be absent.

Should we s/master/hold/ to be consistent with the preceding line
and RFC 2251:
   - namingContexts: naming contexts held in the server. Naming contexts
     are defined in section 17 of X.501 [6].

Please note that the verb "hold" (or a variant), when applied to an
entry (or context), unless qualified, implies the server
contains either a master or shadow copy.

Kurt



From owner-ietf-ldup@mail.imc.org  Thu Dec  6 20:42:49 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id UAA20943
	for <ldup-archive@odin.ietf.org>; Thu, 6 Dec 2001 20:42:49 -0500 (EST)
Received: by above.proper.com (8.11.6/8.11.3) id fB71NZk21759
	for ietf-ldup-bks; Thu, 6 Dec 2001 17:23:35 -0800 (PST)
Received: from nexus.adacel.com (shelob.adacel.com.au [203.36.26.146] (may be forged))
	by above.proper.com (8.11.6/8.11.3) with SMTP id fB71NX221753
	for <ietf-ldup@imc.org>; Thu, 6 Dec 2001 17:23:33 -0800 (PST)
Received: (qmail 5170 invoked from network); 7 Dec 2001 01:18:58 -0000
Received: from unknown (HELO osmium) (10.32.24.165)
  by nexus.adacel.com with SMTP; 7 Dec 2001 01:18:58 -0000
Reply-To: <steven.legg@adacel.com.au>
From: "Steven Legg" <steven.legg@adacel.com.au>
To: "'Kurt D. Zeilenga'" <Kurt@OpenLDAP.org>
Cc: <ietf-ldup@imc.org>
Subject: RE: naming contexts
Date: Fri, 7 Dec 2001 12:24:03 +1100
Message-ID: <000f01c17ebd$dc1ea880$a518200a@osmium.mtwav.adacel.com.au>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="iso-8859-1"
Content-Transfer-Encoding: 7bit
X-Priority: 3 (Normal)
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook 8.5, Build 4.71.2377.0
X-MimeOLE: Produced By Microsoft MimeOLE V4.72.2120.0
Importance: Normal
In-Reply-To: <5.1.0.14.0.20011206164015.016c06e8@127.0.0.1>
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>
Content-Transfer-Encoding: 7bit



Kurt,

Kurt D. Zeilenga wrote:
> At 04:30 PM 2001-12-06, Steven Legg wrote:
> >Kurt D. Zeilenga wrote:
> >> At 11:35 PM 2001-12-05, Steven Legg wrote:
> >>
> >>
> >> >Kurt,
> >> >
> >> >> It seems to me that the definition of a "naming context"
> >> >> in LDAP/X.500 makes little sense in the face of multi-master
> >> >> replication.  A LDAP/X.500 naming context is a subtree of
> >> >> entries held in a single master DSA.
> >> >>
> >> >> Consider three DSAs A, B, C where
> >> >>       A masters subtree X
> >> >>       B masters subtree Y
> >> >>       C masters subtrees X and Y, and
> >> >>       Subtrees X and Y are adjacent.
> >> >
> >> >From what follows, I assume that Y is subordinate to X.
> >> >
> >> >> If A and B are masters and C is a shadow of each,
> >> >> LDAP/X.500 says that
> >> >>       A holds context X, B holds context Y, and
> >> >>       C holds contexts X and Y.
> >> >
> >> >C holds a shadow copy of contexts X and Y.
> >>
> >> Yes, but the distinction here is what values go into
> >> the root DSE namingContexts attribute of each server.
> >>   A publishes the name of the vertex of X.
> >>   B publishes the name of the vertex of Y.
> >>   C publishes the names of the vertex of X and Y.
> >
> >Strictly speaking, C holds no naming contexts because it only
> >has shadow entries. Its namingContexts attribute should be absent.
> 
> RFC 2256, 5.2.1. namingContexts
>    The values of this attribute correspond to naming contexts
>    which this server masters or shadows.

Of course you mean RFC 2252, but I take the point. I didn't look
beyond the description in RFC 2251 (doh!). However, I wonder if it
should instead say: "The values of this attribute correspond to the context
prefixes of naming contexts which this server masters and the names
of the base entries for replication areas that this server shadows."?

In X.525, the root of a shadowed subtree doesn't have to correspond
to a context prefix entry. It can be a subordinate of a context prefix.
Perhaps it is more useful to a client to know the topmost shadowed
entry rather than the context prefix?

Regards,
Steven 


From owner-ietf-ldup@mail.imc.org  Thu Dec  6 20:48:34 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id UAA21010
	for <ldup-archive@odin.ietf.org>; Thu, 6 Dec 2001 20:48:33 -0500 (EST)
Received: by above.proper.com (8.11.6/8.11.3) id fB71UWJ21970
	for ietf-ldup-bks; Thu, 6 Dec 2001 17:30:32 -0800 (PST)
Received: from nexus.adacel.com (shelob.adacel.com.au [203.36.26.146] (may be forged))
	by above.proper.com (8.11.6/8.11.3) with SMTP id fB71UT221965
	for <ietf-ldup@imc.org>; Thu, 6 Dec 2001 17:30:29 -0800 (PST)
Received: (qmail 5568 invoked from network); 7 Dec 2001 01:25:55 -0000
Received: from unknown (HELO osmium) (10.32.24.165)
  by nexus.adacel.com with SMTP; 7 Dec 2001 01:25:55 -0000
Reply-To: <steven.legg@adacel.com.au>
From: "Steven Legg" <steven.legg@adacel.com.au>
To: "'Kurt D. Zeilenga'" <Kurt@OpenLDAP.org>
Cc: <ietf-ldup@imc.org>
Subject: RE: naming contexts
Date: Fri, 7 Dec 2001 12:31:03 +1100
Message-ID: <001001c17ebe$d5ff02f0$a518200a@osmium.mtwav.adacel.com.au>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="iso-8859-1"
Content-Transfer-Encoding: 7bit
X-Priority: 3 (Normal)
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook 8.5, Build 4.71.2377.0
X-MimeOLE: Produced By Microsoft MimeOLE V4.72.2120.0
Importance: Normal
In-Reply-To: <5.1.0.14.0.20011206170033.016c1730@127.0.0.1>
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>
Content-Transfer-Encoding: 7bit



Kurt,

> RFC 2256, 5.2.1. namingContexts
>    The values of this attribute correspond to naming contexts
>    which this server masters or shadows. 
> 
> I should note that the next sentence could be taken to
> mean that a LDAP server mastering no entries may have
> no namingContexts attribute in the root DSE.
>    If the server does not master any information
>    (e.g. it is an LDAP gateway to a public X.500 directory) 
>    this attribute will be absent.
> 
> Should s/master/hold/ to be consistent with preceeding line
> and RFC 2251:
>    - namingContexts: naming contexts held in the server. Naming contexts
>      are defined in section 17 of X.501 [6].
> 
> Please note that the verb "hold" (or a variant), when applied to an
> entry (or context), unless qualified, implies the server
> contains either a master or shadow copy.

Hmmm. I took "hold" to mean only a master. We should explicitly
spell out that we mean both naming contexts mastered by the server and
naming contexts shadowed by the server.

Regards,
Steven


From owner-ietf-ldup@mail.imc.org  Thu Dec  6 21:39:34 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id VAA22635
	for <ldup-archive@odin.ietf.org>; Thu, 6 Dec 2001 21:39:33 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fB72PdM25763
	for ietf-ldup-bks; Thu, 6 Dec 2001 18:25:39 -0800 (PST)
Received: from pretender.boolean.net (root@router.boolean.net [198.144.206.49])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fB72Pc225759
	for <ietf-ldup@imc.org>; Thu, 6 Dec 2001 18:25:38 -0800 (PST)
Received: from nomad.OpenLDAP.org (root@localhost [127.0.0.1])
	by pretender.boolean.net (8.11.3/8.11.1/Boolean/Hub) with ESMTP id fB72W1C51470;
	Fri, 7 Dec 2001 02:32:01 GMT
	(envelope-from Kurt@OpenLDAP.org)
Message-Id: <5.1.0.14.0.20011206180447.01739728@127.0.0.1>
X-Sender: kurt@127.0.0.1
X-Mailer: QUALCOMM Windows Eudora Version 5.1
Date: Thu, 06 Dec 2001 18:24:47 -0800
To: <steven.legg@adacel.com.au>
From: "Kurt D. Zeilenga" <Kurt@OpenLDAP.org>
Subject: RE: naming contexts
Cc: <ietf-ldup@imc.org>
In-Reply-To: <000f01c17ebd$dc1ea880$a518200a@osmium.mtwav.adacel.com.au>
References: <5.1.0.14.0.20011206164015.016c06e8@127.0.0.1>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


At 05:24 PM 2001-12-06, Steven Legg wrote:
>Kurt D. Zeilenga wrote:
>> RFC 2256, 5.2.1. namingContexts
>>    The values of this attribute correspond to naming contexts
>>    which this server masters or shadows.
>
>Of course you mean RFC 2252, but I take the point. I didn't look
>beyond the description in RFC 2251 (doh!). However, I wonder if it
>should instead say: "The values of this attribute correspond to the context
>prefixes of naming contexts which this server masters and the names
>of the base entries for replication areas that this server shadows." ?

Well, the problem is partial replication.  Consider a naming context
containing N entries where N-1 are immediately subordinate to the
context prefix, and a replica which holds these N-1 entries (but not
the context prefix).  Your definition would require N-1 namingContexts
values.
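The arithmetic behind this objection can be sketched as follows, using naive comma-splitting of hypothetical DNs (real DN parsing must handle escaped commas):

```python
def parent(dn):
    """Immediate superior of a DN (naive split; ignores escaped commas)."""
    head, sep, tail = dn.partition(",")
    return tail if sep else None

def shadow_base_entries(held):
    """Base entries of the shadowed areas under the proposed wording:
    held entries whose immediate superior is not itself held."""
    held = set(held)
    return {dn for dn in held if parent(dn) not in held}

# The replica holds the N-1 entries immediately subordinate to the
# context prefix ou=X,o=Org, but not the prefix itself (N-1 = 3 here).
replica = ["cn=e%d,ou=X,o=Org" % i for i in range(1, 4)]
print(len(shadow_base_entries(replica)))                   # N-1 base entries
print(len(shadow_base_entries(replica + ["ou=X,o=Org"])))  # collapses to one
```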

The LDAP/X.500 models assume that the shadow server has knowledge
of where the context prefix is mastered.  While publishing the
context prefix may lead the client to believe the shadow server
holds it, that would be a bad assumption.  The value means only
that the shadow server holds a portion of the naming context.
This is "good enough" for most uses (the shadow generally has
the knowledge to refer the client as needed).

>In X.525, the root of a shadowed subtree doesn't have to correspond
>to a context prefix entry. It can be a subordinate of a context prefix.
>Perhaps it is more useful to a client to know the topmost shadowed
>entry rather than the context prefix ?

Well, that could be argued.  But LDAP/X.500 is the way it is.  I
have no problem with replicaContexts behaving in this manner (though
I believe the partial replication issue applies here as well).

My concern is that LDUP does not define how a server is to
populate namingContexts.  I argue it needs to be defined in a manner
consistent with the LDAP technical specification [including, by
reference, Section 17 of X.500(93)].  I believe the best (only?)
way to do this is to term one of the masters the "primary" with
respect to this naming context information (and, as Ed suggests,
portions of replication management information).

Kurt



From owner-ietf-ldup@mail.imc.org  Fri Dec  7 01:06:54 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id BAA28174
	for <ldup-archive@odin.ietf.org>; Fri, 7 Dec 2001 01:06:54 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fB75oD400830
	for ietf-ldup-bks; Thu, 6 Dec 2001 21:50:13 -0800 (PST)
Received: from nexus.adacel.com (shelob.adacel.com.au [203.36.26.146] (may be forged))
	by above.proper.com (8.11.6/8.11.3) with SMTP id fB75oB200826
	for <ietf-ldup@imc.org>; Thu, 6 Dec 2001 21:50:11 -0800 (PST)
Received: (qmail 21762 invoked from network); 7 Dec 2001 05:45:36 -0000
Received: from unknown (HELO osmium) (10.32.24.165)
  by nexus.adacel.com with SMTP; 7 Dec 2001 05:45:36 -0000
Reply-To: <steven.legg@adacel.com.au>
From: "Steven Legg" <steven.legg@adacel.com.au>
To: "'Kurt D. Zeilenga'" <Kurt@OpenLDAP.org>
Cc: <ietf-ldup@imc.org>
Subject: RE: naming contexts
Date: Fri, 7 Dec 2001 16:50:45 +1100
Message-ID: <001401c17ee3$1d93dc70$a518200a@osmium.mtwav.adacel.com.au>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="iso-8859-1"
Content-Transfer-Encoding: 7bit
X-Priority: 3 (Normal)
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook 8.5, Build 4.71.2377.0
X-MimeOLE: Produced By Microsoft MimeOLE V4.72.2120.0
Importance: Normal
In-Reply-To: <5.1.0.14.0.20011206180447.01739728@127.0.0.1>
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>
Content-Transfer-Encoding: 7bit



Kurt,

Kurt D. Zeilenga wrote:
> At 05:24 PM 2001-12-06, Steven Legg wrote:
> >Kurt D. Zeilenga wrote:
> >> RFC 2256, 5.2.1. namingContexts
> >>    The values of this attribute correspond to naming contexts
> >>    which this server masters or shadows.
> >
> >Of course you mean RFC 2252, but I take the point. I didn't look
> >beyond the description in RFC 2251 (doh!). However, I wonder if it
> >should instead say: "The values of this attribute correspond to the context
> >prefixes of naming contexts which this server masters and the names
> >of the base entries for replication areas that this server shadows."?
> 
> Well, the problem is partial replication.  Consider a naming context
> containing N entries where N-1 are immediately subordinate to the
> context prefix, and a replica which holds these N-1 entries (but not
> the context prefix).  Your definition would require N-1 namingContexts
> values.

It depends on the replication agreements. If there is a separate
replication agreement for each one of the N-1 subordinates then
there will be N-1 namingContext values. If there is a single
replication agreement that happens to exclude the context prefix
entry then there will be only one namingContext value.

> 
> The LDAP/X.500 models assume that the shadow server has knowledge
> of where the context prefix is mastered.  While publishing the
> context prefix may lead the client to believe the shadow server
> holds it, that would be a bad assumption.  The value means only
> that the shadow server holds a portion of the naming context.
> This is "good enough" for most uses (the shadow generally has
> the knowledge to refer the client as needed).
> 
> >In X.525, the root of a shadowed subtree doesn't have to correspond
> >to a context prefix entry. It can be a subordinate of a context prefix.
> >Perhaps it is more useful to a client to know the topmost shadowed
> >entry rather than the context prefix?
> 
> Well, that could be argued.

Yep. That's why I made it a question. :-)

> But LDAP/X.500 is the way it is.  I
> have no problem with replicaContexts behaving in this manner (though
> I believe the partial replication issue applies here as well).
> 
> My concern is that LDUP does not define how a server is to
> populate namingContexts.  I argue it needs to be defined in a manner
> consistent with the LDAP technical specification [including, by
> reference, Section 17 of X.500(93)].  I believe the best (only?)
> way to do this is to term one of the masters the "primary" with
> respect to this naming context information (and, as Ed suggests,
> portions of replication management information).

I don't believe there is any way we can define multimaster replication
such that we remain 100% consistent with the existing definition
for a naming context. Either we violate the assumption that the DIT
is partitioned into disjoint naming contexts, or we violate the
requirement that the superior of a context prefix entry is in a
different master DSA. Determining naming contexts with respect to a
primary satisfies the former but not the latter.

Introducing the concept of replication contexts provides a disjoint
partitioning of the DIT to use where such is needed while preserving
the constraint that the superior of a context prefix is in a different
master DSA. Servers can work out the naming contexts from the
replication contexts they hold. A naming context is just a bunch
of adjacent mastered replication contexts.
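The merging rule above can be sketched as follows. This is only an illustration under simplifying assumptions: the DNs are hypothetical, suffix matching is naive, and an entry is assumed to belong to the most specific replication context root that is a suffix of its name:

```python
def is_under(dn, root):
    """True if dn names root or an entry in the subtree below it
    (naive suffix matching on hypothetical DNs)."""
    return dn == root or dn.endswith("," + root)

def context_of(dn, roots):
    """The replication context holding dn: the most specific (longest)
    context root that dn falls under, if any."""
    matches = [r for r in roots if is_under(dn, r)]
    return max(matches, key=len) if matches else None

def naming_context_roots(mastered, shadowed=()):
    """A mastered replication context begins a naming context unless
    the superior of its root lies in another mastered context;
    adjacent mastered contexts thereby merge into one naming context."""
    mastered = set(mastered)
    roots = mastered | set(shadowed)
    out = set()
    for r in mastered:
        sup = r.partition(",")[2] or None
        if sup is None or context_of(sup, roots) not in mastered:
            out.add(r)
    return out
```

With adjacent contexts X and Y both mastered, the two replication contexts merge into the single naming context rooted at X; if the server only shadows X, the mastered Y stands alone as its own naming context.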

Whether we call fragments of the disjoint partitioning of the DIT
naming contexts or replication contexts, I think the partitioning
can be administered without resorting to a primary replica.

See you in Salt Lake City.

Regards,
Steven



From owner-ietf-ldup@mail.imc.org  Fri Dec  7 12:10:58 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id MAA24747
	for <ldup-archive@odin.ietf.org>; Fri, 7 Dec 2001 12:10:57 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fB7Gr9v01633
	for ietf-ldup-bks; Fri, 7 Dec 2001 08:53:09 -0800 (PST)
Received: from pretender.boolean.net (root@router.boolean.net [198.144.206.49])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fB7Gr4201627
	for <ietf-ldup@imc.org>; Fri, 7 Dec 2001 08:53:04 -0800 (PST)
Received: from nomad.OpenLDAP.org (root@localhost [127.0.0.1])
	by pretender.boolean.net (8.11.3/8.11.1/Boolean/Hub) with ESMTP id fB7GxpC54119;
	Fri, 7 Dec 2001 16:59:51 GMT
	(envelope-from Kurt@OpenLDAP.org)
Message-Id: <5.1.0.14.0.20011207085135.016fc6d8@127.0.0.1>
X-Sender: kurt@127.0.0.1
X-Mailer: QUALCOMM Windows Eudora Version 5.1
Date: Fri, 07 Dec 2001 08:52:34 -0800
To: <steven.legg@adacel.com.au>
From: "Kurt D. Zeilenga" <Kurt@OpenLDAP.org>
Subject: RE: naming contexts
Cc: <ietf-ldup@imc.org>
In-Reply-To: <001401c17ee3$1d93dc70$a518200a@osmium.mtwav.adacel.com.au>
References: <5.1.0.14.0.20011206180447.01739728@127.0.0.1>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


At 09:50 PM 2001-12-06, Steven Legg wrote:
>I don't believe there is any way we can define multimaster replication
>such that we remain 100% consistent with the existing definition
>for a naming context. Either we violate the assumption that the DIT
>is partitioned into disjoint naming contexts, or we violate the
>requirement that the superior of a context prefix entry is in a
>different master DSA.

Guess I would prefer to violate the latter, not the former.

>Determining naming contexts with respect to a
>primary satisfies the former but not the latter.

Concur.



From owner-ietf-ldup@mail.imc.org  Fri Dec  7 12:43:50 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id MAA25645
	for <ldup-archive@odin.ietf.org>; Fri, 7 Dec 2001 12:43:50 -0500 (EST)
Received: by above.proper.com (8.11.6/8.11.3) id fB7HSeT03453
	for ietf-ldup-bks; Fri, 7 Dec 2001 09:28:40 -0800 (PST)
Received: from pretender.boolean.net (root@router.boolean.net [198.144.206.49])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fB7HSc203447
	for <ietf-ldup@imc.org>; Fri, 7 Dec 2001 09:28:38 -0800 (PST)
Received: from nomad.OpenLDAP.org (root@localhost [127.0.0.1])
	by pretender.boolean.net (8.11.3/8.11.1/Boolean/Hub) with ESMTP id fB7HZLC54284;
	Fri, 7 Dec 2001 17:35:21 GMT
	(envelope-from Kurt@OpenLDAP.org)
Message-Id: <5.1.0.14.0.20011207091920.0177b090@127.0.0.1>
X-Sender: kurt@127.0.0.1
X-Mailer: QUALCOMM Windows Eudora Version 5.1
Date: Fri, 07 Dec 2001 09:28:04 -0800
To: <steven.legg@adacel.com.au>
From: "Kurt D. Zeilenga" <Kurt@OpenLDAP.org>
Subject: RE: naming contexts
Cc: <ietf-ldup@imc.org>
In-Reply-To: <5.1.0.14.0.20011207085135.016fc6d8@127.0.0.1>
References: <001401c17ee3$1d93dc70$a518200a@osmium.mtwav.adacel.com.au>
 <5.1.0.14.0.20011206180447.01739728@127.0.0.1>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


Replying to my own message to stress one point:

I believe that the LDUP technical specification must define
how replicas are to populate the namingContexts attribute in
their root DSE, as replicas participating in multi-master
replication clearly cannot do this in accordance with the
LDAP "core" technical specification.

Kurt

At 08:52 AM 2001-12-07, Kurt D. Zeilenga wrote:
>At 09:50 PM 2001-12-06, Steven Legg wrote:
>>I don't believe there is any way we can define multimaster replication
>>such that we remain 100% consistent with the existing definition
>>for a naming context. Either we violate the assumption that the DIT
>>is partitioned into disjoint naming contexts, or we violate the
>>requirement that the superior of a context prefix entry is in a
>>different master DSA.
>
>Guess I would prefer to violate the latter, not the former.
>
>>Determining naming contexts with respect to a
>>primary satisfies the former but not the latter.
>
>Concur.



From owner-ietf-ldup@mail.imc.org  Fri Dec  7 16:07:32 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id QAA00251
	for <ldup-archive@odin.ietf.org>; Fri, 7 Dec 2001 16:07:31 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fB7Ki5911532
	for ietf-ldup-bks; Fri, 7 Dec 2001 12:44:05 -0800 (PST)
Received: from smtp.oncalldba.com (roc-24-169-98-153.rochester.rr.com [24.169.98.153])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fB7Ki3211526
	for <ietf-ldup@imc.org>; Fri, 7 Dec 2001 12:44:03 -0800 (PST)
Received: from RMINC_DOM-MTA by smtp.oncalldba.com
	with Novell_GroupWise; Fri, 07 Dec 2001 15:34:56 -0700
Message-Id: <sc10e1a0.009@smtp.oncalldba.com>
X-Mailer: Novell GroupWise Internet Agent 6.0
Date: Fri, 07 Dec 2001 15:34:35 -0700
From: "Ed Reed" <eer@OnCallDBA.COM>
To: <steven.legg@adacel.com.au>, <Kurt@OpenLDAP.org>
Cc: <ietf-ldup@imc.org>
Subject: RE: naming contexts
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
X-MIME-Autoconverted: from quoted-printable to 8bit by above.proper.com id fB7Ki4211528
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>
Content-Transfer-Encoding: 8bit


What do you mean "how"?  I'm just not sure what you're
asking the docs to describe - I have been assuming that the
management operations doc would say something like...

"to create a new replication area "B", pick the root of the
subtree that will be partitioned off from the current
replication area "A".  Designate that container entry
as the new replication context root "ou=B", and update the
server rootDSE replicationContexts attribute by
adding the name of the new replication context root
to those values already there.  Mark the entry
with something to designate it as an administrative
area root - either via some glue bit in the entry DSE,
or via the addition of an auxiliary class that accomplishes
the same thing.  Proceed by creating a replicaSubentry
directly subordinate to the replication context root for
the local server and set its replica type to "primary" (since
it's updateable, and right now it's the only one, so it's
able to operate as a single-master of the replication area).
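The steps above might look something like the following LDIF. Every attribute and object class name here (replicationContextRoot, replicaSubentry, replicaType) is an illustrative placeholder taken from the prose; none of them is defined by the LDUP drafts:

```ldif
# Hypothetical: mark ou=B as the root of the new replication context
# by adding an illustrative auxiliary class.
dn: ou=B,o=Org
changetype: modify
add: objectClass
objectClass: replicationContextRoot

# Hypothetical: create the replicaSubentry for the local server,
# typed "primary" since it is currently the only replica.
dn: cn=server1 replica,ou=B,o=Org
changetype: add
objectClass: replicaSubentry
cn: server1 replica
replicaType: primary
```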

Note that other replicas of the replication area "A" will
receive notification that entry "ou=B" is now the root
of a replication area (done via the presence of that
DSE glue stuff or the auxiliary class that was added to
the entry), and the replicaSubentry that designates
which server holds the replica of the new replication
area.  Those other replicas should delete the other
contents of the subtree below "ou=B" (not including
any replicaSubentries), unless they find a replicaSubentry
naming themselves, indicating that they're to hold replicas,
too.  An alternative to this scenario would be that all
replicas of "A" would automatically become replicas of "B",
and any that are no longer needed can be deleted after
the replication topology is fully propagated.

To add another replica of the new replication context, first
add a replicaSubentry for the new server on the replica of 
type "Primary" for replica context "B".  Then, contact
the server that will hold the new replica of "B" and using
some yet-to-be-defined LDAP control, tell it to basically
accept the replication of replica context root "ou=B" and
its replicaSubentry subordinates, which should now include
one pointing to itself.  Then, because the new server's update
vector entry is {null, empty, zero, whatever}, both the Primary
replica server and the new replica server should realize that a
full update should be initiated.  That may be done by the
Primary replica, or if some other replica would be better, by
any of the other replicas the new server has just been told
about.  Thus, it probably makes sense for the new server
to pick a replica and request a full update, rather than to
automatically have the Primary initiate it.

Note that no replica that doesn't know about the new
server (i.e., that doesn't yet hold a copy of the replicaSubentry
for the new server) should accept a request to provide a
full update to the new server, nor can it begin sending
updates to the new server until it hears about the new
server via receiving the replicaSubentry for the new server.

Also, no server "X" should ACCEPT changes to replica "B" from any server
"Y" that isn't represented in "X"'s set of replicaSubentries for "B".  The
special control mentioned above should provide a specially
access controlled mechanism to "persuade" "X" to receive
the initial replica topology information about a new replica "B"
context from the primary replica of the naming context.  Some
folks may want there to be some action at server "X" to advise it
to expect such a thing, and that's fine - it's just a special
access control thing ("accept it if this has been set already").

Ed

=================
Ed Reed
Reed-Matthews, Inc.
+1 585 624 2402
http://www.Reed-Matthews.COM
Note:  Area code is 585

>>> "Kurt D. Zeilenga" <Kurt@OpenLDAP.org> 12/07/01 12:28PM >>>

Replying to my own message to stress one point:

I believe that the LDUP technical specification must define
how replicas are to populate the namingContexts attribute in
their root DSE, as replicas participating in multi-master
replication clearly cannot do this in accordance with the
LDAP "core" technical specification.

Kurt

At 08:52 AM 2001-12-07, Kurt D. Zeilenga wrote:
>At 09:50 PM 2001-12-06, Steven Legg wrote:
>>I don't believe there is any way we can define multimaster replication
>>such that we remain 100% consistent with the existing definition
>>for a naming context. Either we violate the assumption that the DIT
>>is partitioned into disjoint naming contexts, or we violate the
>>requirement that the superior of a context prefix entry is in a
>>different master DSA.
>
>Guess I would prefer to violate the latter, not the former.
>
>>Determining naming contexts with respect to a
>>primary satisfies the former but not the latter.
>
>Concur.



From owner-ietf-ldup@mail.imc.org  Fri Dec  7 18:21:26 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id SAA02337
	for <ldup-archive@odin.ietf.org>; Fri, 7 Dec 2001 18:21:25 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fB7N6jQ19606
	for ietf-ldup-bks; Fri, 7 Dec 2001 15:06:45 -0800 (PST)
Received: from pretender.boolean.net (root@router.boolean.net [198.144.206.49])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fB7N6i219602
	for <ietf-ldup@imc.org>; Fri, 7 Dec 2001 15:06:44 -0800 (PST)
Received: from nomad.OpenLDAP.org (root@localhost [127.0.0.1])
	by pretender.boolean.net (8.11.3/8.11.1/Boolean/Hub) with ESMTP id fB7NDTC55460;
	Fri, 7 Dec 2001 23:13:29 GMT
	(envelope-from Kurt@OpenLDAP.org)
Message-Id: <5.1.0.14.0.20011207134734.016c2808@127.0.0.1>
X-Sender: kurt@127.0.0.1
X-Mailer: QUALCOMM Windows Eudora Version 5.1
Date: Fri, 07 Dec 2001 15:06:11 -0800
To: "Ed Reed" <eer@OnCallDBA.COM>
From: "Kurt D. Zeilenga" <Kurt@OpenLDAP.org>
Subject: RE: naming contexts
Cc: <steven.legg@adacel.com.au>, <ietf-ldup@imc.org>
In-Reply-To: <sc10e1a0.009@smtp.oncalldba.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


At 02:34 PM 2001-12-07, Ed Reed wrote:
>What do you mean "how"?  I'm just not sure what you're
>asking the docs to describe - I have been assuming that the
>management operations doc would say something like...

LDAP [RFC 2251] says "namingContexts: naming contexts held in
the server. Naming contexts are defined in section 17 of X.501 [6]."

This, IMO, provides an adequate technical specification (the
"how") for the namingContexts attribute.

However, as we've discussed on this list, this definition
is inadequate in the face of LDUP multi-master replication.
Hence, LDUP needs to provide an adequate technical specification
(the "how") for the namingContexts attribute.

Currently, the LDUP Architecture seems to define "naming context"
as:
        A Naming Context is a subtree of entries in the
        Directory Information Tree (DIT).  There may be
        multiple Naming Contexts stored on a single server.
        Naming Contexts are defined in section 17 of [X501].

Though the first sentence I find a little misleading as a
Naming Context is a subtree of entries with very particular
properties, the second sentence indicates that the definition
of naming context is that in section 17 of [X501].  The
document then goes on to define "replication contexts".

The Information Model seems to equate a "naming context"
with a "replication context".


> [lots of stuff about how replication areas are created]

>Also, no server "X" should ACCEPT changes to replica "B" from any server
>"Y" that isn't represented in "X"'s set of replicaSubentries for "B".  The
>special control mentioned above should provide a specially
>access controlled mechanism to "persuade" "X" to receive
>the initial replica topology information about a new replica "B"
>context from the primary replica of the naming context.

Do you really mean "naming context" here?



From owner-ietf-ldup@mail.imc.org  Fri Dec  7 19:30:28 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id TAA03431
	for <ldup-archive@odin.ietf.org>; Fri, 7 Dec 2001 19:30:27 -0500 (EST)
Received: by above.proper.com (8.11.6/8.11.3) id fB80CZC23420
	for ietf-ldup-bks; Fri, 7 Dec 2001 16:12:35 -0800 (PST)
Received: from pretender.boolean.net (root@router.boolean.net [198.144.206.49])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fB80CW223416
	for <ietf-ldup@imc.org>; Fri, 7 Dec 2001 16:12:33 -0800 (PST)
Received: from nomad.OpenLDAP.org (root@localhost [127.0.0.1])
	by pretender.boolean.net (8.11.3/8.11.1/Boolean/Hub) with ESMTP id fB80JKC55739;
	Sat, 8 Dec 2001 00:19:20 GMT
	(envelope-from Kurt@OpenLDAP.org)
Message-Id: <5.1.0.14.0.20011207155651.016cfa80@127.0.0.1>
X-Sender: kurt@127.0.0.1
X-Mailer: QUALCOMM Windows Eudora Version 5.1
Date: Fri, 07 Dec 2001 16:12:02 -0800
To: "Ed Reed" <eer@OnCallDBA.COM>
From: "Kurt D. Zeilenga" <Kurt@OpenLDAP.org>
Subject: RE: naming contexts
Cc: <steven.legg@adacel.com.au>, <ietf-ldup@imc.org>
In-Reply-To: <5.1.0.14.0.20011207134734.016c2808@127.0.0.1>
References: <sc10e1a0.009@smtp.oncalldba.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


Guess I hit that send button a little early on that last post...

I believe the LDUP Technical Specification needs to say
something like:
        LDAP naming contexts [Section 17 of X.501] are not used
        by LDUP implementations as their definition does not
        address the implications of multi-master replication.

        Implementations of LDUP will populate the namingContexts
        attribute with LDUP Foo contexts, not LDAP/X.500
        naming contexts.

        A LDUP Foo context is a ....

Substitute 'Foo' with something appropriate and provide an
adequate definition of what a 'Foo Context' is. 

Kurt

At 03:06 PM 2001-12-07, Kurt D. Zeilenga wrote:

>At 02:34 PM 2001-12-07, Ed Reed wrote:
>>What do you mean "how"?  I'm just not sure what you're
>>asking the docs to describe - I have been assuming that the
>>management operations doc would say something like...
>
>LDAP [RFC 2251] says "namingContexts: naming contexts held in
>the server. Naming contexts are defined in section 17 of X.501 [6]."
>
>This, IMO, provides an adequate technical specification (the
>"how") for the namingContexts attribute.
>
>However, as we've discussed on this list, this definition
>is inadequate in the face of LDUP multi-master replication.
>Hence, LDUP needs to provide an adequate technical specification
>(the "how") for the namingContexts attribute.
>
>Currently, the LDUP Architecture seems to define "naming context"
>as:
>        A Naming Context is a subtree of entries in the
>        Directory Information Tree (DIT).  There may be
>        multiple Naming Contexts stored on a single server.
>        Naming Contexts are defined in section 17 of [X501].
>
>Though the first sentence I find a little misleading as a
>Naming Context is a subtree of entries with very particular
>properties, the second sentence indicates that the definition
>of naming context is that in section 17 of [X501].  The
>document then goes on to define "replication contexts".
>
>The Information Model seems to equate a "naming context"
>with a "replication context".
>
>
>> [lots of stuff about how replication areas are created]
>
>>Also, no server "X" should ACCEPT changes to replica "B" from any server
>>"Y" that isn't represented in "X"'s set of replicaSubentries for "B".  The
>>special control mentioned above should provide a specially
>>access controlled mechanism to "persuade" "X" to receive
>>the initial replica topology information about a new replica "B"
>>context from the primary replica of the naming context.
>
>Do you really mean "naming context" here?



From owner-ietf-ldup@mail.imc.org  Fri Dec  7 19:35:23 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id TAA03555
	for <ldup-archive@odin.ietf.org>; Fri, 7 Dec 2001 19:35:22 -0500 (EST)
Received: by above.proper.com (8.11.6/8.11.3) id fB80Jeo23943
	for ietf-ldup-bks; Fri, 7 Dec 2001 16:19:40 -0800 (PST)
Received: from smtp.oncalldba.com (roc-24-169-98-153.rochester.rr.com [24.169.98.153])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fB80Jd223936
	for <ietf-ldup@imc.org>; Fri, 7 Dec 2001 16:19:39 -0800 (PST)
Received: from RMINC_DOM-MTA by smtp.oncalldba.com
	with Novell_GroupWise; Fri, 07 Dec 2001 19:10:30 -0700
Message-Id: <sc111426.012@smtp.oncalldba.com>
X-Mailer: Novell GroupWise Internet Agent 6.0
Date: Fri, 07 Dec 2001 19:10:24 -0700
From: "Ed Reed" <eer@OnCallDBA.COM>
To: <Kurt@OpenLDAP.org>
Cc: <steven.legg@adacel.com.au>, <ietf-ldup@imc.org>
Subject: RE: naming contexts
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
X-MIME-Autoconverted: from quoted-printable to 8bit by above.proper.com id fB80Jd223938
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


No - I meant replica context.

I'll confess that when I started on LDUP I didn't understand what
naming contexts were, and I frankly thought they were just a list
of all the partitions replicated onto a DSA.  I've since learned
differently, but that confusion led to our initially talking about
naming contexts, and when I finally got it, we changed and
began talking about replication areas and replica contexts.  Some
of the earlier confusion may not have been wrung out of the info
draft, for which I'm sorry.

I'll make a special effort in the next round of doc cleanup to get
the two terms consistently used throughout the docs.

Ed

>>> "Kurt D. Zeilenga" <Kurt@OpenLDAP.org> 12/07/01 06:06PM >>>
At 02:34 PM 2001-12-07, Ed Reed wrote:
>What do you mean "how"?  I'm just not sure what you're
>asking the docs to describe - I have been assuming that the
>management operations doc would say something like...

LDAP [RFC 2251] says "namingContexts: naming contexts held in
the server. Naming contexts are defined in section 17 of X.501 [6]."

This, IMO, provides an adequate technical specification (the
"how") for the namingContexts attribute.

However, as we've discussed on this list, this definition
is inadequate in the face of LDUP multi-master replication.
Hence, LDUP needs to provide an adequate technical specification
(the "how") for the namingContexts attribute.

Currently, the LDUP Architecture seems to define "naming context"
as:
        A Naming Context is a subtree of entries in the
        Directory Information Tree (DIT).  There may be
        multiple Naming Contexts stored on a single server.
        Naming Contexts are defined in section 17 of [X501].

Though the first sentence I find a little misleading as a
Naming Context is a subtree of entries with very particular
properties, the second sentence indicates that the definition
of naming context is that in section 17 of [X501].  The
document then goes on to define "replication contexts".

The Information Model seems to equate a "naming context"
with a "replication context".


> [lots of stuff about how replication areas are created]

>Also, no server "X" should ACCEPT changes to replica "B" from any server
>"Y" that isn't represented in "X"'s set of replicaSubentries for "B".  The
>special control mentioned above should provide a specially
>access controlled mechanism to "persuade" "X" to receive
>the initial replica topology information about a new replica "B"
>context from the primary replica of the naming context.

Do you really mean "naming context" here?




From owner-ietf-ldup@mail.imc.org  Fri Dec  7 20:11:37 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id UAA03984
	for <ldup-archive@odin.ietf.org>; Fri, 7 Dec 2001 20:11:36 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fB80v4d26159
	for ietf-ldup-bks; Fri, 7 Dec 2001 16:57:04 -0800 (PST)
Received: from tconl91223.tconl.com (tconl91223.tconl.com [204.26.91.223])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fB80v2226154
	for <ietf-ldup@imc.org>; Fri, 7 Dec 2001 16:57:02 -0800 (PST)
Received: (from jayhawk@localhost)
	by tconl91223.tconl.com (8.11.0/8.11.0) id fB80rIZ01038;
	Fri, 7 Dec 2001 18:53:18 -0600
Date: Fri, 7 Dec 2001 18:53:17 -0600
From: Ryan Moats <rmoats@lemurnetworks.net>
To: Steven Legg <steven.legg@adacel.com.au>
Cc: ietf-ldup@imc.org
Subject: Re: Supporting Partial Replication
Message-ID: <20011207185317.G956@localhost.localdomain>
References: <000c01c17e08$6a867780$a518200a@osmium.mtwav.adacel.com.au>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.2.5i
In-Reply-To: <000c01c17e08$6a867780$a518200a@osmium.mtwav.adacel.com.au>; from steven.legg@adacel.com.au on Thu, Dec 06, 2001 at 02:45:14PM +1100
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


On Thu, Dec 06, 2001 at 02:45:14PM +1100, Steven Legg wrote:
| 
| 
| Folks,
| 
| A while ago I promised to write up my thoughts on changes to the LDUP
| architecture to support partial replication. Well this is part one of
| that write up, which discusses changes to the architecture to make it
| more amenable to replication topologies involving partial replicas.
| 
| 
| Consider the following replication topology:
| 
|   S1 ====== S2
|     \      /
|      \    /
|       \  /
|        S3
| 
| Servers S1 & S2 hold full copies of replication area R1. S3 holds
| replication area R2, a subset of R1. R2 could be a subtree of R1, a
| sparse replica or a fractional replica. The exact details don't matter
| at this stage. It is enough to recognize that R2 is a subset of the
| information in R1.
| 
| Suppose that there are two successive update operations, U1 & U2, performed
| at S2, where U1 affects information in R1 but wholly outside of R2 and U2
| is wholly within R2 (and thus also within R1). The CSN allotted to U1 is
| less than the CSN allotted to U2.
| 
| Suppose S3 and S2 establish replication sessions to exchange updates.
| S3 has no changes to send. S2 will send U2 because it is within the scope
| of the replication agreement S3 has with S2, but will not send U1.
| 
| S3 and S1 then establish replication sessions. S1 has no changes to send.
| S3 sends U2 since the CSN for U2 is more recent than the CSN corresponding
| to S2 in S1's update vector. S1 will set the CSN corresponding to S2 in
| its update vector to be the CSN for U2.
| 
| Now, if S2 establishes a replication session with S1 it will send no
| updates. In particular, it won't send U1 because the CSN corresponding to
| S2 in S1's update vector is already greater than the CSN for U1. In fact,
| S1 will never receive U1, so the requirement for all replicas to converge
| will not be satisfied. In general, the current LDUP architecture only
| works if the replication topology has no cycles, or where there are
| cycles, if the replicas in each cycle have replication agreements for
| exactly the same area of replication.
|
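Steven's scenario can be replayed with a small simulation (an editor's sketch; the Server class, the one-max-CSN-per-peer update vector, and the scope filter are simplifying assumptions, not LDUP specification text):

```python
# Editor's sketch of the scenario quoted above (simplified assumptions: one
# integer CSN per update; the update vector holds a single max CSN per
# originating server; consumers receive only updates within their scope).

class Server:
    def __init__(self, name, scope):
        self.name = name
        self.scope = scope   # replication areas this replica holds
        self.log = []        # updates held: (csn, origin, area)
        self.uv = {}         # update vector: origin server -> max CSN received

    def local_update(self, csn, area):
        self.log.append((csn, self.name, area))
        self.uv[self.name] = csn

    def replicate_to(self, other):
        # Send only updates newer than the consumer's recorded CSN for the
        # originating server, filtered to the consumer's replication scope.
        for csn, origin, area in sorted(self.log):
            if csn > other.uv.get(origin, 0) and area in other.scope:
                other.log.append((csn, origin, area))
                other.uv[origin] = csn

s1 = Server("S1", {"R1", "R2"})  # full replica (R2 is a subset of R1)
s2 = Server("S2", {"R1", "R2"})  # full replica
s3 = Server("S3", {"R2"})        # partial replica: holds only R2

s2.local_update(1, "R1")  # U1: within R1, wholly outside R2
s2.local_update(2, "R2")  # U2: within R2 (and therefore within R1)

s2.replicate_to(s3)  # S3 receives U2 only; U1 is outside S3's scope
s3.replicate_to(s1)  # S1 receives U2; S1's UV entry for S2 jumps to 2
s2.replicate_to(s1)  # U1 (CSN 1) is not newer than S1's UV entry (2): skipped

print((1, "S2", "R1") in s1.log)  # False -- S1 never receives U1
```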

Hold on...  I'm deleting the rest of this message because you've lost me
here.  I thought that (a) we had a separate CSN vector for each other server
and that (b) that CSN vector was to the level of attribute and entry.  Thus,
I don't see the problem.

Ryan


From owner-ietf-ldup@mail.imc.org  Mon Dec 10 09:38:49 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id JAA02343
	for <ldup-archive@odin.ietf.org>; Mon, 10 Dec 2001 09:38:48 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fBAEJEr05394
	for ietf-ldup-bks; Mon, 10 Dec 2001 06:19:14 -0800 (PST)
Received: from e1.ny.us.ibm.com (e1.ny.us.ibm.com [32.97.182.101])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fBAEJ7205388
	for <ietf-ldup@imc.org>; Mon, 10 Dec 2001 06:19:07 -0800 (PST)
Received: from northrelay02.pok.ibm.com (northrelay02.pok.ibm.com [9.117.200.22])
	by e1.ny.us.ibm.com (8.9.3/8.9.3) with ESMTP id JAA264776;
	Mon, 10 Dec 2001 09:15:36 -0500
Received: from d27ml001.rchland.ibm.com (d27ml001.rchland.ibm.com [9.5.39.28])
	by northrelay02.pok.ibm.com (8.11.1m3/NCO v5.01) with ESMTP id fBAEIK581030;
	Mon, 10 Dec 2001 09:18:20 -0500
Subject: Re: Supporting Partial Replication
To: Ryan Moats <rmoats@lemurnetworks.net>
Cc: ietf-ldup@imc.org, Steven Legg <steven.legg@adacel.com.au>
X-Mailer: Lotus Notes Release 5.0.9  November 16, 2001
Message-ID: <OF3AC3E5B6.C390151E-ON86256B1E.004964B4@rchland.ibm.com>
From: "John McMeeking" <jmcmeek@us.ibm.com>
Date: Mon, 10 Dec 2001 08:22:08 -0600
X-MIMETrack: Serialize by Router on d27ml001/27/M/IBM(Build M10_08082001 Beta 3|August
 08, 2001) at 12/10/2001 08:22:10 AM
MIME-Version: 1.0
Content-type: multipart/alternative; 
	Boundary="0__=09BBE18DDFDAE2248f9e8a93df938690918c09BBE18DDFDAE224"
Content-Disposition: inline
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


--0__=09BBE18DDFDAE2248f9e8a93df938690918c09BBE18DDFDAE224
Content-type: text/plain; charset=US-ASCII

I agree with Ryan on the first part of the note (I haven't gotten through
the flurry of LDUP notes).  But, we may have a problem with update vectors
under partial/fractional replication.

Update vectors are defined per replica and per replication context
(actually, defined per replica, where a replica is a replicated instance of
a replication context).

If R2 is a subtree of R1, separate update vectors are maintained for both
R2 and R1, and the problem Steven describes doesn't occur.  For either S1
or S2 to replicate U1 to S3, replication context R1 must be added to S3,
and the replication context properly initialized on S3 -- either via a full
update replication session, or via some other means (e.g., LDIF).  At that
point, U1 (and the rest of the entries in R1) are present on S3 and
replication continues normally.  Replication of R1 is independent of R2.

If R2 is a sparse/fractional replica of R1, R2 would not be considered a
separate replication context.  In this case, sparse/fractional replication
is an attribute of the replicaSubentry for S3.  If U1 falls within the
attributes and/or entries specified for S3, it will be replicated under the
replication agreements targeting S3 under R1, and the UV for S3 updated
accordingly.

What happens when S3 is a fractional replica, and U1 does not contain any
attributes replicated to S3?  draft-ietf-ldup-model-06, section 8.2,
specifies "When fully populating or incrementally bringing up to date a
Fractional Replica each of the Replication Updates must only
contain updates to the attributes in the Fractional Entry Specification."
This implies that S3 will never see U1, and thus not fully update its
update vector until such time as it receives an update originating at the
same server.  Steven's example described U1 and U2 as successive updates
originating at S2.  Suppose U1 originated at S1 and U2 originated at S2:

Initially, the update vector for S3 (UV3) looks like < Tx#1#0#0, Ty#2#0#0,
null > (latest CSNs from S1 and S2, no updates originating at S3)

At time T15, on server 1, client performs update U1: CSN = T15#1#0#0.

This is replicated to S2, and eventually S2 replicates to S3.  But since no
attributes in U1 are present in S3's fractional entry specification, no
replication occurs.

Update vector for S3 remains < Tx#1#0#0, Ty#2#0#0, null >

At time T16, on server 2, client performs update U2: CSN = T16#2#0#0.

U2 is replicated to S3.

Update vector for S3 is changed to < Tx#1#0#0, T16#2#0#0, null >.  What we
want to see here, though, is: < T15#1#0#0, T16#2#0#0, null >

Since T15-1 never appears in the update vector for S3, two things happen:
1) purge vectors never advance to include U1 until a subsequent change on
S1 results in an update vector change for S3
2) Servers replicating to S3 must examine update U1 (and possibly similar
updates) at each replication session to determine that it should not be
sent to S3, as it appears that U1 has never been seen by S3.
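The trace above can be replayed in miniature (an editor's sketch; the (time, server_id) CSN tuples stand in for the #-delimited CSNs, and the attribute names and filter rule are illustrative assumptions for the Fractional Entry Specification):

```python
# Editor's sketch replaying the update-vector trace above. CSNs are
# simplified to (time, server_id) tuples; attribute names are illustrative.

fractional_attrs = {"cn", "mail"}   # stands in for S3's fractional entry spec

uv3 = {1: (0, 1), 2: (0, 2)}        # stands in for < Tx#1#0#0, Ty#2#0#0, null >

def replicate_to_s3(csn, origin, touched_attrs):
    # Per the quoted text from draft-ietf-ldup-model-06 section 8.2, updates
    # touching no replicated attribute are never sent, so S3's update vector
    # advances only for updates it actually receives.
    if touched_attrs & fractional_attrs:
        uv3[origin] = max(uv3[origin], csn)

replicate_to_s3((15, 1), 1, {"employeeNumber"})  # U1: filtered out entirely
replicate_to_s3((16, 2), 2, {"mail"})            # U2: applied

print(uv3[1])  # (0, 1)  -- never advances past U1; purge vectors stall
print(uv3[2])  # (16, 2)
```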

Solution?

Can the update protocol support replication primitives that, in effect,
contain nothing but the entry UUID and CSN information?  For example, an
"add attribute value primitive" that consists of:
csn=T15-1
type=null
value=null

Or should there be a new primitive that serves only to inform the replica
of a CSN?
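A CSN-only primitive along the lines John suggests could look roughly like this (an editor's sketch; the class name, fields, and lexical CSN comparison are illustrative assumptions, not from any LDUP draft):

```python
# Editor's sketch (illustrative, not from any LDUP draft): a replication
# primitive carrying only enough information to advance the consumer's
# update vector, with no attribute content.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReplicationPrimitive:
    csn: str                          # e.g. "T15#1#0#0"
    entry_uuid: Optional[str] = None
    attr_type: Optional[str] = None   # both None in a CSN-only primitive
    attr_value: Optional[str] = None

def apply_to_uv(uv, prim):
    # Advance the consumer's update vector for the originating server even
    # when the primitive carries no attribute content at all.
    origin = prim.csn.split("#")[1]   # second #-field is the replica id
    if prim.csn > uv.get(origin, ""): # lexical compare: assumes fixed-width,
        uv[origin] = prim.csn         # zero-padded timestamps

uv = {"1": "T09#1#0#0", "2": "T10#2#0#0"}
apply_to_uv(uv, ReplicationPrimitive(csn="T15#1#0#0"))
print(uv["1"])  # T15#1#0#0 -- the vector advances without replaying U1's data
```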


John  McMeeking



                                                                                                                           
Ryan Moats <rmoats@lemurnetworks.net>
Sent by: owner-ietf-ldup@mail.imc.org
12/07/2001 06:53 PM

To:      Steven Legg <steven.legg@adacel.com.au>
cc:      ietf-ldup@imc.org
Subject: Re: Supporting Partial Replication
                                                                                                                           




On Thu, Dec 06, 2001 at 02:45:14PM +1100, Steven Legg wrote:
|
|
| Folks,
|
| A while ago I promised to write up my thoughts on changes to the LDUP
| architecture to support partial replication. Well this is part one of
| that write up, which discusses changes to the architecture to make it
| more amenable to replication topologies involving partial replicas.
|
|
| Consider the following replication topology:
|
|   S1 ====== S2
|     \      /
|      \    /
|       \  /
|        S3
|
| Servers S1 & S2 hold full copies of replication area R1. S3 holds
| replication area R2, a subset of R1. R2 could be a subtree of R1, a
| sparse replica or a fractional replica. The exact details don't matter
| at this stage. It is enough to recognize that R2 is a subset of the
| information in R1.
|
| Suppose that there are two successive update operations, U1 & U2, performed
| at S2, where U1 affects information in R1 but wholly outside of R2 and U2
| is wholly within R2 (and thus also within R1). The CSN allotted to U1 is
| less than the CSN allotted to U2.
|
| Suppose S3 and S2 establish replication sessions to exchange updates.
| S3 has no changes to send. S2 will send U2 because it is within the scope
| of the replication agreement S3 has with S2, but will not send U1.
|
| S3 and S1 then establish replication sessions. S1 has no changes to send.
| S3 sends U2 since the CSN for U2 is more recent than the CSN corresponding
| to S2 in S1's update vector. S1 will set the CSN corresponding to S2 in
| its update vector to be the CSN for U2.
|
| Now, if S2 establishes a replication session with S1 it will send no
| updates. In particular, it won't send U1 because the CSN corresponding to
| S2 in S1's update vector is already greater than the CSN for U1. In fact,
| S1 will never receive U1, so the requirement for all replicas to converge
| will not be satisfied. In general, the current LDUP architecture only
| works if the replication topology has no cycles, or where there are
| cycles, if the replicas in each cycle have replication agreements for
| exactly the same area of replication.
|

Hold on...  I'm deleting the rest of this message because you've lost me
here.  I thought that (a) we had a separate CSN vector for each other server
and that (b) that CSN vector was to the level of attribute and entry.  Thus,
I don't see the problem.

Ryan



--0__=09BBE18DDFDAE2248f9e8a93df938690918c09BBE18DDFDAE224--



From owner-ietf-ldup@mail.imc.org  Mon Dec 10 17:29:55 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id RAA13374
	for <ldup-archive@lists.ietf.org>; Mon, 10 Dec 2001 17:29:54 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fBALsxV03286
	for ietf-ldup-bks; Mon, 10 Dec 2001 13:54:59 -0800 (PST)
Received: from out003pub.verizon.net (out003pub.verizon.net [206.46.170.103])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fBALsw203282
	for <ietf-ldup@imc.org>; Mon, 10 Dec 2001 13:54:58 -0800 (PST)
Received: from D7ST2111 (pool-141-151-10-229.phil.east.verizon.net [141.151.10.229])
	by out003pub.verizon.net  with ESMTP
	for <ietf-ldup@imc.org>; id fBALoHP24080
	Mon, 10 Dec 2001 15:50:17 -0600 (CST)
Reply-To: <christopher.apple@verizon.net>
From: "Chris Apple" <christopher.apple@verizon.net>
To: <ietf-ldup@imc.org>
Subject: Reminder to all LDUP document editors
Date: Mon, 10 Dec 2001 16:54:11 -0600
Message-ID: <000001c181cd$985c2920$0200a8c0@D7ST2111>
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="----=_NextPart_000_0001_01C1819B.4DC1B920"
X-Priority: 3 (Normal)
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook, Build 10.0.3311
Importance: Normal
X-MS-TNEF-Correlator: 00000000473A54737459124EBB63D153C6B1DF5244212100
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2600.0000
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


This is a multi-part message in MIME format.

------=_NextPart_000_0001_01C1819B.4DC1B920
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

If you have a document which shows up on the WG Agenda and are not able to
attend the WG meeting in person this Thursday, please be sure to send John
and me input so that I can discuss the document status on your behalf. Please
disregard this request if you've already sent us e-mail, as a few of you have
done.

Chris Apple

------=_NextPart_000_0001_01C1819B.4DC1B920--




From owner-ietf-ldup@mail.imc.org  Tue Dec 11 17:19:47 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id RAA26199
	for <ldup-archive@odin.ietf.org>; Tue, 11 Dec 2001 17:19:45 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fBBLxmZ18989
	for ietf-ldup-bks; Tue, 11 Dec 2001 13:59:48 -0800 (PST)
Received: from mailgw2a.lmco.com (mailgw2a.lmco.com [192.91.147.7])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fBBLxk218985
	for <ietf-ldup@imc.org>; Tue, 11 Dec 2001 13:59:46 -0800 (PST)
Received: from emss03g01.ems.lmco.com (emss03g01.ems.lmco.com [141.240.4.144])
	by mailgw2a.lmco.com (8.8.8/8.8.8) with ESMTP id QAA00992
	for <ietf-ldup@imc.org>; Tue, 11 Dec 2001 16:59:48 -0500 (EST)
Received: from CONVERSION-DAEMON by lmco.com (PMDF V5.2-33 #38888) id <0GO700B018W3UP@lmco.com> for ietf-ldup@imc.org; Tue,
 11 Dec 2001 16:59:47 -0500 (EST)
Received: from emss03i00.ems.lmco.com ([141.240.31.211]) by lmco.com (PMDF V5.2-33 #38888)
 with ESMTP id <0GO700DJF8VXTC@lmco.com> for ietf-ldup@imc.org; Tue, 11 Dec 2001 16:55:09 -0500 (EST)
Received: by emss03i00.orl.lmco.com with Internet Mail Service (5.5.2653.19)	id <YR7KBQM6>; Tue, 11 Dec 2001 16:55:08 -0500
Content-return: allowed
Date: Tue, 11 Dec 2001 16:55:08 -0500
From: "Slone, Skip" <skip.slone@lmco.com>
Subject: Follow-up on Naming of Subentries
To: "LDUP Mailing List (ietf-ldup@imc.org)" <ietf-ldup@imc.org>
Message-id: <B23207A86E7BD411A7000008C7E6693C780065@emss03m03.orl.lmco.com>
MIME-version: 1.0
X-Mailer: Internet Mail Service (5.5.2653.19)
Content-type: multipart/mixed; boundary="Boundary_(ID_Qy1yqQOVRgAIj67vUKcYUQ)"
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>



--Boundary_(ID_Qy1yqQOVRgAIj67vUKcYUQ)
Content-type: multipart/alternative;
 boundary="Boundary_(ID_MYjvf3mnzxljvgq52YZt3Q)"


--Boundary_(ID_MYjvf3mnzxljvgq52YZt3Q)
Content-type: text/plain
Content-Transfer-Encoding: 7BIT

Greetings to all,
 
Although I'm not able to make the SLC meeting this week, I do have some
comments to report back as a follow-up to the discussion we had in London.
At the close of the meeting, there were two subentry naming issues
outstanding that I was to take to the next X.500 meeting -- subordination of
subentries and use of subentry naming attributes other than cn.  
 
We discussed these topics at the X.500 meeting last month.  The consensus of
the group was that although we're not necessarily opposed to either of these
changes, we need to better understand the requirements, and we want to make
sure that by making the changes, we're not creating unintended additional
semantics. 
 
Bottom line -- we're open to making these changes as long as we understand
the requirements and we can make them in a way that avoids negative side
effects.  Our next meeting is in late February / early March (prior to IETF
53), so if we can get an articulation of the requirements before then, we
should be able to have a clear direction by the Minneapolis meeting.
 
Regards,
 
 -- Skip Slone
 

--Boundary_(ID_MYjvf3mnzxljvgq52YZt3Q)--

--Boundary_(ID_Qy1yqQOVRgAIj67vUKcYUQ)--


From owner-ietf-ldup@mail.imc.org  Tue Dec 11 18:11:26 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id SAA27184
	for <ldup-archive@odin.ietf.org>; Tue, 11 Dec 2001 18:11:25 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fBBMrKQ23679
	for ietf-ldup-bks; Tue, 11 Dec 2001 14:53:20 -0800 (PST)
Received: from patan.sun.com (patan.Sun.COM [192.18.98.43])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fBBMrJ223672
	for <ietf-ldup@imc.org>; Tue, 11 Dec 2001 14:53:19 -0800 (PST)
Received: from rowlf.Central.Sun.COM ([129.153.131.70])
	by patan.sun.com (8.9.3+Sun/8.9.3) with ESMTP id PAA21754;
	Tue, 11 Dec 2001 15:53:03 -0700 (MST)
Received: from sun.com (vpn-129-147-152-74.Central.Sun.COM [129.147.152.74])
	by rowlf.Central.Sun.COM (8.10.2+Sun/8.10.2/ENSMAIL,v2.1p1) with ESMTP id fBBMrJl24614;
	Tue, 11 Dec 2001 16:53:20 -0600 (CST)
Message-ID: <3C168C47.62EC6736@sun.com>
Date: Tue, 11 Dec 2001 16:44:23 -0600
From: Mark Wahl <Mark.Wahl@Sun.COM>
X-Mailer: Mozilla 4.77 [en] (WinNT; U)
X-Accept-Language: en
MIME-Version: 1.0
To: "Slone, Skip" <skip.slone@lmco.com>
CC: "LDUP Mailing List (ietf-ldup@imc.org)" <ietf-ldup@imc.org>
Subject: Re: Follow-up on Naming of Subentries
References: <B23207A86E7BD411A7000008C7E6693C780065@emss03m03.orl.lmco.com>
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


> "Slone, Skip" wrote:
>
> We discussed these topics at the X.500 meeting last month.  The consensus of
> the group was that although we're not necessarily opposed to either of these
> changes, we need to better understand the requirements, and we want to make
> sure that by making the changes, we're not creating unintended additional
> semantics.

Having typed attributes in X.500/LDAP is one of the main advantages of 
these directories over more primitive data models such as DNS. However,
since I believe Subentries are Readable in X.500, the presence of a subentry
would prevent an object entry from existing in the server with the same name.  It
is unfortunate that cn is the naming attribute for subentries and not something
more 'operational', since cn is frequently used as a naming attribute for 
object entries.  This means that an application which wants to create an object
entry named by cn must not only check to see whether there is an object 
entry with the same name, but also use a subentry-aware operation to check 
that there is no subentry with the same name.  This extra check isn't
worthwhile, since applications which are bulk loading object entries may
not care about subentries if they are gateways from other protocols or
data models.
It would be FAR better to leverage this advantage and allow a more
administrator/operational-oriented naming attribute such as 'subentryname'
for naming subentries.  This attribute would be intended exclusively for
use by subentries and so would never conflict with object entries named
by cn.
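[The double check described above can be sketched abstractly. This is not
any real LDAP API; the children model, the attribute names, and the
alternative 'subentryname' attribute are purely illustrative.]

```python
# Toy model: a parent's children are (rdn_attribute, rdn_value, is_subentry)
# tuples. If subentries keep using cn for naming, adding an object entry
# named by cn must check object entries AND subentries for a collision.

def can_add_object_entry(children, cn_value):
    """Adding cn=<cn_value> must fail if EITHER an ordinary object entry
    OR a subentry already occupies that name under the same parent."""
    return not any(attr == "cn" and val == cn_value
                   for attr, val, _is_subentry in children)

children = [("cn", "Replication Config", True),   # a subentry named by cn
            ("cn", "Alice", False)]               # an ordinary object entry

print(can_add_object_entry(children, "Alice"))               # False
print(can_add_object_entry(children, "Replication Config"))  # False: subentry collides
print(can_add_object_entry(children, "Bob"))                 # True
```

Were subentries named by a dedicated attribute (e.g. a hypothetical
'subentryname'), the first check alone would suffice, since a subentry
could never occupy a cn-named slot.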

Mark Wahl
Sun Microsystems Inc.


From owner-ietf-ldup@mail.imc.org  Wed Dec 12 00:29:45 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id AAA04879
	for <ldup-archive@odin.ietf.org>; Wed, 12 Dec 2001 00:29:44 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fBC58HN12986
	for ietf-ldup-bks; Tue, 11 Dec 2001 21:08:17 -0800 (PST)
Received: from hotmail.com (f75.law14.hotmail.com [64.4.21.75])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fBC58F212982
	for <ietf-ldup@imc.org>; Tue, 11 Dec 2001 21:08:15 -0800 (PST)
Received: from mail pickup service by hotmail.com with Microsoft SMTPSVC;
	 Tue, 11 Dec 2001 21:08:14 -0800
Received: from 12.131.198.124 by lw14fd.law14.hotmail.msn.com with HTTP;
	Wed, 12 Dec 2001 05:08:13 GMT
X-Originating-IP: [12.131.198.124]
From: "Steven Legg" <stevenlegg@hotmail.com>
To: rmoats@lemurnetworks.net
Cc: ietf-ldup@imc.org, steven.legg@adacel.com.au
Subject: Re: Supporting Partial Replication
Date: Wed, 12 Dec 2001 05:08:13 
Mime-Version: 1.0
Content-Type: text/plain; format=flowed
Message-ID: <F75FiPNl4iZWw8gRzpP00000636@hotmail.com>
X-OriginalArrivalTime: 12 Dec 2001 05:08:14.0460 (UTC) FILETIME=[00B023C0:01C182CB]
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>



Ryan,

Please disregard my previous reply to you (which didn't make it to the
mailing list). More below.

Ryan Moats wrote:
>On Thu, Dec 06, 2001 at 02:45:14PM +1100, Steven Legg wrote:
>| Folks,
>| A while ago I promised to write up my thoughts on changes to the LDUP
>| architecture to support partial replication. Well this is part one of
>| that write up, which discusses changes to the architecture to make it
>| more amenable to replication topologies involving partial replicas.
>| Consider the following replication topology:
>|   S1 ====== S2
>|     \      /
>|      \    /
>|       \  /
>|        S3
>| Servers S1 & S2 hold full copies of replication area R1. S3 holds
>| replication area R2, a subset of R1. R2 could be a subtree of R1, a
>| sparse replica or a fractional replica. The exact details don't matter
>| at this stage. It is enough to recognize that R2 is a subset of the
>| information in R1.
>| Suppose that there are two successive update operations, U1 & U2,
>| performed at S2, where U1 affects information in R1 but wholly outside
>| of R2 and U2 is wholly within R2 (and thus also within R1). The CSN
>| allotted to U1 is less than the CSN allotted to U2.
>| Suppose S3 and S2 establish replication sessions to exchange updates.
>| S3 has no changes to send. S2 will send U2 because it is within the scope
>| of the replication agreement S3 has with S2, but will not send U1.
>| S3 and S1 then establish replication sessions. S1 has no changes to send.
>| S3 sends U2 since the CSN for U2 is more recent than the CSN corresponding
>| to S2 in S1's update vector. S1 will set the CSN corresponding to S2 in
>| its update vector to be the CSN for U2.
>| Now, if S2 establishes a replication session with S1 it will send no
>| updates. In particular, it won't send U1 because the CSN corresponding to
>| S2 in S1's update vector is already greater than the CSN for U1. In fact,
>| S1 will never receive U1, so the requirement for all replicas to converge
>| will not be satisfied. In general, the current LDUP architecture only
>| works if the replication topology has no cycles, or where there are
>| cycles, if the replicas in each cycle have replication agreements for
>| exactly the same area of replication.
>|
>
>Hold on...  I'm deleting the rest of this message because you've lost me
>here.

Sorry. I was operating under a false assumption that a server can have
only one replication context and therefore only one update vector. I'll
correct that shortly to remove some of the confusion.

>I thought that (a) we had a separate CSN vector for each other
>server

You're thinking of the purge vector. In the current architecture, each
replication context a server maintains contains a replica subentry that
holds a single CSN vector which is its own update vector. There is also
a replica subentry for each other server, which holds a copy of the
other server's update vector in the same replication context.

The combination of a server's own update vector and the update vectors
received from all the other servers constitutes the purge vector for
that replication context. The purge vector tells a server which of the
CSNs it holds are old enough to be discarded, but plays no part in
deciding which updates are propagated.
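[The relationship just described can be sketched as follows, with plain
integers standing in for real CSN structures; all names are illustrative
and not from any draft.]

```python
# Each server keeps its own update vector plus copies of every other
# server's (maps: replica ID -> latest CSN seen). A CSN is safe to purge
# once *every* server has seen it, so the purge bound per replica is the
# minimum across all the update vectors in the replication context.

def purge_vector(update_vectors):
    replicas = set()
    for uv in update_vectors.values():
        replicas.update(uv)
    return {r: min(uv.get(r, 0) for uv in update_vectors.values())
            for r in replicas}

uvs = {
    "S1": {"S1": 10, "S2": 7, "S3": 5},
    "S2": {"S1": 9,  "S2": 8, "S3": 5},
    "S3": {"S1": 10, "S2": 8, "S3": 6},
}
print(sorted(purge_vector(uvs).items()))
# [('S1', 9), ('S2', 7), ('S3', 5)]
```

Note the purge vector is strictly a garbage-collection bound; update
propagation decisions use the individual update vectors only.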

>and that (b) that CSN vector was to the level of attribute and entry.

A CSN in an update vector is a replication-context-wide value. It is
the CSN of the most recent update in the associated replication context
received from the corresponding server.

>Thus,
>I don't see the problem.

The critical bit I missed was that in the case where R2 is a complete
subordinate subtree of R1 the current architecture avoids losing
updates by making R2 a separate replication context. However, R2 can
also be a fractional replica or, in future, a sparse replica (although
the latest version of the information model effectively allows sparse
replicas by allowing non-trivial subtree specifications).

If R2 is a fractional replica then R1 and R2 share the same
replication context and therefore there is only one update vector for
each of S1, S2 and S3 in my example. Consider the example again, but
assume R2 is a fractional replica of R1. I've appended the example with
corrections as required.

Steven

=======================================================

Consider the following replication topology:

  S1 ====== S2
    \      /
     \    /
      \  /
       S3

Servers S1 & S2 hold full copies of replication area R1, which can be
considered to be the entire contents of a particular replication
context. S3 holds replication area R2, a subset of R1, in the same
replication context. R2 could be a sparse replica or a fractional
replica. The exact details don't matter at this stage. It is enough to
recognize that R2 is a subset of the information in R1, and that both
are in the same replication context.

Suppose that there are two successive update operations, U1 & U2, performed
at S2, where U1 affects information in R1 but wholly outside of R2 and U2
is wholly within R2 (and thus also within R1). The CSN allotted to U1 is
less than the CSN allotted to U2.

Suppose S3 and S2 establish replication sessions to exchange updates.
S3 has no changes to send. S2 will send U2 because it is within the scope
of the replication agreement S3 has with S2, but will not send U1.

S3 and S1 then establish replication sessions. S1 has no changes to send.
S3 sends U2 since the CSN for U2 is more recent than the CSN corresponding
to S2 in S1's update vector. S1 will set the CSN corresponding to S2 in
its update vector to be the CSN for U2.

Now, if S2 establishes a replication session with S1 it will send no
updates. In particular, it won't send U1 because the CSN corresponding to
S2 in S1's update vector is already greater than the CSN for U1. In fact,
S1 will never receive U1, so the requirement for all replicas to converge
will not be satisfied. In general, the current LDUP architecture only
works if the replication topology with respect to a particular replication
context has no cycles, or where there are cycles, if the replicas in each
cycle have replication agreements for exactly the same area of replication.
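[A toy simulation of this scenario reproduces the lost update. Integer
CSNs, one update vector per server keyed by originating server, and all
names are illustrative simplifications of the architecture.]

```python
# S1 and S2 hold R1 and R2; S3 holds only R2 (a subset of R1). An update
# filtered out by the agreement scope still advances the receiver's
# update vector when a later in-scope update arrives -- which is what
# lets U1 vanish.

class Server:
    def __init__(self, name, scope):
        self.name, self.scope = name, scope   # scope: set of areas held
        self.log = []                         # (csn, origin, area)
        self.uv = {}                          # origin -> highest CSN seen

    def originate(self, csn, area):
        self.log.append((csn, self.name, area))
        self.uv[self.name] = csn

    def replicate_to(self, other):
        """Send every logged update the receiver can store and whose CSN
        exceeds the receiver's update vector entry for its origin."""
        for csn, origin, area in sorted(self.log):
            if area in other.scope and csn > other.uv.get(origin, 0):
                other.log.append((csn, origin, area))
                other.uv[origin] = csn

s1 = Server("S1", {"R1", "R2"})
s2 = Server("S2", {"R1", "R2"})
s3 = Server("S3", {"R2"})
s2.originate(1, "R1")   # U1: inside R1, wholly outside R2
s2.originate(2, "R2")   # U2: inside R2 (hence inside R1 too)

s2.replicate_to(s3)     # S3 receives only U2; its uv for S2 becomes 2
s3.replicate_to(s1)     # S1 receives U2 and sets uv["S2"] = 2
s2.replicate_to(s1)     # nothing sent: all of S2's CSNs <= s1.uv["S2"]

print(s1.uv["S2"])                 # 2
print((1, "S2", "R1") in s1.log)   # False -> U1 never reaches S1
```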

However, we can get around this restriction by maintaining an update
vector and replica ID per replication area (per replication context)
for which a server has a replication agreement, instead of a single
update vector and single replica ID per replication context per
server.

A single replication context is assumed in what follows.
Let UV(S,R) be a reference to the update vector maintained by
server S for replication area R. It becomes convenient at this point to
have global unique identifiers for replication areas, e.g. R.
ASIDE: It also makes sense to have replication area descriptions as distinct
managed objects, and for replication agreement objects to just reference
a replication area by its unique identifier, instead of itself describing
the information to be replicated.

Suppose there is a server, S, with replication agreements for replication
area, R. We require a replica ID to uniquely identify the copy of the
information in R maintained by S. It is convenient for the purposes of
this discussion to use the notation S.R for that replica ID.

Let T be some other server and let Q be a replication area maintained by T.
An element in UV(S,R) for replica T.Q with the CSN value, C, is an assertion
that S has received from T.Q all updates to R with CSNs less than or equal
to C.

If all such updates have been received then it is also true that S has
received from T.Q all updates (with CSNs less than or equal to C) to
every replication area P, where P is a subset of R.

All the client updates processed by T.Q must be within replication area Q,
so if Q is a subset of, or the same as, R then S has received from T.Q all
updates (with CSNs less than or equal to C) to every replication area P,
where P is a superset of R.

We can use these results to obtain the following rule for maintaining
multiple update vectors in the one server, which for the sake of argument
I will call the update vector cascade rule:

  Given that S is receiving updates for replication area R, when S
  receives an update with a CSN containing a replica ID of T.Q it shall
  revise the CSN corresponding to T.Q in UV(S,R) and in every UV(S,P)
  where P is a subset of R. If Q is a subset of, or the same as, R then
  S shall revise the CSN corresponding to T.Q in every UV(S,P) where P
  is a superset of R.

For each update we need to be able to determine the replication area to
which it has been applied. Provided the replica and replication area
administrative objects are available, a lookup using the replica ID
in the CSN associated with the update can give us the replication area.
We also need to be able to determine the supersets and subsets of the
replication area; these can be precalculated and cached from examination
of the replication area objects.
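
The cascade rule lends itself to a direct implementation. The sketch below
is hypothetical (integer CSNs, replica IDs modelled as (server, area)
pairs; nothing here is taken from a draft), with the subset/superset
relations precomputed as just described:

```python
def cascade(uvs, subsets, supersets, session_area, origin_replica, csn):
    """Revise a server's update vectors after it receives an update.

    uvs            -- {area: {replica_id: highest CSN seen}} for each area held
    subsets        -- {area: set of areas that are proper subsets of it}
    supersets      -- {area: set of areas that are proper supersets of it}
    session_area   -- R, the area for which the update is being received
    origin_replica -- T.Q, the replica ID in the update's CSN, as (server, area)
    csn            -- the update's CSN, simplified to an integer
    """
    def bump(area):
        uv = uvs[area]
        uv[origin_replica] = max(uv.get(origin_replica, 0), csn)

    # Revise UV(S,R) and every UV(S,P) where P is a subset of R.
    bump(session_area)
    for p in subsets[session_area]:
        bump(p)

    # If Q is a subset of, or the same as, R, also revise every UV(S,P)
    # where P is a superset of R.
    q = origin_replica[1]
    if q == session_area or q in subsets[session_area]:
        for p in supersets[session_area]:
            bump(p)

# Two areas, R2 a subset of R1, as in the running example.
subsets = {"R1": {"R2"}, "R2": set()}
supersets = {"R1": set(), "R2": {"R1"}}

# U2 originated by S2 within R2 (Q = R = R2): both vectors are revised.
uvs = {"R1": {}, "R2": {}}
cascade(uvs, subsets, supersets, "R2", ("S2", "R2"), 2)
assert uvs["R2"][("S2", "R2")] == 2 and uvs["R1"][("S2", "R2")] == 2
```

Note that when the originating area Q is R1 but the session area is R2, the
superset branch is skipped, which is exactly what keeps UV(S1,R1) honest in
the second walkthrough below.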

Now I'll show how the new architecture supports the topology of the
original example. Conceptually, S1 will now hold two replicas S1.R1 and
S1.R2, and two update vectors UV(S1,R1) and UV(S1,R2). S2 will now hold
two replicas S2.R1 and S2.R2, and two update vectors UV(S2,R1) and
UV(S2,R2). S3 holds only one replica S3.R2 and one update vector UV(S3,R2).

The update U1 is within R1 but outside R2 so this update is necessarily
applied to the replica S2.R1. The replica ID in the CSN for U1 will be
S2.R1.

In applying the update U2, S2 has a choice between replicas S2.R1 and
S2.R2 since U2 is within both R1 and R2. The detailed steps following
each choice are different, but the final outcome is always the same.
Note that S2 doesn't hold duplicates of all the entries and attributes
in R2. U2 acts on the same instance of the target entry and its attributes
regardless of the selected replica. The only material difference is the
replica ID that goes into the CSN generated for U2.

Firstly, I'll run through what happens if S2 chooses to apply U2 within R2.
The replica ID in the CSN for U2 will be S2.R2. S2 will set the CSN
corresponding to S2.R2 in UV(S2,R2) to be the CSN for U2. It will also
set the CSN corresponding to S2.R2 in UV(S2,R1) to be the CSN for U2.
This is the result of S2 applying the cascade rule to itself (R = Q = R2,
S = T = S2).

S3 and S2 establish replication sessions to exchange updates to replication
area R2. As before, S3 has no changes to send, and S2 will send U2 but
will not send U1. S3 sets the CSN corresponding to S2.R2 in UV(S3,R2) to
the CSN for U2.

S3 and S1 establish replication sessions to exchange updates to replication
area R2. S1 has no changes to send, as before. S3 sends U2 since the CSN
on U2 is more recent than the CSN corresponding to S2.R2 in UV(S1,R2).
S1 will set the CSN corresponding to S2.R2 in UV(S1,R2) to be the CSN for
U2. S1 will also set the CSN corresponding to S2.R2 in UV(S1,R1) by
application of the cascade rule (S = S1, T = S2, R = Q = R2).

If S2 establishes a replication session with S1 to send updates to
replication area R1 it will obtain UV(S1,R1). S2 will send U1 to S1 since
the CSN for U1 is greater than the CSN corresponding to S2.R1 in UV(S1,R1).
It won't send U2 since the CSN corresponding to S2.R2 in UV(S1,R1) already
has the value of the CSN for U2.

So S3 gets U2 and S1 gets both U1 and U2, exactly as it should be.
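
This first walkthrough can be checked end to end with a small simulation.
Everything here is an illustrative model under the same simplifications as
before (integer CSNs, (server, area) replica IDs); `originate` and `push`
are hypothetical helpers standing in for a client update and a one-way
replication session:

```python
CONTAINING = {"R1": {"R1"}, "R2": {"R1", "R2"}}   # area -> areas that contain it

class Server:
    def __init__(self, name, areas):
        self.name, self.areas = name, areas
        self.updates = []                          # (csn, replica_id, applied area)
        self.uv = {a: {} for a in areas}           # one update vector per area

    def cascade(self, session_area, rid, csn):
        """The update vector cascade rule, with R = session_area, T.Q = rid."""
        q = rid[1]                                 # originating area Q
        for p in self.uv:
            p_subset_of_r = session_area in CONTAINING[p]
            p_superset_of_r = p in CONTAINING[session_area]
            q_within_r = session_area in CONTAINING[q]
            if p_subset_of_r or (p_superset_of_r and q_within_r):
                self.uv[p][rid] = max(self.uv[p].get(rid, 0), csn)

    def originate(self, csn, area):
        """Apply a client update within `area` (so R = Q for the cascade)."""
        rid = (self.name, area)
        self.updates.append((csn, rid, area))
        self.cascade(area, rid, csn)

def push(src, dst, area):
    """One replication session: src sends dst the updates within `area` that
    are newer than the corresponding CSNs in dst's vector for `area`."""
    for csn, rid, a in sorted(src.updates):
        if area in CONTAINING[a] and csn > dst.uv[area].get(rid, 0):
            dst.updates.append((csn, rid, a))
            dst.cascade(area, rid, csn)

s1 = Server("S1", {"R1", "R2"})
s2 = Server("S2", {"R1", "R2"})
s3 = Server("S3", {"R2"})

s2.originate(1, "R1")   # U1, wholly outside R2; replica ID S2.R1
s2.originate(2, "R2")   # U2, within R2; S2 chooses to apply it within R2

push(s2, s3, "R2")      # S3 gets U2 only
push(s3, s1, "R2")      # S1 gets U2; the cascade also revises UV(S1,R1)
push(s2, s1, "R1")      # S2 sends U1 but not U2

assert (1, ("S2", "R1"), "R1") in s1.updates
assert (2, ("S2", "R2"), "R2") in s1.updates
assert len(s1.updates) == 2   # U2 arrived exactly once in this ordering
```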

Now, I'll run through what happens if S2 chooses to apply U2 within R1.
The replica ID in the CSN for U2 will be S2.R1. S2 will set the CSN
corresponding to S2.R1 in UV(S2,R1) to be the CSN for U2. It will also
set the CSN corresponding to S2.R1 in UV(S2,R2) to be the CSN for U2.
This is the result of S2 applying the cascade rule to itself (R = Q = R1,
S = T = S2).

S3 and S2 establish replication sessions to exchange updates to replication
area R2. As before, S3 has no changes to send, and S2 will send U2 but
will not send U1. S3 sets the CSN corresponding to S2.R1 in UV(S3,R2) to
the CSN for U2.

S3 and S1 establish replication sessions to exchange updates to replication
area R2. S1 has no changes to send, as before. S3 sends U2 since the CSN
on U2 is more recent than the CSN corresponding to S2.R1 in UV(S1,R2).
S1 will set the CSN corresponding to S2.R1 in UV(S1,R2) to be the CSN for
U2. Application of the cascade rule (S = S1, T = S2, R = R2, Q = R1)
results in NO changes to UV(S1,R1). We now have the situation that U2 is
notionally present in the replica S1.R2 but not in the replica S1.R1. As
I indicated earlier, a server doesn't hold duplicates of entries and
attributes that are in multiple replication areas. If S1 engages in any
replication sessions with other servers for the replication area R1 it
must exclude any changes with CSNs greater than the relevant CSN in
UV(S1,R1). This includes U2, which in this scenario was originally applied
within R1 and has so far been received only via S3.

If S2 establishes a replication session with S1 to send updates to
replication area R1 it will obtain UV(S1,R1). S2 will send both U1 and U2
to S1 since the CSNs for U1 and U2 are greater than the CSN corresponding
to S2.R1 in UV(S1,R1). S1 will receive U2 twice but URP will quickly
ignore the duplicate. Importantly, S1 will set the CSN corresponding to
S2.R1 in UV(S1,R1) to be the CSN for U2.

We can avoid the duplication if we arrange for S2 to obtain both UV(S1,R1)
and UV(S1,R2) (in general, any UV(S1,P) where P is a subset of R1) at the
start of the replication session, but we must still make some provision
for UV(S1,R1) to be revised correctly. It will be easier to just accept
that there may be some harmless duplication.

This example is too simple and narrow to show it, but in general applying
an update to the smallest subset replication area (e.g. R2 instead of R1)
allows it to propagate through more paths, more quickly, and with less
duplication.

The following topology is more interesting but I'll leave example
walkthroughs as an exercise for the reader.

       S0
      /  \
     /    \
    /      \
  S1        S2
    \      /
     \    /
      \  /
       S3

Servers S0, S1 and S2 hold full copies of replication area R1 in some
replication context. S3 holds replication area R2, a subset of R1, in
the same replication context.

In this situation S0 only has replication agreements for R1 and therefore
only needs to maintain one update vector, UV(S0,R1). It doesn't need to
bother with R2. This is a particularly useful result because it means that
a server, having entered into a replication agreement with some peer
server, isn't significantly affected by the replication agreements the peer
server might make with yet other servers.


The extended architecture described here also provides a mechanism for
supporting expedited changes, which aren't possible in the current
architecture. The information whose changes are to be expedited is
set up as a subset replication area. Additional replication agreements
are established with the peer servers for this subset replication area,
presumably with on-change replication schedules. Updates to the subset
area get propagated immediately, while other updates propagate less
frequently, but eventually all updates get through.

The extended architecture can also handle replication areas that are
subordinate subtrees in a replication context without needing to make
the subordinate subtree a separate replication context.







From owner-ietf-ldup@mail.imc.org  Wed Dec 12 10:18:18 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id KAA08223
	for <ldup-archive@odin.ietf.org>; Wed, 12 Dec 2001 10:18:18 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fBCF0Qt07916
	for ietf-ldup-bks; Wed, 12 Dec 2001 07:00:26 -0800 (PST)
Received: from mailgate.rdg.opengroup.org (mailgate.rdg.opengroup.org [192.153.166.4])
	by above.proper.com (8.11.6/8.11.3) with SMTP id fBCF0P207911
	for <ietf-ldup@imc.org>; Wed, 12 Dec 2001 07:00:25 -0800 (PST)
Received: by mailgate.rdg.opengroup.org; id AA07015; Wed, 12 Dec 2001 15:00:36 GMT
Received: from dhcp192-153-166-48.rdg.opengroup.org [192.153.166.48] by smtp.opengroup.org via smtpd  V1.38 (00/07/25 13:18:13) for <ietf-ldup@imc.org> ; Wed Dec 12 15:00 GMT 2001
Message-Id: <5.1.0.14.1.20011212143833.00a9b460@mailhome.rdg.opengroup.org>
X-Sender: cjh@mailhome.rdg.opengroup.org
X-Mailer: QUALCOMM Windows Eudora Version 5.1
Date: Wed, 12 Dec 2001 14:38:35 +0000
To: ietf-ldapext@netscape.com, ietf-ldapbis@OpenLDAP.org, ietf-ldup@imc.org
From: Chris Harding <c.harding@opengroup.org>
Subject: Identity Management: Building Integrated Information
  Infrastructure
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"; format=flowed
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


Hi -

The Open Group's Directory Interoperability Forum, EMA Forum, Mobile 
Messaging Forum and Security Forum, Present:

             Identity Management: Building Integrated Information 
Infrastructure

Join The Open Group January 21-25, 2002 in Anaheim, CA as we focus on 
Integrated Information Infrastructure.

Over Days 1 and 2, the topic will be explored through general sessions, 
business scenarios, standards and certifications.

On Day 3, the Directory Interoperability Forum (DIF), in conjunction with 
the EMA Forum, the Mobile Messaging Forum and the Security Forum, will 
further examine the specific role of Identity Management in creating an 
Integrated Information Infrastructure through sessions with end-users, 
vendors and panel discussions. The agenda is at 
http://www.opengroup.org/dif/dirday13/idmgmt.htm

Day 4 consists of working group sessions, which are available by
invitation.

On Sunday January 20th, immediately prior to the conference, there will be 
a 1-day seminar: Enterprise Directories - Preparing For The Next Generation.

Learn about the latest integration and identity management solutions and 
network with end-users and vendors from leading organizations around the world.

Register before December 21, 2001 to receive an early bird discount on 
conference fees.

To learn more about The Open Group and its forums, please visit us on the 
web at http://www.opengroup.org.

Where: Hilton Anaheim, Anaheim CA
When: January 23, 2002 (as part of the Open Group Conference January 21-25, 
2002)

Building on the learning of Days One and Two, Day Three of the Conference
will focus specifically on the role of Identity Management in creating an
Integrated Information Infrastructure.

Through a full day of open sessions, end-user companies will discuss the 
identity management issues they are facing today, vendors will present the 
ways they are addressing them and a business case will be presented. 
Further issues in identity management will be explored by a panel of 
end-users and vendors. Learn the latest about identity management and 
connect with other Directory professionals from the leading vendor and 
end-user organizations.

Some of the topics that will be covered include:
    - How do you control access, based on people's roles?
    - How do you keep track of people who are always on the move?
    - How do you deliver facilities to meet people's personal needs and 
preferences?
    - Will Passport or Liberty Alliance work?
    - Are traditional X.500 and LDAP solutions appropriate?
    - Should administration be shared or delegated?
    - How do you leverage legacy and "incompatible" data architectures?
    - Is there such a thing as 100% security?

Register Now!

Early Bird Discounts are available until December 21, 2001.

The DIF will be holding working group sessions on January 24, 2002, 
addressing ongoing efforts that are influencing emerging industry 
standards. Want to learn more or join one of the groups? Contact Chris 
Harding at +44 118 9508311 x 2262 or send an email to c.harding@opengroup.org.

Regards,

Chris
+++++

========================================================================
            Dr. Christopher J. Harding
   T H E    Executive Director for the Directory Interoperability Forum
  O P E N   Apex Plaza, Forbury Road, Reading RG1 1AX, UK
G R O U P  Mailto:c.harding@opengroup.org Phone:  +44 118 950 8311 x2262
            WWW: http://www.opengroup.org Mobile: +44 774 063 1520
========================================================================
The Open Access Conference Series
In3 - Integrated Information Infrastructure - 2002
Hilton Anaheim, California, USA, 21-25 January 2002
http://www.opengroup.org/Anaheim2002
========================================================================



From owner-ietf-ldup@mail.imc.org  Wed Dec 12 11:01:38 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id LAA11611
	for <ldup-archive@odin.ietf.org>; Wed, 12 Dec 2001 11:01:37 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fBCFk5j09385
	for ietf-ldup-bks; Wed, 12 Dec 2001 07:46:05 -0800 (PST)
Received: from gorilla.mchh.siemens.de (gorilla.mchh.siemens.de [194.138.158.18])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fBCFk2209377
	for <ietf-ldup@imc.org>; Wed, 12 Dec 2001 07:46:02 -0800 (PST)
Received: from moody.mchh.siemens.de (mail2.mchh.siemens.de [194.138.158.226])
	by gorilla.mchh.siemens.de (8.9.3/8.9.3) with ESMTP id QAA15588;
	Wed, 12 Dec 2001 16:46:01 +0100 (MET)
Received: from mchh273e.demchh201e.icn.siemens.de ([139.21.200.83])
	by moody.mchh.siemens.de (8.9.1/8.9.1) with ESMTP id QAA20473;
	Wed, 12 Dec 2001 16:46:01 +0100 (MET)
Received: by MCHH273E with Internet Mail Service (5.5.2653.19)
	id <X35MNYTD>; Wed, 12 Dec 2001 16:46:00 +0100
Message-ID: <1D82815C322BD41196EA00508B951F7B01B40CDA@MCHH265E>
From: Fantou Patrick <patrick.fantou@icn.siemens.de>
To: "'Mark Wahl'" <Mark.Wahl@Sun.COM>, "Slone, Skip" <skip.slone@lmco.com>
Cc: "LDUP Mailing List (ietf-ldup@imc.org)" <ietf-ldup@imc.org>
Subject: AW: Follow-up on Naming of Subentries
Date: Wed, 12 Dec 2001 16:46:00 +0100
MIME-Version: 1.0
X-Mailer: Internet Mail Service (5.5.2653.19)
Content-Type: text/plain;
	charset="ISO-8859-1"
Content-Transfer-Encoding: 8bit
X-MIME-Autoconverted: from quoted-printable to 8bit by above.proper.com id fBCFk3209380
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>
Content-Transfer-Encoding: 8bit


Hi Mark,

I cannot quite follow the reasoning behind your requirement
to use a naming attribute other than cn for a subentry.
In fact a good administrator will choose a name for a subentry which says something
about the content of the subentry, for instance
cn=subschemaSubentry or cn=accesscontrolSubentry,
independently of the naming attribute used, and this, more than the naming attribute itself,
would avoid conflicts.

Patrick Fantou
Siemens

> -----Original Message-----
> From: Mark Wahl [mailto:Mark.Wahl@Sun.COM]
> Sent: Tuesday, 11 December 2001 23:44
> To: Slone, Skip
> Cc: LDUP Mailing List (ietf-ldup@imc.org)
> Subject: Re: Follow-up on Naming of Subentries
> 
> 
> 
> > "Slone, Skip" wrote:
> >
> > We discussed these topics at the X.500 meeting last month.  
> The consensus of
> > the group was that although we're not necessarily opposed 
> to either of these
> > changes, we need to better understand the requirements, and 
> we want to make
> > sure that by making the changes, we're not creating 
> unintended additional
> > semantics.
> 
> Having typed attributes in X.500/LDAP is one of the main 
> advantages of 
> these directories over more primitive data models such as 
> DNS. However,
> since I believe Subentries are Readable in X.500, the 
> presence of a subentry
> would prevent an object entry existing in the server with the 
> same name.  It
> is unfortunate that cn is the naming attribute for subentries 
> and not something
> more 'operational', since cn is frequently used as a naming 
> attribute for 
> object entries.  This means that an application which wants 
> to create an object
> entry named by cn must not only check to see whether there is 
> an object 
> entry with the same name, but also use a subentry-aware 
> operation to check 
> that there is no subentry with the same name.  This isn't 
> worthwhile since 
> applications which are bulk-loading object entries may not 
> care about 
> subentries if they are gateways from other protocols or data models.  
> It would be FAR better to leverage this advantage and allow a more
> administrator/operational-oriented naming attribute such as 
> 'subentryname'
> for naming subentries.  This attribute would be intended 
> exclusively for
> use by subentries and so would never conflict with object 
> entries named
> by cn.
> 
> Mark Wahl
> Sun Microsystems Inc.
> 


From owner-ietf-ldup@mail.imc.org  Wed Dec 12 12:19:28 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id MAA13563
	for <ldup-archive@odin.ietf.org>; Wed, 12 Dec 2001 12:19:28 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fBCGrvb18212
	for ietf-ldup-bks; Wed, 12 Dec 2001 08:53:57 -0800 (PST)
Received: from patan.sun.com (patan.Sun.COM [192.18.98.43])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fBCGru218207
	for <ietf-ldup@imc.org>; Wed, 12 Dec 2001 08:53:56 -0800 (PST)
Received: from rowlf.Central.Sun.COM ([129.153.131.70])
	by patan.sun.com (8.9.3+Sun/8.9.3) with ESMTP id JAA01817;
	Wed, 12 Dec 2001 09:53:39 -0700 (MST)
Received: from sun.com (vpn-129-147-152-135.Central.Sun.COM [129.147.152.135])
	by rowlf.Central.Sun.COM (8.10.2+Sun/8.10.2/ENSMAIL,v2.1p1) with ESMTP id fBCGrsl06578;
	Wed, 12 Dec 2001 10:53:55 -0600 (CST)
Message-ID: <3C178949.A7D9CEB3@sun.com>
Date: Wed, 12 Dec 2001 10:43:54 -0600
From: Mark Wahl <Mark.Wahl@Sun.COM>
X-Mailer: Mozilla 4.77 [en] (WinNT; U)
X-Accept-Language: en
MIME-Version: 1.0
To: Fantou Patrick <patrick.fantou@icn.siemens.de>
CC: "Slone, Skip" <skip.slone@lmco.com>,
        "LDUP Mailing List (ietf-ldup@imc.org)" <ietf-ldup@imc.org>
Subject: Re: AW: Follow-up on Naming of Subentries
References: <1D82815C322BD41196EA00508B951F7B01B40CDA@MCHH265E>
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>
Content-Transfer-Encoding: 7bit




Fantou Patrick wrote:
> 
> Hi Mark,
> 
> I cannot quite follow the reasoning behind your requirement
> to use a naming attribute other than cn for a subentry.
> In fact a good administrator will choose a name for a subentry which says something
> about the content of the subentry, for instance
> cn=subschemaSubentry or cn=accesscontrolSubentry,
> independently of the naming attribute used, and this, more than the naming attribute itself,
> would avoid conflicts.

The difficulty comes when subentries and object entries are being
automatically created by two different applications. These applications
would need some algorithmic way of coming up with attribute values, but
it doesn't seem appropriate to use 'cn' for such things if there is
some other attribute type that better describes what the application wants
to use. Forcing them to use cn with a value which is perhaps some string
encoding of a GUID/UUID seems to counter the benefit of having typed RDNs.


Mark Wahl
Sun Microsystems Inc.



From owner-ietf-ldup@mail.imc.org  Thu Dec 13 19:06:47 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id TAA02513
	for <ldup-archive@lists.ietf.org>; Thu, 13 Dec 2001 19:06:47 -0500 (EST)
Received: by above.proper.com (8.11.6/8.11.3) id fBDNmOY28425
	for ietf-ldup-bks; Thu, 13 Dec 2001 15:48:24 -0800 (PST)
Received: from prv-mail20.provo.novell.com (prv-mail20.provo.novell.com [137.65.81.122])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fBDNmN228416
	for <ietf-ldup@imc.org>; Thu, 13 Dec 2001 15:48:23 -0800 (PST)
Received: from INET-PRV-MTA by prv-mail20.provo.novell.com
	with Novell_GroupWise; Thu, 13 Dec 2001 16:49:42 -0700
Message-Id: <sc18dc26.058@prv-mail20.provo.novell.com>
X-Mailer: Novell GroupWise Internet Agent 6.0.1
Date: Thu, 13 Dec 2001 16:44:53 -0700
From: "Jim Sermersheim" <JIMSE@novell.com>
To: <ietf-ldup@imc.org>
Subject: Referrals and draft-ietf-ldup-lcup-02.txt
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>
Content-Transfer-Encoding: 7bit


We need to state that the client may receive a referral when the search
base is a subordinate reference, and point out that this will end the
operation.


From owner-ietf-ldup@mail.imc.org  Thu Dec 13 19:27:55 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id TAA02796
	for <ldup-archive@lists.ietf.org>; Thu, 13 Dec 2001 19:27:54 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fBE0CbW29347
	for ietf-ldup-bks; Thu, 13 Dec 2001 16:12:37 -0800 (PST)
Received: from smtp.oncalldba.com (roc-24-169-98-153.rochester.rr.com [24.169.98.153])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fBE0CY229337
	for <ietf-ldup@imc.org>; Thu, 13 Dec 2001 16:12:34 -0800 (PST)
Received: from RMINC_DOM-MTA by smtp.oncalldba.com
	with Novell_GroupWise; Thu, 13 Dec 2001 19:02:38 -0700
Message-Id: <sc18fb4e.094@smtp.oncalldba.com>
X-Mailer: Novell GroupWise Internet Agent 6.0
Date: Thu, 13 Dec 2001 19:02:30 -0700
From: "Ed Reed" <eer@OnCallDBA.COM>
To: <ietf-ldup@imc.org>
Subject: My slide
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=_BEE396AE.680964CE"
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=_BEE396AE.680964CE
Content-Type: text/plain; charset=US-ASCII
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

Here's the slide I used...

=================
Ed Reed
Reed-Matthews, Inc.
+1 585 624 2402
http://www.Reed-Matthews.COM
Note:  Area code is 585


--=_BEE396AE.680964CE
Content-Type: application/vnd.ms-powerpoint; name="LDUP Model (Arch).ppt"
Content-Disposition: attachment; filename="LDUP Model (Arch).ppt"
Content-Transfer-Encoding: base64

[Base64-encoded attachment "LDUP Model (Arch).ppt" omitted. Recoverable
slide text: title "LDUP Model (Arch) / Requirements Coverage Matrix",
author Ed Reed; slide "Results of Analysis": Excel spreadsheet holds
data, published via Acrobat PDF file for reviewers; Model Document is
"behind the times"; there are requirements not met by Model; there are
requirements that state-based replication will not meet; there are
requirements that log-based replication will need careful implementation
to meet.]
////////////////////////////////////////////////////////////////////////////
//////////////////////////////////////////////////////////7/AAAFAAIAAAAAAAAA
AAAAAAAAAAAAAAEAAADghZ/y+U9oEKuRCAArJ7PZMAAAAGAIAAALAAAAAQAAAGAAAAACAAAAaAAA
AAQAAACgAAAACAAAALgAAAAJAAAA0AAAABIAAADcAAAACgAAAPwAAAAMAAAACAEAAA0AAAAUAQAA
DwAAACABAAARAAAAKAEAAAIAAADkBAAAHgAAAC8AAABMRFVQIE1vZGVsIChBcmNoKSBSZXF1aXJl
bWVudHMgQ292ZXJhZ2UgTWF0cml4AGUeAAAADwAAAEVkd2FyZHMgRSBSZWVkAGgeAAAADwAAAEVk
d2FyZHMgRSBSZWVkAGgeAAAAAgAAADEAd2EeAAAAFQAAAE1pY3Jvc29mdCBQb3dlclBvaW50AHVp
ckAAAADg9wUEAAAAAEAAAADQ5GujA4TBAUAAAABwBHunA4TBAQMAAAA9AAAARwAAAC4HAAD/////
AwAAAAgAiRBnDAAAAQAJAAADjwMAAAYAMQAAAAAAEQAAACYGDwAYAP////8AABAAAAAAAAAAAADA
AwAA0AIAAAkAAAAmBg8ACAD/////AgAAABcAAAAmBg8AIwD/////BAAbAFROUFAUAKAAugAyAAAA
//9PABQAAABNAGkAAAAKAAAAJgYPAAoAVE5QUAAAAgD0AwkAAAAmBg8ACAD/////AwAAAA8AAAAm
Bg8AFABUTlBQBAAMAAEAAAABAAAAAAAAAAUAAAALAgAAAAAFAAAADALQAsADBQAAAAQBDQAAAAcA
AAD8AgAA////AAAABAAAAC0BAAAIAAAA+gIFAAEAAAAAAAAABAAAAC0BAQAEAAAALQEAAAkAAAAd
BiEA8ADQAsADAAAAAAQAAAAtAQAABwAAAPwCAAD///8AAAAEAAAALQECAAQAAADwAQAACAAAAPoC
AAAAAAAAAAAAAAQAAAAtAQAAEAAAACYGDwAWAP////8AAEcAAACPAgAAEQEAAMECAAAIAAAAJgYP
AAYA/////wEAHAAAAPsCAAAAAAAAAAAAAAAAAAAAAAAAAM0SAJJx9XdAAAAAFAYKVkxT9XdVU/V3
AQAAAAAAMAAEAAAALQEDAAUAAAAJAgAAAAIFAAAAFAIAAAAABQAAAAIBAgAAABAAAAAmBg8AFgD/
////AABHAQAAjwIAAHkCAADBAgAACAAAACYGDwAGAP////8BAAUAAAAJAgAAAAIFAAAAFAIAAAAA
BQAAAAIBAgAAAAcAAAD8AgEAAAAAAAAABAAAAC0BBAAEAAAALQEBAAcAAAAbBGkBeQPwAEgABAAA
AC0BAgAEAAAALQEAAAUAAAAJAgAAAAIFAAAAFAIAAAAAHAAAAPsCxf8AAAAAAACQAQAAAAAAQAAA
VGltZXMgTmV3IFJvbWFuAExT9XdVU/V3AQAAAAAAMAAEAAAALQEFAAQAAADwAQMABQAAAAkCAAAA
AgUAAAAUAgAAAAAFAAAALgEYAAAABQAAAAIBAQAAACEAAAAyCiQB6gARAAAATERVUCBNb2RlbCAo
QXJjaCkAIgArACsAIQAPADQAHQAeABoADwAPABQAKgAUABoAHgAUAAUAAAAuAQEAAAAFAAAAAgEC
AAAAHAAAAPsC0P8AAAAAAACQAQAAAAAAQAAAVGltZXMgTmV3IFJvbWFuAExT9XdVU/V3AQAAAAAA
MAAEAAAALQEDAAQAAADwAQUABQAAAAkCAAAAAgUAAAAUAgAAAAAFAAAALgEYAAAABQAAAAIBAQAA
ADEAAAAyCmEBsgAcAAAAUmVxdWlyZW1lbnRzIENvdmVyYWdlIE1hdHJpeCAAFQAYABkADAAQABUA
JgAWABcADQATAAwAIQAYABcAFQAQABUAGQAVAAwAKwAVAA0AEQAMABgABQAAAC4BAQAAAAUAAAAC
AQIAAAAFAAAAAgECAAAABAAAAC0BBAAEAAAALQEBAAcAAAAbBFECMQOYAZAABAAAAC0BAgAEAAAA
LQEAAAUAAAAJAgAAAAIFAAAAFAIAAAAAHAAAAPsC1f8AAAAAAACQAQAAAAAAQAAAVGltZXMgTmV3
IFJvbWFuAExT9XdVU/V3AQAAAAAAMAAEAAAALQEFAAQAAADwAQMABQAAAAkCAAAAAgUAAAAUAgAA
AAAFAAAALgEYAAAABQAAAAIBAQAAABIAAAAyCsYBmAEHAAAARWQgUmVlZAIaABUACwAdABIAEwAV
AAUAAAAuAQEAAAAFAAAAAgECAAAABQAAAAkCAAAAAgUAAAAUAgAAAAAFAAAALgEYAAAABQAAAAIB
AQAAAAwAAAAyCgMCMgEDAAAARWVyWRoAEgAOAAUAAAAuAQEAAAAFAAAAAgECAAAABQAAAAkCAAAA
AgUAAAAUAgAAAAAFAAAALgEYAAAABQAAAAIBAQAAAAkAAAAyCgMCbAEBAAAAQAAoAAUAAAAuAQEA
AAAFAAAAAgECAAAABQAAAAkCAAAAAgUAAAAUAgAAAAAFAAAALgEYAAAABQAAAAIBAQAAABUAAAAy
CgMClAEJAAAAb25jYWxsZGJhABUAFgASABMADAAMABUAFgATAAUAAAAuAQEAAAAFAAAAAgECAAAA
BQAAAAkCAAAAAgUAAAAUAgAAAAAFAAAALgEYAAAABQAAAAIBAQAAAA0AAAAyCgMCOgIEAAAALmNv
bQsAEgAWACEABQAAAC4BAQAAAAUAAAACAQIAAAAFAAAAAgECAAAABAAAAC0BAQAEAAAALQEEABwA
AAD7AhAABwAAAAAAvAIAAAAAAQICIlN5c3RlbQAAAAAKAAAABAAAAAAAAwAAAAEAAAAAADAABAAA
AC0BAwAEAAAA8AEFAA8AAAAmBg8AFABUTlBQBAAMAAAAAAAAAAAAAAAAAAkAAAAmBg8ACAD/////
AQAAAAMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAA/v8AAAUAAgAAAAAAAAAAAAAAAAAAAAAAAQAAAALVzdWcLhsQk5cIACss+a4wAAAA+AEAABAA
AAABAAAAiAAAAAMAAACQAAAADwAAAKgAAAAEAAAAxAAAAAYAAADMAAAABwAAANQAAAAIAAAA3AAA
AAkAAADkAAAACgAAAOwAAAAXAAAA9AAAAAsAAAD8AAAAEAAAAAQBAAATAAAADAEAABYAAAAUAQAA
DQAAABwBAAAMAAAAlgEAAAIAAADkBAAAHgAAAA8AAABPbi1zY3JlZW4gU2hvdwAAHgAAABQAAABS
ZWVkLU1hdHRoZXdzLCBJbmMuAAMAAACrEwAAAwAAAAkAAAADAAAAAgAAAAMAAAAAAAAAAwAAAAAA
AAADAAAAAAAAAAMAAACgCgkACwAAAAAAAAALAAAAAAAAAAsAAAAAAAAACwAAAAAAAAAeEAAABAAA
ABAAAABUaW1lcyBOZXcgUm9tYW4ADwAAAERlZmF1bHQgRGVzaWduAC8AAABMRFVQIE1vZGVsIChB
cmNoKSBSZXF1aXJlbWVudHMgQ292ZXJhZ2UgTWF0cml4ABQAAABSZXN1bHRzIG9mIEFuYWx5c2lz
AAwQAAAGAAAAHgAAAAsAAABGb250cyBVc2VkAAMAAAABAAAAHgAAABAAAABEZXNpZ24gVGVtcGxh
dGUAAwAAAAEAAAAeAAAADQAAAFNsaWRlIFRpdGxlcwADAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAD2DyYAAAAUAAAAX8CR44cTAAAOAPQDAwBCAUVkd2FyZHMgRSBSZWVkCAAAAEUA
ZAB3AGEAcgBkAHMAIABFACAAUgBlAGUAZAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEMAdQByAHIAZQBuAHQAIABVAHMA
ZQByAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAaAAIA////////////
////AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAALAAAAEoAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAD///////////////8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAP///////////////wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA////////////////AAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

--=_BEE396AE.680964CE--


From owner-ietf-ldup@mail.imc.org  Thu Dec 13 21:34:48 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id VAA05080
	for <ldup-archive@lists.ietf.org>; Thu, 13 Dec 2001 21:34:47 -0500 (EST)
Received: by above.proper.com (8.11.6/8.11.3) id fBE2FSn07079
	for ietf-ldup-bks; Thu, 13 Dec 2001 18:15:28 -0800 (PST)
Received: from smtp.oncalldba.com (roc-24-169-98-153.rochester.rr.com [24.169.98.153])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fBE2FR207071
	for <ietf-ldup@imc.org>; Thu, 13 Dec 2001 18:15:27 -0800 (PST)
Received: from RMINC_DOM-MTA by smtp.oncalldba.com
	with Novell_GroupWise; Thu, 13 Dec 2001 21:05:33 -0700
Message-Id: <sc19181d.098@smtp.oncalldba.com>
X-Mailer: Novell GroupWise Internet Agent 6.0
Date: Thu, 13 Dec 2001 21:05:25 -0700
From: "Ed Reed" <eer@OnCallDBA.COM>
To: <ietf-ldup@imc.org>
Subject: Is State-based LDUP needed?
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
X-MIME-Autoconverted: from quoted-printable to 8bit by above.proper.com id fBE2FR207074
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


I asked this question at the ldup meeting on Thursday, and agreed to post the question to the distribution list.

Is anyone planning to implement state-based ldup?  If not - that is, if there are not going to be at least two interoperable implementations of the proposed specification, should we not remove it from the ldup design now, rather than later?

The protocol will support it, but there are certainly places in the architecture and other documents where the different handling of change information required by the state-based scheme adds unnecessary text if no one is actually going to use it.

This is a pragmatic decision - I personally like state-based schemes, even though there are things (like transaction replication) that I doubt they'll ever be able to handle well.  Also, all the implementers I know are focused on the log-based scheme instead.  It seems easier for them to get their heads around, for some reason...
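[Archive editor's note: the distinction Ed draws between the two schemes can be illustrated with a minimal sketch. This is not LDUP itself - the operation names, CSN merge rule, and data shapes below are illustrative assumptions, not anything from the drafts.]

```python
# Log-based replication: the supplier replays an ordered change log
# at the consumer, so the consumer re-executes each operation.
def apply_log(replica, log):
    for op, attr, value in log:       # operations arrive in original order
        if op == "add":
            replica[attr] = value
        elif op == "delete":
            replica.pop(attr, None)
    return replica

# State-based replication: replicas exchange current values stamped with
# a change sequence number (CSN) and merge by "last writer wins".
def merge_state(local, remote):
    for attr, (value, csn) in remote.items():
        if attr not in local or local[attr][1] < csn:
            local[attr] = (value, csn)
    return local

# Replaying a log reproduces the final state, including the delete:
log = [("add", "cn", "Ed"), ("add", "mail", "eer@example.com"),
       ("delete", "cn", None)]
print(apply_log({}, log))             # {'mail': 'eer@example.com'}

# Merging states keeps whichever write carries the higher CSN:
local = {"cn": ("Ed", 3)}
remote = {"cn": ("Edward", 5), "mail": ("eer@example.com", 2)}
print(merge_state(local, remote))
```

The transaction-replication point follows from the sketch: a log naturally preserves operation grouping and order, while a per-attribute state merge only converges on final values.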

So - I don't think it's appropriate for me to be the only one championing it, and have reached the conclusion that if we can't find even two implementers to build it, we should not bother including it in further work.

If you're planning to build it, speak up.  If not, silence may well be taken as assent to remove references to it from the various protocol documents.

Best regards,
Ed

Ps - yeah, I know, you told me so...

=================
Ed Reed
Reed-Matthews, Inc.
+1 585 624 2402
http://www.Reed-Matthews.COM
Note:  Area code is 585



From owner-ietf-ldup@mail.imc.org  Fri Dec 14 07:43:31 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id HAA28558
	for <ldup-archive@lists.ietf.org>; Fri, 14 Dec 2001 07:43:30 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fBECRDG24354
	for ietf-ldup-bks; Fri, 14 Dec 2001 04:27:13 -0800 (PST)
Received: from e1.ny.us.ibm.com (e1.ny.us.ibm.com [32.97.182.101])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fBECRC224350
	for <ietf-ldup@imc.org>; Fri, 14 Dec 2001 04:27:12 -0800 (PST)
Received: from northrelay03.pok.ibm.com (northrelay03.pok.ibm.com [9.117.200.23])
	by e1.ny.us.ibm.com (8.9.3/8.9.3) with ESMTP id HAA271278
	for <ietf-ldup@imc.org>; Fri, 14 Dec 2001 07:24:18 -0500
Received: from d01mlc96.pok.ibm.com (d01mlc96.pok.ibm.com [9.117.250.33])
	by northrelay03.pok.ibm.com (8.11.1m3/NCO v5.01) with ESMTP id fBECR6p130732
	for <ietf-ldup@imc.org>; Fri, 14 Dec 2001 07:27:07 -0500
To: ietf-ldup@imc.org
MIME-Version: 1.0
Subject: Re: Is State-based LDUP needed?
X-Mailer: Lotus Notes Release 5.0.7  March 21, 2001
From: "Timothy Hahn" <hahnt@us.ibm.com>
Message-ID: <OF7FE9DD8D.E99E192C-ON85256B22.0043982C@pok.ibm.com>
Date: Fri, 14 Dec 2001 07:27:02 -0500
X-MIMETrack: Serialize by Router on D01MLC96/01/M/IBM(Release 5.0.9 |November 26, 2001) at
 12/14/2001 07:27:07 AM,
	Serialize complete at 12/14/2001 07:27:07 AM
Content-Type: multipart/alternative; boundary="=_alternative 00440FA385256B22_="
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


This is a multipart message in MIME format.
--=_alternative 00440FA385256B22_=
Content-Type: text/plain; charset="us-ascii"

Ed,

First, I'll say that we've only been looking at log-based implementations 
- probably because we see benefits in doing it this way.

Second, it occurs to me that LDUP really doesn't need to have two 
interoperable "state-based" implementations - it just needs to have two 
interoperable implementations (regardless of state-based/log-based). Thus, 
I don't quite follow your argument that we'd need at least two state-based 
implementations to leave "state-based" references in the specs.  If/when 
we get two interoperable implementations (state-based or log-based), that 
would be enough I would think.

Either way though, if just concentrating on "log-based" moves the specs 
along quicker, I'm all for it.

Regards,
Tim Hahn

Internet: hahnt@us.ibm.com
Internal: Timothy Hahn/Endicott/IBM@IBMUS or IBMUSM00(HAHNT)
phone: 607.752.6388     tie-line: 8/852.6388
fax: 607.752.3681





"Ed Reed" <eer@OnCallDBA.COM>
Sent by: owner-ietf-ldup@mail.imc.org
12/13/2001 11:05 PM

 
        To:     <ietf-ldup@imc.org>
        cc: 
        Subject:        Is State-based LDUP needed?

 


I asked this question at the ldup meeting on Thursday, and agreed to post 
the question to the distribution list.

Is anyone planning to implement state-based ldup?  If not - that is, if 
there are not going to be at least two interoperable implementations of 
the proposed specification, should we not remove it from the ldup design 
now, rather than later?

The protocol will support it, but there are certainly places in the 
architecture and other documents where the different handling of change 
information required by the state-based scheme adds unnecessary text if 
noone is actually going to use it.

This is a pragmatic decision - I personally like state based schemes, even 
though there are things (like transaction replication) that I doubt 
they'll ever be able to handle well.  Also, all the implementers I know 
are focused on the log-based scheme, instead.  It seems easier for them to 
get their heads around, for some reason...

So - I don't think it's appropriate for me to be the only one championing 
it, and have reached the conclusion that if we can't find even two 
implementers to build it, we should not bother including it in further 
work.

If you're planning to build it, speak up.  If not, silence may well be 
taken as assent to remove references to it from the various protocol 
documents.

Best regards,
Ed

Ps - yeah, I know, you told me so...

=================
Ed Reed
Reed-Matthews, Inc.
+1 585 624 2402
http://www.Reed-Matthews.COM
Note:  Area code is 585




--=_alternative 00440FA385256B22_=--


From owner-ietf-ldup@mail.imc.org  Sat Dec 15 16:45:24 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id QAA07932
	for <ldup-archive@lists.ietf.org>; Sat, 15 Dec 2001 16:45:23 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fBFLR6m27780
	for ietf-ldup-bks; Sat, 15 Dec 2001 13:27:06 -0800 (PST)
Received: from rly-ip01.mx.aol.com (rly-ip01.mx.aol.com [205.188.156.49])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fBFLR4227776
	for <ietf-ldup@imc.org>; Sat, 15 Dec 2001 13:27:05 -0800 (PST)
Received: from logs-tq.proxy.aol.com (logs-tq.proxy.aol.com [152.163.201.5])
	  by rly-ip01.mx.aol.com (8.8.8/8.8.8/AOL-5.0.0)
	  with ESMTP id QAA26784 for <ietf-ldup@imc.org>;
	  Sat, 15 Dec 2001 16:26:05 -0500 (EST)
Received: from D7ST2111 (AC95F964.ipt.aol.com [172.149.249.100])
	by logs-tq.proxy.aol.com (8.10.0/8.10.0) with ESMTP id fBFLM2O246771
	for <ietf-ldup@imc.org>; Sat, 15 Dec 2001 16:22:07 -0500 (EST)
Reply-To: <christopher.apple@verizon.net>
From: "Chris Apple" <christopher.apple@verizon.net>
To: <ietf-ldup@imc.org>
Subject: Profile Draft Slides
Date: Sat, 15 Dec 2001 15:14:03 -0600
Message-ID: <000201c185ad$757f54c0$836197ac@D7ST2111>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Priority: 3 (Normal)
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook, Build 10.0.3311
Importance: Normal
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2600.0000
X-Apparently-From: Cwa29@aol.com
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


PowerPoint file of the slide presented during the WG meeting is
attached.

Chris Apple

Christopher.apple@verizon.net



From owner-ietf-ldup@mail.imc.org  Sat Dec 15 16:45:33 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id QAA07943
	for <ldup-archive@lists.ietf.org>; Sat, 15 Dec 2001 16:45:32 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fBFLS6D27797
	for ietf-ldup-bks; Sat, 15 Dec 2001 13:28:06 -0800 (PST)
Received: from rly-ip01.mx.aol.com (rly-ip01.mx.aol.com [205.188.156.49])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fBFLS5227793
	for <ietf-ldup@imc.org>; Sat, 15 Dec 2001 13:28:05 -0800 (PST)
Received: from logs-tq.proxy.aol.com (logs-tq.proxy.aol.com [152.163.201.5])
	  by rly-ip01.mx.aol.com (8.8.8/8.8.8/AOL-5.0.0)
	  with ESMTP id QAA27501 for <ietf-ldup@imc.org>;
	  Sat, 15 Dec 2001 16:26:23 -0500 (EST)
Received: from D7ST2111 (AC95F964.ipt.aol.com [172.149.249.100])
	by logs-tq.proxy.aol.com (8.10.0/8.10.0) with ESMTP id fBFLMBO256990
	for <ietf-ldup@imc.org>; Sat, 15 Dec 2001 16:22:12 -0500 (EST)
Reply-To: <christopher.apple@verizon.net>
From: "Chris Apple" <christopher.apple@verizon.net>
To: <ietf-ldup@imc.org>
Subject: Mandatory Replica Management charts
Date: Sat, 15 Dec 2001 15:14:03 -0600
Message-ID: <000601c185ad$799f3d40$836197ac@D7ST2111>
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="----=_NextPart_000_0007_01C1857B.2F04CD40"
X-Priority: 3 (Normal)
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook, Build 10.0.3311
Importance: Normal
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2600.0000
X-Apparently-From: Cwa29@aol.com
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


This is a multi-part message in MIME format.

------=_NextPart_000_0007_01C1857B.2F04CD40
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit

I found these after the WG meeting had concluded. My apologies.
Apparently my mail filtering rules still aren't quite right.

PowerPoint slides attached.

Chris Apple

Christopher.apple@verizon.net

------=_NextPart_000_0007_01C1857B.2F04CD40
Content-Type: application/vnd.ms-powerpoint;
	name="mrm.ppt"
Content-Disposition: attachment;
	filename="mrm.ppt"
Content-Transfer-Encoding: base64

[base64-encoded PowerPoint attachment "mrm.ppt" (Mandatory Replica Management) omitted]
YWdlbWVudCAAJgATABUAFQATAAwAFQAPABQACwAdABIAFgAMAAwAEgATAAwADAAWABUADAAgABMA
FQATABYAEwAhABMAFQAMAAsABAAAAC4BAQAEAAAAAgECAAUAAAAJAgAAAAIFAAAAFAIAAAAABAAA
AC4BGAAEAAAAAgEBABUAAAAyCjEBdgAJAAAAZnVuY3Rpb25zAA0AFgAWABIADAAMABUAFQARAAQA
AAAuAQEABAAAAAIBAgAFAAAACQIAAAACBQAAABQCAAAAAAQAAAAuARgABAAAAAIBAQAJAAAAMgpv
AVIAAQAAAJUADwAEAAAALgEBAAQAAAACAQIABQAAAAkCAAAAAgUAAAAUAgAAAAAEAAAALgEYAAQA
AAACAQEANwAAADIKbwF2ACAAAABQcmVmZXJzIHRvIHVzZSBleGlzdGluZyBMREFQIG9wcxgADgAT
AA4AEgAOABEACwAMABUACwAVABEAEgAMABIAFQAMABEADAAMABUAFQALABoAHwAeABgACwAVABYA
EQAEAAAALgEBAAQAAAACAQIABQAAAAkCAAAAAgUAAAAUAgAAAAAEAAAALgEYAAQAAAACAQEACQAA
ADIKrAFSAAEAAACVAA8ABAAAAC4BAQAEAAAAAgECAAUAAAAJAgAAAAIFAAAAFAIAAAAABAAAAC4B
GAAEAAAAAgEBAEMAAAAyCqwBdgAoAAAAV2lsbCBkZWZpbmUgbmV3IG9wcy9kYXRhIGRlZmluaXRp
b25zIGlmICkADAAMAAsACwAVABMADQAMABYAEgALABYAEwAeAAsAFQAWABEADAAVABMADAATAAsA
FQATAA0ADAAVAAwADAAMABUAFQARAAsADQANAAsABAAAAC4BAQAEAAAAAgECAAUAAAAJAgAAAAIF
AAAAFAIAAAAABAAAAC4BGAAEAAAAAgEBABUAAAAyCt8BdgAJAAAAbmVjZXNzYXJ5ABUAEwATABIA
EQARABMADwAUAAQAAAAuAQEABAAAAAIBAgAFAAAACQIAAAACBQAAABQCAAAAAAQAAAAuARgABAAA
AAIBAQAJAAAAMgocAlIAAQAAAJUADwAEAAAALgEBAAQAAAACAQIABQAAAAkCAAAAAgUAAAAUAgAA
AAAEAAAALgEYAAQAAAACAQEAHwAAADIKHAJ2ABAAAABFeHRlbmRlZCBPdXRsaW5lGgAVAAwAEgAW
ABYAEgAVAAsAHwAVAAwADAAMABYAEgAEAAAALgEBAAQAAAACAQIABAAAAAIBAgAEAAAALQEBAAQA
AAAtAQQAEAAAAPsCEAAHAAAAAAC8AgAAAAABAgIiU3lzdGVtAAAEAAAALQEFAAQAAADwAQMADwAA
ACYGDwAUAFROUFAEAAwAAAAAAAAAAAAAAAAACQAAACYGDwAIAP////8BAAAAAwAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAD+/wAABAoCAAAAAAAAAAAAAAAA
AAAAAAABAAAAAtXN1ZwuGxCTlwgAKyz5rjAAAADQAQAAEAAAAAEAAACIAAAAAwAAAJAAAAAPAAAA
qAAAAAQAAAC0AAAABgAAALwAAAAHAAAAxAAAAAgAAADMAAAACQAAANQAAAAKAAAA3AAAABcAAADk
AAAACwAAAOwAAAAQAAAA9AAAABMAAAD8AAAAFgAAAAQBAAANAAAADAEAAAwAAABuAQAAAgAAAOQE
AAAeAAAADwAAAE9uLXNjcmVlbiBTaG93AAAeAAAAAQAAAABuLXMDAAAA7xIAAAMAAAAMAAAAAwAA
AAIAAAADAAAAAAAAAAMAAAAAAAAAAwAAAAAAAAADAAAAMhEJAAsAAAAAAAAACwAAAAAAAAALAAAA
AAAAAAsAAAAAAAAAHhAAAAQAAAAQAAAAVGltZXMgTmV3IFJvbWFuAA8AAABEZWZhdWx0IERlc2ln
bgAdAAAATWFuZGF0b3J5IFJlcGxpY2EgTWFuYWdlbWVudAAOAAAARGVzaXJlZCBJbnB1dAAMEAAA
BgAAAB4AAAALAAAARm9udHMgVXNlZAADAAAAAQAAAB4AAAAQAAAARGVzaWduIFRlbXBsYXRlAAMA
AAABAAAAHgAAAA0AAABTbGlkZSBUaXRsZXMAAwAAAAIAAAAAAEMAdQByAHIAZQBuAHQAIABVAHMA
ZQByAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAaAAIA////////////
////AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMAAAAFAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAD///////////////8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAP///////////////wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA////////////////AAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAD2DygAAAAUAAAAX8CR48sSAAAQ
APQDAwB0AlRoZSBNb2F0cyBGYW1pbHkIAAAAVABoAGUAIABNAG8AYQB0AHMAIABGAGEAbQBpAGwA
eQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABSAG8AbwB0ACAARQBuAHQAcgB5AAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFgAFAP//////////AQAA
ABCNgWSbT88RhuoAqgC5KegAAAAAAAAAAAAAAAAgE10qp4XBAQ0AAACADgAAAAAAAFAAbwB3AGUA
cgBQAG8AaQBuAHQAIABEAG8AYwB1AG0AZQBuAHQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAo
AAIBAgAAAAMAAAD/////AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAO8S
AAAAAAAABQBTAHUAbQBtAGEAcgB5AEkAbgBmAG8AcgBtAGEAdABpAG8AbgAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAACgAAgEEAAAA//////////8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAA1AkAAAAAAAAFAEQAbwBjAHUAbQBlAG4AdABTAHUAbQBtAGEAcgB5AEkAbgBm
AG8AcgBtAGEAdABpAG8AbgAAAAAAAAAAAAAAOAACAf///////////////wAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAACgAAADUAwAAAAAAAP//////////AwAAAAQAAAAFAAAABgAA
AAcAAAAIAAAACQAAAAoAAAALAAAA/v////////8OAAAADwAAABAAAAARAAAAGQAAAP/////+////
/////xMAAAD9/////v////7///8aAAAAGAAAAP//////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////AQAAAAIAAAADAAAABAAAAAUAAAAGAAAA
BwAAAAgAAAAJAAAACgAAAAsAAAAMAAAADQAAAA4AAAAPAAAAEAAAABEAAAASAAAAEwAAABQAAAAV
AAAAFgAAABcAAAAYAAAAGQAAABoAAAAbAAAAHAAAAB0AAAAeAAAAHwAAACAAAAAhAAAAIgAAACMA
AAAkAAAAJQAAACYAAAAnAAAA/v///ykAAAAqAAAAKwAAACwAAAAtAAAALgAAAC8AAAAyAAAAMQAA
AP7///8zAAAANAAAADUAAAA2AAAANwAAADgAAAA5AAAA/v//////////////////////////////
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
//////////////////////////////////////////9zAHQAbwBwAGgAZQByAC4AYQBwAHAAbABl
AEAAdgBlAHIAaQB6AG8AbgAuAG4AZQB0AAAAHwAAAAwAAABDAGgAcgBpAHMAIABBAHAAcABsAGUA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAP7/AAAECgIAAAAAAAAAAAAAAAAAAAAAAAIA
AAAC1c3VnC4bEJOXCAArLPmuRAAAAAXVzdWcLhsQk5cIACss+a4UAgAA0AEAABAAAAABAAAAiAAA
AAMAAACQAAAADwAAAKgAAAAEAAAAtAAAAAYAAAC8AAAABwAAAMQAAAAIAAAAzAAAAAkAAADUAAAA
CgAAANwAAAAXAAAA5AAAAAsAAADsAAAAEAAAAPQAAAATAAAA/AAAABYAAAAEAQAADQAAAAwBAAAM
AAAAbgEAAAIAAADkBAAAHgAAAA8AAABPbi1zY3JlZW4gU2hvdwAAHgAAAAEAAAAAbi1zAwAAAO8S
AAADAAAADAAAAAMAAAACAAAAAwAAAAAAAAADAAAAAAAAAAMAAAAAAAAAAwAAADIRCQALAAAAAAAA
AAsAAAAAAAAACwAAAAAAAAALAAAAAAAAAB4QAAAEAAAAEAAAAFRpbWVzIE5ldyBSb21hbgAPAAAA
RGVmYXVsdCBEZXNpZ24AHQAAAE1hbmRhdG9yeSBSZXBsaWNhIE1hbmFnZW1lbnQADgAAAERlc2ly
ZWQgSW5wdXQADBAAAAYAAAAeAAAACwAAAEZvbnRzIFVzZWQAAwAAAAEAAAAeAAAAEAAAAERlc2ln
biBUZW1wbGF0ZQADAAAAAQAAAB4AAAANAAAAU2xpAAD2DygAAAAUAAAAX8CR48sSAAAQAPQDAwB0
AlRoZSBNb2F0cyBGYW1pbHkIAAAAVABoAGUAIABNAG8AYQB0AHMAIABGAGEAbQBpAGwAeQAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABkZSBUaXRsZXMAAwAA
AAIAAAAAAMABAAAHAAAAAAAAAEAAAAABAAAA9AAAAAAAAID8AAAAAgAAAAQBAAADAAAADAEAAAQA
AABcAQAABQAAAKABAAAEAAAAAgAAABQAAABfAEEAZABIAG8AYwBSAGUAdgBpAGUAdwBDAHkAYwBs
AGUASQBEAAAAAwAAAA4AAABfAEUAbQBhAGkAbABTAHUAYgBqAGUAYwB0AAAABAAAAA0AAABfAEEA
dQB0AGgAbwByAEUAbQBhAGkAbAAAAAAABQAAABgAAABfAEEAdQB0AGgAbwByAEUAbQBhAGkAbABE
AGkAcwBwAGwAYQB5AE4AYQBtAGUAAAACAAAAsAQAABMAAAAJBAAAAwAAAKT7VgMfAAAAJAAAAE0A
YQBuAGQAYQB0AG8AcgB5ACAAUgBlAHAAbABpAGMAYQAgAE0AYQBuAGEAZwBlAG0AZQBuAHQAIABj
AGgAYQByAHQAcwAAAB8AAAAeAAAAYwBoAHIAaQA=

------=_NextPart_000_0007_01C1857B.2F04CD40--



From owner-ietf-ldup@mail.imc.org  Mon Dec 17 09:36:41 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id JAA03148
	for <ldup-archive@odin.ietf.org>; Mon, 17 Dec 2001 09:36:40 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fBHEE0G00334
	for ietf-ldup-bks; Mon, 17 Dec 2001 06:14:00 -0800 (PST)
Received: from e1.ny.us.ibm.com (e1.ny.us.ibm.com [32.97.182.101])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fBHEDw200329
	for <ietf-ldup@imc.org>; Mon, 17 Dec 2001 06:13:59 -0800 (PST)
Received: from northrelay01.pok.ibm.com (northrelay01.pok.ibm.com [9.117.200.21])
	by e1.ny.us.ibm.com (8.9.3/8.9.3) with ESMTP id JAA461344
	for <ietf-ldup@imc.org>; Mon, 17 Dec 2001 09:10:52 -0500
Received: from d01mlc96.pok.ibm.com (d01mlc96.pok.ibm.com [9.117.250.33])
	by northrelay01.pok.ibm.com (8.11.1m3/NCO v5.01) with ESMTP id fBHEDeR84754
	for <ietf-ldup@imc.org>; Mon, 17 Dec 2001 09:13:41 -0500
To: <ietf-ldup@imc.org>
MIME-Version: 1.0
X-Mailer: Lotus Notes Release 5.0.7  March 21, 2001
From: "Timothy Hahn" <hahnt@us.ibm.com>
Message-ID: <OFCB636B3C.1D1169A6-ON85256B25.004D0350@pok.ibm.com>
Date: Mon, 17 Dec 2001 09:13:39 -0500
Subject: Slides from my reports at IETF #52
X-MIMETrack: Serialize by Router on D01MLC96/01/M/IBM(Release 5.0.9 |November 26, 2001) at
 12/17/2001 09:13:41 AM
Content-Type: multipart/mixed; boundary="=_mixed 004DA07185256B25_="
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


--=_mixed 004DA07185256B25_=
Content-Type: multipart/alternative; boundary="=_alternative 004DA07285256B25_="


--=_alternative 004DA07285256B25_=
Content-Type: text/plain; charset="us-ascii"

Hi all,

Here are PDFs of the PowerPoint presentations I gave during the LDUP 
working group last week:



Regards,
Tim Hahn

Internet: hahnt@us.ibm.com
Internal: Timothy Hahn/Endicott/IBM@IBMUS or IBMUSM00(HAHNT)
phone: 607.752.6388     tie-line: 8/852.6388
fax: 607.752.3681

--=_alternative 004DA07285256B25_=--
--=_mixed 004DA07185256B25_=
Content-Type: application/octet-stream; name="protocol.pdf"
Content-Disposition: attachment; filename="protocol.pdf"
Content-Transfer-Encoding: base64

MCBvYmoNPDwgDS9GaSBbIDE2IDAgUiBdIA0vUCBbIDE4IDAgUiBdIA0+PiANZW5kb2JqDTIwIDAg
b2JqDTw8IA0vRG0gWyA2MTIgNzkyIDYxMiA3OTIgXSANPj4gDWVuZG9iag0yMSAwIG9iag08PCAN
L01lIDIwIDAgUiANPj4gDWVuZG9iag0yMiAwIG9iag08PCANL0QgWyAxOSAwIFIgXSANL01TIDIx
IDAgUiANL1R5cGUgL0pvYlRpY2tldENvbnRlbnRzIA0+PiANZW5kb2JqDTIzIDAgb2JqDTw8IA0v
QSBbIDE0IDAgUiBdIA0vQ24gWyAyMiAwIFIgXSANL1YgMS4xMDAwMSANPj4gDWVuZG9iag0yNCAw
IG9iag08PCANL0NyZWF0aW9uRGF0ZSAoRDoyMDAxMTIxNzA4NTcwNikNL1Byb2R1Y2VyIChBY3Jv
YmF0IERpc3RpbGxlciA0LjAgZm9yIFdpbmRvd3MpDS9BdXRob3IgKHRqaCkNL0NyZWF0b3IgKFBz
Y3JpcHQuZGxsIFZlcnNpb24gNS4wKQ0vVGl0bGUgKE1pY3Jvc29mdCBQb3dlclBvaW50IC0gaWV0
ZjUycmVwb3J0LWxkdXBwcm90b2NvbC5wcHQpDS9Nb2REYXRlIChEOjIwMDExMjE3MDg1NzA3LTA1
JzAwJykNPj4gDWVuZG9iag14cmVmDTAgMjUgDTAwMDAwMDAwMDAgNjU1MzUgZg0KMDAwMDAwOTA2
NSAwMDAwMCBuDQowMDAwMDA5MjE3IDAwMDAwIG4NCjAwMDAwMDkzMzAgMDAwMDAgbg0KMDAwMDAw
OTk1NSAwMDAwMCBuDQowMDAwMDEwMTA3IDAwMDAwIG4NCjAwMDAwMTAyMDkgMDAwMDAgbg0KMDAw
MDAxMDg0OSAwMDAwMCBuDQowMDAwMDExMDAxIDAwMDAwIG4NCjAwMDAwMTExMDMgMDAwMDAgbg0K
MDAwMDAxMTcwNCAwMDAwMCBuDQowMDAwMDEyNDg5IDAwMDAwIG4NCjAwMDAwMTI4MDcgMDAwMDAg
bg0KMDAwMDAxNTM5MCAwMDAwMCBuDQowMDAwMDE1NDc0IDAwMDAwIG4NCjAwMDAwMTU1MzggMDAw
MDAgbg0KMDAwMDAxNTU2MSAwMDAwMCBuDQowMDAwMDE1NjEzIDAwMDAwIG4NCjAwMDAwMTU2NTMg
MDAwMDAgbg0KMDAwMDAxNTcyOSAwMDAwMCBuDQowMDAwMDE1Nzg0IDAwMDAwIG4NCjAwMDAwMTU4
MzMgMDAwMDAgbg0KMDAwMDAxNTg2OSAwMDAwMCBuDQowMDAwMDE1OTQ2IDAwMDAwIG4NCjAwMDAw
MTYwMTMgMDAwMDAgbg0KdHJhaWxlcg08PA0vU2l6ZSAyNQ0vSURbPDA4MzU0NjRlZDYyN2I5YTBi
MDU0NGU4NTQzMTcwMWZiPjwwODM1NDY0ZWQ2MjdiOWEwYjA1NDRlODU0MzE3MDFmYj5dDT4+DXN0
YXJ0eHJlZg0xNzMNJSVFT0YN
--=_mixed 004DA07185256B25_=
Content-Type: application/octet-stream; name="infomod.pdf"
Content-Disposition: attachment; filename="infomod.pdf"
Content-Transfer-Encoding: base64

[base64-encoded PDF attachment data omitted]
--=_mixed 004DA07185256B25_=--


From owner-ietf-ldup@mail.imc.org  Mon Dec 17 09:43:52 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id JAA03351
	for <ldup-archive@odin.ietf.org>; Mon, 17 Dec 2001 09:43:52 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fBHERT900632
	for ietf-ldup-bks; Mon, 17 Dec 2001 06:27:29 -0800 (PST)
Received: from smtp006pub.verizon.net (smtp006pub.verizon.net [206.46.170.185])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fBHERS200628
	for <ietf-ldup@imc.org>; Mon, 17 Dec 2001 06:27:28 -0800 (PST)
Received: from D7ST2111 (pool-141-151-8-227.phil.east.verizon.net [141.151.8.227])
	by smtp006pub.verizon.net  with ESMTP
	for <ietf-ldup@imc.org>; id fBHERCd10970
	Mon, 17 Dec 2001 08:27:12 -0600 (CST)
Reply-To: <christopher.apple@verizon.net>
From: "Chris Apple" <christopher.apple@verizon.net>
To: <ietf-ldup@imc.org>
Subject: RE: Profile Draft Slides
Date: Mon, 17 Dec 2001 08:19:13 -0600
Message-ID: <000901c18705$cdfbebf0$0200a8c0@D7ST2111>
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="----=_NextPart_000_000A_01C186D3.83617BF0"
X-Priority: 3 (Normal)
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook, Build 10.0.3311
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2600.0000
Importance: Normal
In-Reply-To: <000201c185ad$757f54c0$836197ac@D7ST2111>
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


This is a multi-part message in MIME format.

------=_NextPart_000_000A_01C186D3.83617BF0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit

DOH! Forgot the attachment. It's here this time.

Chris.

-----Original Message-----
From: owner-ietf-ldup@mail.imc.org [mailto:owner-ietf-ldup@mail.imc.org]
On Behalf Of Chris Apple
Sent: Saturday, December 15, 2001 3:14 PM
To: ietf-ldup@imc.org
Subject: Profile Draft Slides



A PowerPoint file of the slides presented during the WG meeting is
attached.

Chris Apple

Christopher.apple@verizon.net


------=_NextPart_000_000A_01C186D3.83617BF0
Content-Type: application/vnd.ms-powerpoint;
	name="ietf_52_ldup_profile.ppt"
Content-Disposition: attachment;
	filename="ietf_52_ldup_profile.ppt"
Content-Transfer-Encoding: base64

[base64-encoded PowerPoint attachment data omitted]
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
///////////////////////////////////////////////////////////+/wAABQACAAAAAAAA
AAAAAAAAAAAAAAACAAAAAtXN1ZwuGxCTlwgAKyz5rkQAAAAF1c3VnC4bEJOXCAArLPmuQAIAAPwB
AAAQAAAAAQAAAIgAAAADAAAAkAAAAA8AAACoAAAABAAAALQAAAAGAAAAvAAAAAcAAADEAAAACAAA
AMwAAAAJAAAA1AAAAAoAAADcAAAAFwAAAOQAAAALAAAA7AAAABAAAAD0AAAAEwAAAPwAAAAWAAAA
BAEAAA0AAAAMAQAADAAAAJsBAAACAAAA5AQAAB4AAAAPAAAAT24tc2NyZWVuIFNob3cAAB4AAAAC
AAAAIAAtcwMAAAA/EAAAAwAAAAcAAAADAAAAAQAAAAMAAAAAAAAAAwAAAAAAAAADAAAAAAAAAAMA
AAAxFQgACwAAAAAAAAALAAAAAAAAAAsAAAAAAAAACwAAAAAAAAAeEAAABAAAABAAAABUaW1lcyBO
ZXcgUm9tYW4ABgAAAEFyaWFsAA8AAABEZWZhdWx0IERlc2lnbgBSAAAAR2VuZXJhbCBVc2FnZSBQ
cm9maWxlIGZvciBMREFQdjMgUmVwbGljYXRpb24gZHJhZnQtaWV0Zi1sZHVwLXVzYWdlLXByb2Zp
bGUtMDIudHh0AAwQAAAGAAAAHgAAAAsAAABGb250cyBVc2VkAAMAAAACAP7/AAAFAAIAAAAAAAAA
AAAAAAAAAAAAAAEAAADghZ/y+U9oEKuRCAArJ7PZMAAAADARAAALAAAAAQAAAGAAAAACAAAAaAAA
AAQAAADEAAAACAAAANwAAAAJAAAA9AAAABIAAAAAAQAACgAAACABAAAMAAAALAEAAA0AAAA4AQAA
DwAAAEQBAAARAAAATAEAAAIAAADkBAAAHgAAAFIAAABHZW5lcmFsIFVzYWdlIFByb2ZpbGUgZm9y
IExEQVB2MyBSZXBsaWNhdGlvbiBkcmFmdC1pZXRmLWxkdXAtdXNhZ2UtcHJvZmlsZS0wMi50eHQA
ZAAeAAAADgAAAFJpY2hhcmQgSHViZXIAUHIeAAAADgAAAFJpY2hhcmQgSHViZXIAUHIeAAAAAgAA
ADQAY2geAAAAFQAAAE1pY3Jvc29mdCBQb3dlclBvaW50ACBmb0AAAAAQv80UAwAAAEAAAACwrtSk
tYHBAUAAAADAbaK5uIHBAQMAAAA8AAAARwAAANwPAAD/////AwAAAAgAbxBNDAAAAQAJAAAD5gcA
AAYARgAAAAAAEQAAACYGDwAYAP////8AABAAAAAAAAAAAAC6AwAAygIAAAkAAAAmBg8ACAD/////
AgAAABcAAAAmBg8AIwD/////BAAbAFROUFAUAMjwADAAAAAAFAAAAEQNkQAAAAAAAAAKAAAAJgYP
AAoAVE5QUAAAAgD0AwkAAAAmBg8ACAD/////AwAAAA8AAAAmBg8AFABUTlBQBAAMAAEAAAABAAAA
AAAAAAUAAAALAgAAAAAFAAAADALKAroDBQAAAAQBDQAAAAcAAAD8AgAA////AAAABAAAAC0BAAAI
AAAA+gIFAAEAAAAAAAAABAAAAC0BAQAEAAAALQEAAAkAAAAdBiEA8ADQAsADAAAAAAQAAAAtAQAA
BwAAAPwCAAD///8AAAAEAAAALQECAAQAAADwAQAACAAAAPoCAAAAAAAAAAAAAAQAAAAtAQAAEAAA
ACYGDwAWAP////8AAEcAAACPAgAAEQEAAMECAAAIAAAAJgYPAAYA/////wEAHAAAAPsCAAAAAAAA
AAAAAAAAAAAAAAAAAKoSAJJx9XdAAAAAhAcKG0xT9XdVU/V3AQAAAAAAMAAEAAAALQEDAAUAAAAJ
AgAAAAIFAAAAFAIAAAAABQAAAAIBAgAAABAAAAAmBg8AFgD/////AABHAQAAjwIAAHkCAADBAgAA
CAAAACYGDwAGAP////8BAAUAAAAJAgAAAAIFAAAAFAIAAAAABQAAAAIBAgAAAAcAAAD8AgEAAAAA
AAAABAAAAC0BBAAEAAAALQEBAAcAAAAbBNEAkQMYADAABAAAAC0BAgAEAAAALQEAAAUAAAAJAgAA
AAIFAAAAFAIAAAAAHAAAAPsC0P8AAAAAAAC8AgAAAAAAAAAiQXJpYWwA9XdAAAAAaAcKUkxT9XdV
U/V3AQAAAAAAMAAEAAAALQEFAAQAAADwAQMABQAAAAkCAAAAAgUAAAAUAgAAAAAFAAAALgEYAAAA
BQAAAAIBAQAAADcAAAAyClMAXgAgAAAAR2VuZXJhbCBVc2FnZSBQcm9maWxlIGZvciBMREFQdjMl
ABsAHQAbABMAGgAOAA0AIwAaABsAHQAbAA0AIAATAB0AEAAOAA0AGwANABAAHQATAA0AHgAiACMA
IAAbABsABQAAAC4BAQAAAAUAAAACAQIAAAAFAAAACQIAAAACBQAAABQCAAAAAAUAAAAuARgAAAAF
AAAAAgEBAAAAGAAAADIKjABfAQsAAABSZXBsaWNhdGlvbgAjABoAHgANAA0AGwAbABAADQAdAB4A
BQAAAC4BAQAAAAUAAAACAQIAAAAcAAAA+wLb/wAAAAAAAJABAAAAAAAAACJBcmlhbAD1d0AAAACE
BwocTFP1d1VT9XcBAAAAAAAwAAQAAAAtAQMABAAAAPABBQAFAAAACQIAAAACBQAAABQCAAAAAAUA
AAAuARgAAAAFAAAAAgEBAAAAEAAAADIKvADMAAYAAABkcmFmdC0VAAwAFQALAAoADAAFAAAALgEB
AAAABQAAAAIBAgAAAAUAAAAJAgAAAAIFAAAAFAIAAAAABQAAAC4BGAAAAAUAAAACAQEAAAANAAAA
Mgq8ACMBBAAAAGlldGYJABUACgAKAAUAAAAuAQEAAAAFAAAAAgECAAAABQAAAAkCAAAAAgUAAAAU
AgAAAAAFAAAALgEYAAAABQAAAAIBAQAAAAkAAAAyCrwAVQEBAAAALQANAAUAAAAuAQEAAAAFAAAA
AgECAAAABQAAAAkCAAAAAgUAAAAUAgAAAAAFAAAALgEYAAAABQAAAAIBAQAAAA0AAAAyCrwAYgEE
AAAAbGR1cAgAFQAVABUABQAAAC4BAQAAAAUAAAACAQIAAAAFAAAACQIAAAACBQAAABQCAAAAAAUA
AAAuARgAAAAFAAAAAgEBAAAAIgAAADIKvACpARIAAAAtdXNhZ2UtcHJvZmlsZS0wMi4MABUAEwAU
ABUAFQANABQADQAVAAoACAAJABUADAAVABUACgAFAAAALgEBAAAABQAAAAIBAgAAAAUAAAAJAgAA
AAIFAAAAFAIAAAAABQAAAC4BGAAAAAUAAAACAQEAAAAMAAAAMgq8AM4CAwAAAHR4dHAKABMACgAF
AAAALgEBAAAABQAAAAIBAgAAAAUAAAACAQIAAAAEAAAALQEEAAQAAAAtAQEABwAAABsEoQJ5A/AA
SAAEAAAALQECAAQAAAAtAQAABQAAAAkCAAAAAgUAAAAUAgAAAAAFAAAACQIAAAACBQAAABQCAAAA
AAUAAAAuARgAAAAFAAAAAgEBAAAACAAAADIKGQFSAAEAAACVAAUAAAAuAQEAAAAFAAAAAgECAAAA
BQAAAAkCAAAAAgUAAAAUAgAAAAAFAAAALgEYAAAABQAAAAIBAQAAADEAAAAyChkBdgAcAAAAQ2hh
bmdlcyBiZXR3ZWVuIC0wMSBhbmQgLTAyOhsAFQAVABUAFAAVABMACgAVABUACgAbABUAFQAVAAoA
DQAUABUACwAUABUAFQAKAA0AFQAVAAoABQAAAC4BAQAAAAUAAAACAQIAAAAcAAAA+wLg/wAAAAAA
AJABAAAAAAAAACJBcmlhbAD1d0AAAABoBwpTTFP1d1VT9XcBAAAAAAAwAAQAAAAtAQUABAAAAPAB
AwAFAAAACQIAAAACBQAAABQCAAAAAAUAAAAuARgAAAAFAAAAAgEBAAAACAAAADIKSAGCAAEAAACW
AAUAAAAuAQEAAAAFAAAAAgECAAAABQAAAAkCAAAAAgUAAAAUAgAAAAAFAAAALgEYAAAABQAAAAIB
AQAAAEYAAAAyCkgBoAAqAAAAR2VuZXJhbCBlZGl0cyB0byAod2UgaG9wZSkgaW1wcm92ZSBjbGFy
aXR5GQASABIAEQALABIABwAJABIAEgAHAAkAEAAIAAkAEgAJAAsAFwASAAgAEgASABIAEgAKAAkA
BwAbABIACwARABAAEgAJABAABwASAAsABwAJABAABQAAAC4BAQAAAAUAAAACAQIAAAAFAAAACQIA
AAACBQAAABQCAAAAAAUAAAAuARgAAAAFAAAAAgEBAAAACAAAADIKdgGCAAEAAACWAAUAAAAuAQEA
AAAFAAAAAgECAAAABQAAAAkCAAAAAgUAAAAUAgAAAAAFAAAALgEYAAAABQAAAAIBAQAAADYAAAAy
CnYBoAAfAAAATm8gbWFqb3IgYWRkaXRpb25zIG9yIGRlbGV0aW9uczMXABIACQAbABEACAARAAsA
CQASABIAEQAIAAgACAARABIAEAAJABIACwAIABIAEgAHABIACQAHABIAEgAQAAUAAAAuAQEAAAAF
AAAAAgECAAAAHAAAAPsC2/8AAAAAAACQAQAAAAAAAAAiQXJpYWwA9XdAAAAAhAcKHUxT9XdVU/V3
AQAAAAAAMAAEAAAALQEDAAQAAADwAQUABQAAAAkCAAAAAgUAAAAUAgAAAAAFAAAALgEYAAAABQAA
AAIBAQAAAAgAAAAyCqsBUgABAAAAlQAFAAAALgEBAAAABQAAAAIBAgAAAAUAAAAJAgAAAAIFAAAA
FAIAAAAABQAAAC4BGAAAAAUAAAACAQEAAABCAAAAMgqrAXYAJwAAAE5ldyBpc3N1ZSBvZiB0aGUg
cHJvZmlsZSBkcmFmdCBvbmNlIHRoZaEbABUAGwAKAAkAEgATABUAFQAKABUACgAKAAsAFQAUAAsA
FQAMABUACgAJAAgAFQAKABUADAAVAAsACgAKABUAFQATABQACwAKABUAFQAFAAAALgEBAAAABQAA
AAIBAgAAAAUAAAAJAgAAAAIFAAAAFAIAAAAABQAAAC4BGAAAAAUAAAACAQEAAAA2AAAAMgrYAXYA
HwAAAGFyY2hpdGVjdHVyZSBkcmFmdCBpcyByZS1pc3N1ZWQzFQAMABMAFQAIAAsAFAATAAoAFQAN
ABUACgAVAAwAFQAKAAsACgAIABMACgANABUADAAJABIAEwAVABUAFAAFAAAALgEBAAAABQAAAAIB
AgAAABwAAAD7AuD/AAAAAAAAkAEAAAAAAAAAIkFyaWFsAPV3QAAAAGgHClRMU/V3VVP1dwEAAAAA
ADAABAAAAC0BBQAEAAAA8AEDAAUAAAAJAgAAAAIFAAAAFAIAAAAABQAAAC4BGAAAAAUAAAACAQEA
AAAIAAAAMgoHAoIAAQAAAJYABQAAAC4BAQAAAAUAAAACAQIAAAAFAAAACQIAAAACBQAAABQCAAAA
AAUAAAAuARgAAAAFAAAAAgEBAAAACQAAADIKBwKgAAEAAABBABUABQAAAC4BAQAAAAUAAAACAQIA
AAAFAAAACQIAAAACBQAAABQCAAAAAAUAAAAuARgAAAAFAAAAAgEBAAAAOQAAADIKBwK1ACEAAABk
ZHJlc3Mgc3BlY2lmaWMgYXJjaGl0ZWN0dXJlIGFuZCAAEgASAAsAEgAQABAACAAQABIAEgAQAAcA
CQAHABAACQASAAsAEAARAAgACAASABAACQASAAsAEQAJABIAEgASAAkABQAAAC4BAQAAAAUAAAAC
AQIAAAAFAAAACQIAAAACBQAAABQCAAAAAAUAAAAuARgAAAAFAAAAAgEBAAAAGAAAADIKBwKBAgsA
AABpbmZvcm1hdGlvbgAHABIACAASAAsAGwARAAkABwASABIABQAAAC4BAQAAAAUAAAACAQIAAAAF
AAAACQIAAAACBQAAABQCAAAAAAUAAAAuARgAAAAFAAAAAgEBAAAAGQAAADIKLQKgAAwAAABtb2Rl
bCBpc3N1ZXMbABIAEQASAAcACQAHABAAEAASABIAEAAFAAAALgEBAAAABQAAAAIBAgAAAAUAAAAJ
AgAAAAIFAAAAFAIAAAAABQAAAC4BGAAAAAUAAAACAQEAAAAIAAAAMgpbAoIAAQAAAJYABQAAAC4B
AQAAAAUAAAACAQIAAAAFAAAACQIAAAACBQAAABQCAAAAAAUAAAAuARgAAAAFAAAAAgEBAAAANAAA
ADIKWwKgAB4AAABBZGQgc3BlY2lmaWNzIHRvIGV4aXN0aW5nIHRleHQVABIAEgAJABAAEgASABAA
BwAJAAcAEAAQAAkACAASAAkAEgAQAAcAEAAJAAcAEgASAAkACAASABAACQAFAAAALgEBAAAABQAA
AAIBAgAAAAUAAAACAQIAAAAEAAAALQEBAAQAAAAtAQQAHAAAAPsCEAAHAAAAAAC8AgAAAAABAgIi
U3lzdGVtAAAAAAoAAAAEAAAAAAADAAAAAQAAAAAAMAAEAAAALQEDAAQAAADwAQUADwAAACYGDwAU
AFROUFAEAAwAAAAAAAAAAAAAAAAACQAAACYGDwAIAP////8BAAAAAwAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHgAAABAAAABEZXNpZ24gVGVtcGxh
dGUAAwAAAAEAAAAeAAAADQAAAFNsaWRlIFRpdGxlcwADAAAAAQAAAACYAAAAAwAAAAAAAAAgAAAA
AQAAADYAAAACAAAAPgAAAAEAAAACAAAACgAAAF9QSURfR1VJRAACAAAA5AQAAEEAAABOAAAAewAx
ADkAMQA2ADEAOABDADkALQA0ADEAMgA5AC0ANAA2AEYAOAAtAEEAMAA4ADMALQAyADAAOABFAEEA
NABBAEUAMgBGAEYAOAB9AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAPYPJQAAABQAAABfwJHjGxAAAA0A9AMDABIAUmljaGFyZCBIdWJlcggAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQwB1AHIAcgBlAG4AdAAgAFUAcwBlAHIA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABoAAgD///////////////8A
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMAAAALQAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AP///////////////wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAA////////////////AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAD///////////////8AAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABSAG8AbwB0ACAARQBuAHQAcgB5AAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFgAFAP//////////AQAAABCN
gWSbT88RhuoAqgC5KegAAAAAAAAAAAAAAAAwkirNBYfBARwAAABABAAAAAAAAFAAbwB3AGUAcgBQ
AG8AaQBuAHQAIABEAG8AYwB1AG0AZQBuAHQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAoAAIB
AgAAAAMAAAD/////AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAD8QAAAA
AAAABQBTAHUAbQBtAGEAcgB5AEkAbgBmAG8AcgBtAGEAdABpAG8AbgAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAACgAAgEEAAAA//////////8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAANAAAAYBEAAAAAAAAFAEQAbwBjAHUAbQBlAG4AdABTAHUAbQBtAGEAcgB5AEkAbgBmAG8A
cgBtAGEAdABpAG8AbgAAAAAAAAAAAAAAOAACAf///////////////wAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAAAAAAAP//////////AwAAAAQAAAAFAAAABgAAAAcA
AAAIAAAACQAAAAoAAAD+//////////////8OAAAADwAAABAAAAARAAAAEgAAABMAAAAUAAAAFQAA
AP7//////////v///xcAAAD9/////v////7///8dAAAAGwAAAP//////////////////////////
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////AQAAAAIAAAADAAAABAAAAAUAAAAGAAAABwAA
AAgAAAAJAAAACgAAAAsAAAANAAAA/v///w4AAAAPAAAAEAAAAP7/////////////////////////
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////
//////////////////////////////////////9wAGwAZQBAAHYAZQByAGkAegBvAG4ALgBuAGUA
dAAAAB8AAAAMAAAAQwBoAHIAaQBzACAAQQBwAHAAbABlAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAP7/AAAFAAIAAAAAAAAAAAAAAAAAAAAAAAIAAAAC
1c3VnC4bEJOXCAArLPmuRAAAAAXVzdWcLhsQk5cIACss+a5AAgAA/AEAABAAAAABAAAAiAAAAAMA
AACQAAAADwAAAKgAAAAEAAAAtAAAAAYAAAC8AAAABwAAAMQAAAAIAAAAzAAAAAkAAADUAAAACgAA
ANwAAAAXAAAA5AAAAAsAAADsAAAAEAAAAPQAAAATAAAA/AAAABYAAAAEAQAADQAAAAwBAAAMAAAA
mwEAAAIAAADkBAAAHgAAAA8AAABPbi1zY3JlZW4gU2hvdwAAHgAAAAIAAAAgAC1zAwAAAD8QAAAD
AAAABwAAAAMAAAABAAAAAwAAAAAAAAADAAAAAAAAAAMAAAAAAAAAAwAAADEVCAALAAAAAAAAAAsA
AAAAAAAACwAAAAAAAAALAAAAAAAAAB4QAAAEAAAAEAAAAFRpbWVzIE5ldyBSb21hbgAGAAAAQXJp
YWwADwAAAERlZmF1bHQgRGVzaWduAFIAAABHZW5lcmFsIFVzYWdlIFByb2ZpbGUgZm9yIExEQVB2
MyBSZXBsaWNhdGlvbiBkcmFmdC1pZXRmLWxkdXAtdXNhZ2UtcHJvZmlsZS0wMi50eHQADBAAAAYA
AAAeAAAACwAAAEZvbnRzIFVzZWQAAwAAAAIAAAAeAAAAEAAAAERlc2lnbiBUZW1wbGF0ZQADAAAA
AQAAAB4AAAANAAAAU2xpZGUgVGl0bGVzAAMAAAABAAAAAMABAAAHAAAAAAAAAEAAAAABAAAAwAAA
AAIAAADIAAAAAwAAACABAAAEAAAAKAEAAAUAAABcAQAABgAAAKABAAAFAAAAAgAAAAoAAABfUElE
X0dVSUQAAwAAABQAAABfQWRIb2NSZXZpZXdDeWNsZUlEAAQAAAAOAAAAX0VtYWlsU3ViamVjdAAF
AAAADQAAAF9BdXRob3JFbWFpbAAGAAAAGAAAAF9BdXRob3JFbWFpbERpc3BsYXlOYW1lAAAAAAAA
9g8lAAAAFAAAAF/AkeMbEAAADQD0AwMAEgBSaWNoYXJkIEh1YmVyCAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAACAAAA5AQAAEEAAABOAAAAewAxADkAMQA2ADEAOABDADkALQA0ADEAMgA5AC0ANAA2AEYA
OAAtAEEAMAA4ADMALQAyADAAOABFAEEANABBAEUAMgBGAEYAOAB9AAAAAAADAAAAQrmRqx8AAAAV
AAAAUAByAG8AZgBpAGwAZQAgAEQAcgBhAGYAdAAgAFMAbABpAGQAZQBzAAAAAAAfAAAAHgAAAGMA
aAByAGkAcwB0AG8AcABoAGUAcgAuAGEAcAA=

------=_NextPart_000_000A_01C186D3.83617BF0--



From owner-ietf-ldup@mail.imc.org  Mon Dec 17 18:20:55 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id SAA21386
	for <ldup-archive@odin.ietf.org>; Mon, 17 Dec 2001 18:20:53 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fBHN4KO02630
	for ietf-ldup-bks; Mon, 17 Dec 2001 15:04:20 -0800 (PST)
Received: from smtp.oncalldba.com (roc-24-169-98-153.rochester.rr.com [24.169.98.153])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fBHN4I202623
	for <ietf-ldup@imc.org>; Mon, 17 Dec 2001 15:04:19 -0800 (PST)
Received: from RMINC_DOM-MTA by smtp.oncalldba.com
	with Novell_GroupWise; Mon, 17 Dec 2001 17:53:53 -0700
Message-Id: <sc1e3131.082@smtp.oncalldba.com>
X-Mailer: Novell GroupWise Internet Agent 6.0
Date: Mon, 17 Dec 2001 17:53:36 -0700
From: "Ed Reed" <eer@OnCallDBA.COM>
To: <ietf-ldup@imc.org>, <hahnt@us.ibm.com>
Subject: Re: Is State-based LDUP needed?
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
X-MIME-Autoconverted: from quoted-printable to 8bit by above.proper.com id fBHN4J202626
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>
Content-Transfer-Encoding: 8bit


Tim -

I guess I'd consider it a failure if only one state-based system were developed under the spec.  If there were only one, I'd argue that part of the spec should be dropped before being sent forward in the standardization effort.

At this point, I don't know of even one vendor committed to the state-based version.

Ed

>>> "Timothy Hahn" <hahnt@us.ibm.com> 12/14/01 07:27AM >>>
Ed,

First, I'll say that we've only been looking at log-based implementations 
- probably because we see benefits in doing it this way.

Second, it occurs to me that LDUP really doesn't need to have two 
interoperable "state-based" implementations - it just needs to have two 
interoperable implementations (regardless of state-based/log-based). Thus, 
I don't quite follow your argument that we'd need at least two state-based 
implementations to leave "state-based" references in the specs.  If/when 
we get two interoperable implementations (state-based or log-based), that 
would be enough I would think.

Either way though, if just concentrating on "log-based" moves the specs 
along quicker, I'm all for it.

Regards,
Tim Hahn

Internet: hahnt@us.ibm.com 
Internal: Timothy Hahn/Endicott/IBM@IBMUS or IBMUSM00(HAHNT)
phone: 607.752.6388     tie-line: 8/852.6388
fax: 607.752.3681





"Ed Reed" <eer@OnCallDBA.COM>
Sent by: owner-ietf-ldup@mail.imc.org 
12/13/2001 11:05 PM

 
        To:     <ietf-ldup@imc.org>
        cc: 
        Subject:        Is State-based LDUP needed?

 


I asked this question at the ldup meeting on Thursday, and agreed to post 
the question to the distribution list.

Is anyone planning to implement state-based ldup?  If not - that is, if 
there are not going to be at least two interoperable implementations of 
the proposed specification, should we not remove it from the ldup design 
now, rather than later?

The protocol will support it, but there are certainly places in the 
architecture and other documents where the different handling of change 
information required by the state-based scheme adds unnecessary text if 
no one is actually going to use it.

This is a pragmatic decision - I personally like state-based schemes, even 
though there are things (like transaction replication) that I doubt 
they'll ever be able to handle well.  Also, all the implementers I know 
are focused on the log-based scheme, instead.  It seems easier for them to 
get their heads around, for some reason...

So - I don't think it's appropriate for me to be the only one championing 
it, and have reached the conclusion that if we can't find even two 
implementers to build it, we should not bother including it in further 
work.

If you're planning to build it, speak up.  If not, silence may well be 
taken as assent to remove references to it from the various protocol 
documents.

Best regards,
Ed

Ps - yeah, I know, you told me so...

=================
Ed Reed
Reed-Matthews, Inc.
+1 585 624 2402
http://www.Reed-Matthews.COM 
Note:  Area code is 585






From owner-ietf-ldup@mail.imc.org  Tue Dec 18 00:14:11 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id AAA27111
	for <ldup-archive@odin.ietf.org>; Tue, 18 Dec 2001 00:14:11 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fBI4xGG16461
	for ietf-ldup-bks; Mon, 17 Dec 2001 20:59:16 -0800 (PST)
Received: from nexus.adacel.com (shelob.adacel.com.au [203.36.26.146] (may be forged))
	by above.proper.com (8.11.6/8.11.3) with SMTP id fBI4x9216446
	for <ietf-ldup@imc.org>; Mon, 17 Dec 2001 20:59:11 -0800 (PST)
Received: (qmail 28734 invoked from network); 18 Dec 2001 04:53:18 -0000
Received: from unknown (HELO osmium) (10.32.24.165)
  by nexus.adacel.com with SMTP; 18 Dec 2001 04:53:18 -0000
Reply-To: <steven.legg@adacel.com.au>
From: "Steven Legg" <steven.legg@adacel.com.au>
To: "'John McMeeking'" <jmcmeek@us.ibm.com>
Cc: <ietf-ldup@imc.org>
Subject: RE: Supporting Partial Replication
Date: Tue, 18 Dec 2001 15:59:19 +1100
Message-ID: <007f01c18780$c184bcf0$a518200a@osmium.mtwav.adacel.com.au>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="iso-8859-1"
Content-Transfer-Encoding: 7bit
X-Priority: 3 (Normal)
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook 8.5, Build 4.71.2377.0
X-MimeOLE: Produced By Microsoft MimeOLE V4.72.2120.0
Importance: Normal
In-Reply-To: <OF3AC3E5B6.C390151E-ON86256B1E.004964B4@rchland.ibm.com>
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>
Content-Transfer-Encoding: 7bit



John,

John McMeeking wrote:
> I agree with Ryan on the first part of the note (I haven't gotten through
> the flurry of LDUP notes).

See my reply to Ryan for corrections and clarifications.

> But, we may have a problem with update vectors
> under partial/fractional replication.

Yes.

> Update vectors are defined per replica and per replication context
> (actually, defined per replica, where a replica is a replicated
> instance of a replication context).
>
> If R2 is a subtree of R1, separate update vectors are maintained for both
> R2 and R1, and the problem Steven describes doesn't occur.

Yes (doh!).

> For either S1
> or S2 to replicate U1 to S3, replication context R1 must be added to S3,
> and the replication context properly initialized on S3 -- either via a full
> update replication session, or via some other means (e.g. LDIF). At that
> point, U1 (and the rest of the entries in R1) are present on S3 and
> replication continues normally. Replication of R1 is independent of R2.

You're alluding to the flip side of what I'm saying. If the current
architecture can't support a replication topology where the servers
in a cycle hold different replication areas in the same replication
context then the choices are to not replicate, or to force all the
servers in the cycle to have the same replication area(s). Too bad
if I don't want S3 to see stuff in R1.

> If R2 is a sparse/fractional replica of R1, R2 would not be considered a
> separate replication context. In this case, sparse/fractional replication
> is an attribute of the replicaSubentry for S3. If U1 falls within the
> attributes and/or entries specified for S3, it will be replicated under
> the replication agreements targeting S3 under R1, and the UV for S3
> updated accordingly.
>
> What happens when S3 is a fractional replica, and U1 does not contain any
> attributes replicated to S3? draft-ietf-ldup-model-06, section 8.2,
> specifies "When fully populating or incrementally bringing up to date a
> Fractional Replica each of the Replication Updates must only
> contain updates to the attributes in the Fractional Entry Specification."
> This implies that S3 will never see U1, and thus not fully update its
> update vector until such time as it receives an update originating at the
> same server.

Do you agree that S1 will also never see U1 ?
This breaks eventual convergence.

> Steven's example described U1 and U2 as successive updates
> originating at S2. Suppose U1 originated at S1 and U2 originated at S2:
>
> Initially, the update vector for S3 (UV3) looks like
> < Tx#1#0#0, Ty#2#0#0, null > (latest CSNs from S1 and S2, no updates
> originating at S3)

BTW, you have the second and third components of the CSNs transposed.

> At time T15, on server 1, client performs update U1: CSN = T15#1#0#0.
>
> This is replicated to S2, and eventually S2 replicates to S3. But since
> no attributes in U1 are present in S3's fractional entry specification,
> no replication occurs.
>
> Update vector for S3 remains < Tx#1#0#0, Ty#2#0#0, null >

There are some subtleties in the way that the update vector is maintained
that are not explicitly called out by the architecture draft. The update
U1 at S1 causes the update vector for S1 to be revised, but since the
CSN < T15#1#0#0 > in the update vector for S1 is an attribute value it is
tagged with a CSN! Common sense suggests this CSN should also be
< T15#1#0#0 >.
Changes to the update vector are always replicated (as add-attribute-value
primitives), so S3 will receive the change to S1's update vector because
of U1, even though it doesn't receive U1. On receiving the
add-attribute-value primitive for S1's update vector, S3 will change its
copy of S1's update vector. It will also change the CSN for S1 in its own
update vector to < T15#1#0#0 >, the CSN on the add-attribute-value
primitive.

The update vector for S3 actually becomes < T15#1#0#0, Ty#2#0#0, null >.
The purge point advances in due course.
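The mechanism described in the preceding paragraphs can be sketched in a few
lines of Python. This is an illustrative model only, not something taken from
the drafts: the Server class, the replica IDs, and the tuple encoding of CSNs
are all invented for the example.

```python
# Illustrative sketch only (not from the LDUP drafts). CSNs are modelled
# as tuples, e.g. T15#1#0#0 -> (15, 1, 0, 0); Python compares tuples
# lexicographically, which matches CSN ordering for this example.

class Server:
    def __init__(self, name, update_vector):
        self.name = name
        # update_vector maps replica id -> latest CSN seen from that replica
        self.update_vector = dict(update_vector)

    def receive_uv_change(self, origin_id, primitive_csn):
        # An add-attribute-value primitive carrying a change to another
        # server's update vector arrives; the CSN tagged on the primitive
        # advances our own vector entry for the originating replica,
        # even though the underlying update (U1) was never sent to us.
        if primitive_csn > self.update_vector.get(origin_id, (0,)):
            self.update_vector[origin_id] = primitive_csn

# S3 is a fractional replica; U1 (CSN T15#1#0#0, originated at S1) lies
# outside its fractional entry specification and is never replicated,
# but the replicated change to S1's update vector still arrives.
s3 = Server("S3", {1: (10, 1, 0, 0), 2: (12, 2, 0, 0)})
s3.receive_uv_change(origin_id=1, primitive_csn=(15, 1, 0, 0))
print(s3.update_vector[1])  # (15, 1, 0, 0): vector advanced without U1
```

The point of the sketch is only that S3's vector entry for S1 can advance on
the strength of the primitive's CSN alone, without S3 ever seeing U1 itself.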

Regards,
Steven



From owner-ietf-ldup@mail.imc.org  Tue Dec 18 00:14:15 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id AAA27122
	for <ldup-archive@odin.ietf.org>; Tue, 18 Dec 2001 00:14:15 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fBI4xHr16463
	for ietf-ldup-bks; Mon, 17 Dec 2001 20:59:17 -0800 (PST)
Received: from nexus.adacel.com (shelob.adacel.com.au [203.36.26.146] (may be forged))
	by above.proper.com (8.11.6/8.11.3) with SMTP id fBI4x6216445
	for <ietf-ldup@imc.org>; Mon, 17 Dec 2001 20:59:08 -0800 (PST)
Received: (qmail 28730 invoked from network); 18 Dec 2001 04:53:15 -0000
Received: from unknown (HELO osmium) (10.32.24.165)
  by nexus.adacel.com with SMTP; 18 Dec 2001 04:53:15 -0000
Reply-To: <steven.legg@adacel.com.au>
From: "Steven Legg" <steven.legg@adacel.com.au>
To: <ietf-ldup@imc.org>
Subject: RE: Supporting Partial Replication (RESEND)
Date: Tue, 18 Dec 2001 15:59:16 +1100
Message-ID: <007e01c18780$bf7d2d20$a518200a@osmium.mtwav.adacel.com.au>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="iso-8859-1"
Content-Transfer-Encoding: 7bit
X-Priority: 3 (Normal)
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook 8.5, Build 4.71.2377.0
X-MimeOLE: Produced By Microsoft MimeOLE V4.72.2120.0
Importance: Normal
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>
Content-Transfer-Encoding: 7bit



This is a resend. Some people didn't see it the first time around
because of SPAM filters.

=================================================================

Ryan,

Please disregard my previous reply to you (which didn't make it to the
mailing list).
More below.

Ryan Moats wrote:
>On Thu, Dec 06, 2001 at 02:45:14PM +1100, Steven Legg wrote:
>| Folks,
>| A while ago I promised to write up my thoughts on changes to the LDUP
>| architecture to support partial replication. Well this is part one of
>| that write up, which discusses changes to the architecture to make it
>| more amenable to replication topologies involving partial replicas.
>| Consider the following replication topology:
>|   S1 ====== S2
>|     \      /
>|      \    /
>|       \  /
>|        S3
>| Servers S1 & S2 hold full copies of replication area R1. S3 holds
>| replication area R2, a subset of R1. R2 could be a subtree of R1, a
>| sparse replica or a fractional replica. The exact details don't matter
>| at this stage. It is enough to recognize that R2 is a subset of the
>| information in R1.
>| Suppose that there are two successive update operations, U1 & U2, performed
>| at S2, where U1 affects information in R1 but wholly outside of R2 and U2
>| is wholly within R2 (and thus also within R1). The CSN allotted to U1 is
>| less than the CSN allotted to U2.
>| Suppose S3 and S2 establish replication sessions to exchange updates.
>| S3 has no changes to send. S2 will send U2 because it is within the scope
>| of the replication agreement S3 has with S2, but will not send U1.
>| S3 and S1 then establish replication sessions. S1 has no changes to send.
>| S3 sends U2 since the CSN for U2 is more recent than the CSN corresponding
>| to S2 in S1's update vector. S1 will set the CSN corresponding to S2 in
>| its update vector to be the CSN for U2.
>| Now, if S2 establishes a replication session with S1 it will send no
>| updates. In particular, it won't send U1 because the CSN corresponding to
>| S2 in S1's update vector is already greater than the CSN for U1. In fact,
>| S1 will never receive U1, so the requirement for all replicas to converge
>| will not be satisfied. In general, the current LDUP architecture only
>| works if the replication topology has no cycles, or where there are
>| cycles, if the replicas in each cycle have replication agreements for
>| exactly the same area of replication.
>|
>
>Hold on...  I'm deleting the rest of this message because you've lost me
>here.

Sorry. I was operating under a false assumption that a server can have only
one replication context and therefore only one update vector. I'll correct
that shortly to remove some of the confusion.

>I thought that (a) we had a separate CSN vector for each other
>server

You're thinking of the purge vector. In the current architecture, each
replication context a server maintains contains a replica subentry that
holds a single CSN vector which is its own update vector. There is also
a replica subentry for each other server, which holds a copy of the
other server's update vector in the same replication context.

The combination of a server's own update vector and the update vectors
received from all the other servers constitute the purge vector for that
replication context. The purge vector tells a server which of the CSNs
it holds are old enough to be discarded, but plays no part in deciding
which updates are propagated.
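For concreteness, one plausible formulation of that combination is sketched
below: the purge vector as the element-wise minimum across all the update
vectors a server holds. The drafts don't pin down the exact computation, so
the minimum rule and the tuple encoding of CSNs here are assumptions made
for illustration.

```python
# Hedged sketch: treat the purge vector as the element-wise minimum over
# a server's own update vector plus its copies of every other server's
# update vector. A CSN at or below the minimum for its replica has been
# seen by everyone, so the corresponding change information is old
# enough to discard. The exact rule in the LDUP drafts may differ.

def purge_vector(update_vectors):
    """update_vectors: list of dicts mapping replica id -> CSN tuple."""
    replica_ids = set().union(*update_vectors)
    return {
        rid: min(uv.get(rid, (0, 0, 0, 0)) for uv in update_vectors)
        for rid in replica_ids
    }

# Two servers' update vectors for replicas 1 and 2 (CSNs as tuples):
uv_s1 = {1: (20, 1, 0, 0), 2: (18, 2, 0, 0)}
uv_s2 = {1: (17, 1, 0, 0), 2: (19, 2, 0, 0)}
pv = purge_vector([uv_s1, uv_s2])
# Updates from replica 1 with CSN <= T17#1#0#0 are safe to purge.
```

As the last comment notes, the purge vector only bounds what may be
discarded; it plays no part in deciding which updates are propagated.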

>and that (b) that CSN vector was to the level of attribute and entry.

A CSN in an update vector is a replication-context-wide value. It is the
CSN of the most recent update in the associated replication context
received from the corresponding server.

>Thus,
>I don't see the problem.

The critical bit I missed was that in the case where R2 is a complete
subordinate subtree of R1 the current architecture avoids losing updates
by making R2 a separate replication context. However, R2 can also be a
fractional replica or, in future, a sparse replica (although the latest
version of the information model effectively allows sparse replicas by
allowing non-trivial subtree specifications).

If R2 is a fractional replica then R1 and R2 share the same replication
context and therefore there is only one update vector for each of S1, S2
and S3 in my example. Consider the example again, but assume R2 is a
fractional replica of R1. I've appended the example with corrections
as required.

Steven

=======================================================

Consider the following replication topology:

  S1 ====== S2
    \      /
     \    /
      \  /
       S3

Servers S1 & S2 hold full copies of replication area R1, which can be
considered to be the entire contents of a particular replication context.
S3 holds replication area R2, a subset of R1, in the same replication
context.
R2 could be a sparse replica or a fractional replica. The exact details
don't matter at this stage. It is enough to recognize that R2 is a subset
of the information in R1, and that both are in the same replication context.

Suppose that there are two successive update operations, U1 & U2, performed
at S2, where U1 affects information in R1 but wholly outside of R2 and U2
is wholly within R2 (and thus also within R1). The CSN allotted to U1 is
less than the CSN allotted to U2.

Suppose S3 and S2 establish replication sessions to exchange updates.
S3 has no changes to send. S2 will send U2 because it is within the scope
of the replication agreement S3 has with S2, but will not send U1.

S3 and S1 then establish replication sessions. S1 has no changes to send.
S3 sends U2 since the CSN for U2 is more recent than the CSN corresponding
to S2 in S1's update vector. S1 will set the CSN corresponding to S2 in
its update vector to be the CSN for U2.

Now, if S2 establishes a replication session with S1 it will send no
updates. In particular, it won't send U1 because the CSN corresponding to
S2 in S1's update vector is already greater than the CSN for U1. In fact,
S1 will never receive U1, so the requirement for all replicas to converge
will not be satisfied. In general, the current LDUP architecture only
works if the replication topology with respect to a particular replication
context has no cycles, or where there are cycles, if the replicas in each
cycle have replication agreements for exactly the same area of replication.
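The failure mode above can be reproduced with a small simulation (a sketch only: server and update names are from the example, integer CSNs and the covers() test are invented simplifications, not LDUP's actual encodings). One update vector per server per replication context, as in the current architecture:

```python
# Sketch of the lost-update scenario: one update vector per server per
# replication context. An update is (name, csn, originator, area); a
# supplier sends an update when its CSN exceeds the consumer's recorded
# CSN for the originator AND the update falls within the consumer's area.

def covers(consumer_area, update_area):
    # R2 is a subset of R1: an update within R2 is also within R1, but an
    # update within R1 only falls outside R2.
    return consumer_area == "R1" or update_area == "R2"

def session(log, uv, areas, supplier, consumer):
    for update in list(log[supplier]):
        name, csn, origin, area = update
        if csn > uv[consumer][origin] and covers(areas[consumer], area):
            log[consumer].append(update)
            uv[consumer][origin] = max(uv[consumer][origin], csn)

def run_example():
    areas = {"S1": "R1", "S2": "R1", "S3": "R2"}
    # U1 and U2 are successive updates at S2; U1 is in R1 but outside R2.
    log = {"S1": [], "S3": [],
           "S2": [("U1", 1, "S2", "R1"), ("U2", 2, "S2", "R2")]}
    uv = {s: {"S1": 0, "S2": 0, "S3": 0} for s in areas}
    session(log, uv, areas, "S2", "S3")  # S3 receives U2 only
    session(log, uv, areas, "S3", "S1")  # S1 receives U2 via S3
    session(log, uv, areas, "S2", "S1")  # S2 sends nothing: UV already past U1
    return log, uv

log, uv = run_example()
assert all(u[0] != "U1" for u in log["S1"])  # U1 never reaches S1
```

Running the three sessions in the order described leaves S1 holding U2 but never U1, so the replicas of R1 cannot converge.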

However, we can get around this restriction by maintaining an update
vector and replica ID per replication area (per replication context) for
which a server has a replication agreement, instead of a single update
vector and single replica ID per replication context per server.

A single replication context is assumed in what follows.
Let UV(S,R) be a reference to the update vector maintained by
server S for replication area R. It becomes convenient at this point to
have globally unique identifiers for replication areas, e.g. R.
ASIDE: It also makes sense to have replication area descriptions as distinct
managed objects, and for replication agreement objects to just reference
a replication area by its unique identifier, instead of itself describing
the information to be replicated.

Suppose there is a server, S, with replication agreements for replication
area, R. We require a replica ID to uniquely identify the copy of the
information in R maintained by S. It is convenient for the purposes of
this discussion to use the notation S.R for that replica ID.

Let T be some other server and let Q be a replication area maintained by T.
An element in UV(S,R) for replica T.Q with the CSN value, C, is an assertion
that S has received from T.Q all updates to R with CSNs less than or equal
to C.

If all such updates have been received then it is also true that S has
received from T.Q all updates (with CSNs less than or equal to C) to
every replication area P, where P is a subset of R.

All the client updates processed by T.Q must be within replication area Q,
so if Q is a subset of, or the same as, R then S has received from T.Q all
updates (with CSNs less than or equal to C) to every replication area P,
where P is a superset of R.

We can use these results to obtain the following rule for maintaining
multiple update vectors in the one server, which for the sake of argument
I will call the update vector cascade rule:

  Given that S is receiving updates for replication area R, when S
  receives an update with a CSN containing a replica ID of T.Q it shall
  revise the CSN corresponding to T.Q in UV(S,R) and in every UV(S,P)
  where P is a subset of R. If Q is a subset of, or the same as, R then
  S shall revise the CSN corresponding to T.Q in every UV(S,P) where P
  is a superset of R.
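The cascade rule can be sketched directly (an illustrative sketch, not draft text: the uv mapping, integer CSNs, and the precomputed subsets/supersets tables are all assumed structures, per the aside about replication area objects above):

```python
# uv maps (server, area) to {replica_id: csn}; subsets/supersets are
# precomputed from the replication area objects. replica_id is (T, Q).

def cascade(uv, subsets, supersets, S, R, origin, csn):
    """Revise S's update vectors after S, receiving updates for area R,
    sees an update whose CSN carries the replica ID origin = (T, Q)."""
    T, Q = origin
    # always revise UV(S,R) and every UV(S,P) where P is a subset of R
    for P in [R] + subsets[R]:
        if (S, P) in uv:
            uv[(S, P)][origin] = max(uv[(S, P)].get(origin, 0), csn)
    # if Q is a subset of, or the same as, R, also revise every superset
    if Q == R or Q in subsets[R]:
        for P in supersets[R]:
            if (S, P) in uv:
                uv[(S, P)][origin] = max(uv[(S, P)].get(origin, 0), csn)

# R2 is a subset of R1
subsets = {"R1": ["R2"], "R2": []}
supersets = {"R1": [], "R2": ["R1"]}
uv = {("S1", "R1"): {}, ("S1", "R2"): {}}

# an update for area R2 originated at S2.R2 revises both of S1's vectors
cascade(uv, subsets, supersets, "S1", "R2", ("S2", "R2"), 5)
# an update for area R2 originated at S2.R1 must NOT touch UV(S1,R1)
cascade(uv, subsets, supersets, "S1", "R2", ("S2", "R1"), 7)
```

The two calls at the end correspond to the two cases worked through later: an origin replica within R2 cascades upward to UV(S1,R1), while an origin replica of R1 arriving via an R2 session does not.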

For each update we need to be able to determine the replication area to
which it has been applied. Provided the replica and replication area
administrative objects are available, a lookup using the replica ID
in the CSN associated with the update can give us the replication area. We
also
need to be able to determine the supersets and subsets of the replication
area. This can be precalculated and cached from examination of the
replication area objects.

Now I'll show how the new architecture supports the topology of the
original example. Conceptually, S1 will now hold two replicas S1.R1 and
S1.R2, and two update vectors UV(S1,R1) and UV(S1,R2). S2 will now hold
two replicas S2.R1 and S2.R2, and two update vectors UV(S2,R1) and
UV(S2,R2). S3 holds only one replica S3.R2 and one update vector UV(S3,R2).

The update U1 is within R1 but outside R2 so this update is necessarily
applied to the replica S2.R1. The replica ID in the CSN for U1 will be
S2.R1.

In applying the update U2, S2 has a choice between replicas S2.R1 and
S2.R2 since U2 is within both R1 and R2. The detailed steps following
each choice are different, but the final outcome is always the same.
Note that S2 doesn't hold duplicates of all the entries and attributes
in R2. U2 acts on the same instance of the target entry and its attributes
regardless of the selected replica. The only material difference is the
replica ID that goes into the CSN generated for U2.

Firstly, I'll run through what happens if S2 chooses to apply U2 within R2.
The replica ID in the CSN for U2 will be S2.R2. S2 will set the CSN
corresponding to S2.R2 in UV(S2,R2) to be the CSN for U2. It will also
set the CSN corresponding to S2.R2 in UV(S2,R1) to be the CSN for U2.
This is the result of S2 applying the cascade rule to itself (R = Q = R2,
S = T = S2).

S3 and S2 establish replication sessions to exchange updates to replication
area R2. As before, S3 has no changes to send, and S2 will send U2 but
will not send U1. S3 sets the CSN corresponding to S2.R2 in UV(S3,R2) to
the CSN for U2.

S3 and S1 establish replication sessions to exchange updates to replication
area R2. S1 has no changes to send, as before. S3 sends U2 since the CSN
on U2 is more recent than the CSN corresponding to S2.R2 in UV(S1,R2).
S1 will set the CSN corresponding to S2.R2 in UV(S1,R2) to be the CSN for
U2. S1 will also set the CSN corresponding to S2.R2 in UV(S1,R1) by
application of the cascade rule (S = S1, T = S2, R = Q = R2).

If S2 establishes a replication session with S1 to send updates to
replication area R1 it will obtain UV(S1,R1). S2 will send U1 to S1 since
the CSN for U1 is greater than the CSN corresponding to S2.R1 in UV(S1,R1).
It won't send U2 since the CSN corresponding to S2.R2 in UV(S1,R1) already
has the value of the CSN for U2.

So S3 gets U2 and S1 gets both U1 and U2, exactly as it should be.
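The walkthrough above can be run end to end as a simulation (again a sketch with invented integer CSNs and assumed data structures; the cascade function restates the cascade rule given earlier):

```python
# Per-area update vectors plus the cascade rule, with S2 choosing to
# apply U2 within R2, as in the walkthrough above.

subsets = {"R1": ["R2"], "R2": []}     # R2 is a subset of R1
supersets = {"R1": [], "R2": ["R1"]}
held = {"S1": ("R1", "R2"), "S2": ("R1", "R2"), "S3": ("R2",)}
uv = {(s, r): {} for s in held for r in held[s]}
log = {s: [] for s in held}

def cascade(S, R, origin, csn):
    T, Q = origin
    for P in [R] + subsets[R]:                  # R and its subsets
        if (S, P) in uv:
            uv[(S, P)][origin] = max(uv[(S, P)].get(origin, 0), csn)
    if Q == R or Q in subsets[R]:               # supersets, when Q <= R
        for P in supersets[R]:
            if (S, P) in uv:
                uv[(S, P)][origin] = max(uv[(S, P)].get(origin, 0), csn)

def session(supplier, consumer, R):
    # the supplier consults UV(consumer, R) and sends only updates within R
    for update in list(log[supplier]):
        name, csn, origin, area = update
        within_R = (area == R or area in subsets[R])
        if within_R and csn > uv[(consumer, R)].get(origin, 0):
            log[consumer].append(update)
            cascade(consumer, R, origin, csn)

# U1 is applied within R1; S2 chooses to apply U2 within R2
for u in (("U1", 1, ("S2", "R1"), "R1"), ("U2", 2, ("S2", "R2"), "R2")):
    log["S2"].append(u)
    cascade("S2", u[3], u[2], u[1])

session("S2", "S3", "R2")   # S3 receives U2 only
session("S3", "S1", "R2")   # S1 receives U2; cascade revises UV(S1,R1) too
session("S2", "S1", "R1")   # S2 sends U1 but not U2 (no duplicate)
```

After the last session, S1's log holds exactly U1 and U2 (each once), matching the outcome stated above.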

Now, I'll run through what happens if S2 chooses to apply U2 within R1.
The replica ID in the CSN for U2 will be S2.R1. S2 will set the CSN
corresponding to S2.R1 in UV(S2,R1) to be the CSN for U2. It will also
set the CSN corresponding to S2.R1 in UV(S2,R2) to be the CSN for U2.
This is the result of S2 applying the cascade rule to itself (R = Q = R1,
S = T = S2).

S3 and S2 establish replication sessions to exchange updates to replication
area R2. As before, S3 has no changes to send, and S2 will send U2 but
will not send U1. S3 sets the CSN corresponding to S2.R1 in UV(S3,R2) to
the CSN for U2.

S3 and S1 establish replication sessions to exchange updates to replication
area R2. S1 has no changes to send, as before. S3 sends U2 since the CSN
on U2 is more recent than the CSN corresponding to S2.R1 in UV(S1,R2).
S1 will set the CSN corresponding to S2.R1 in UV(S1,R2) to be the CSN for
U2. Application of the cascade rule (S = S1, T = S2, R = R2, Q = R1)
results in NO changes to UV(S1,R1). We now have the situation that U2 is
notionally present in the replica S1.R2 but not in the replica S1.R1. As
I indicated earlier, a server doesn't hold duplicates of entries and
attributes that are in multiple replication areas. If S1 engages in any
replication sessions with other servers for the replication area R1 it
must exclude any changes with CSNs greater than the relevant CSN in
UV(S1,R1). This includes U2 when it has been originally applied within R1
and so far only received via S3.

If S2 establishes a replication session with S1 to send updates to
replication area R1 it will obtain UV(S1,R1). S2 will send both U1 and U2
to S1 since the CSNs for U1 and U2 are greater than the CSN corresponding
to S2.R1 in UV(S1,R1). S1 will receive U2 twice but URP will quickly
ignore the duplicate. Importantly, S1 will set the CSN corresponding to
S2.R1 in UV(S1,R1) to be the CSN for U2.

We can avoid the duplication if we arrange for S2 to obtain both UV(S1,R1)
and UV(S1,R2) (in general, any UV(S1,P) where P is a subset of R1) at the
start of the replication session, but we must still make some provision
for UV(S1,R1) to be revised correctly. It will be easier to just accept
that there may be some harmless duplication.

This example is too simple and narrow to show it, but in general, applying
an update to the smallest subset replication area (e.g. R2 instead of R1)
allows it to propagate through more paths, more quickly and with less
duplication.

The following topology is more interesting but I'll leave example
walkthroughs as an exercise for the reader.

       S0
      /  \
     /    \
    /      \
  S1        S2
    \      /
     \    /
      \  /
       S3

Servers S0, S1 and S2 hold full copies of replication area R1 in some
replication context. S3 holds replication area R2, a subset of R1,
in the same replication context.

In this situation S0 only has replication agreements for R1 and therefore
only needs to maintain one update vector UV(S0,R1). It doesn't need to
bother with R2. This is a particularly useful result because it means that
a server, having entered into a replication agreement with some peer
server, isn't significantly affected by the replication agreements the peer
server might make with yet other servers.


The extended architecture described here also provides a mechanism for
supporting expedited changes, which aren't possible in the current
architecture. The information whose changes are to be expedited is set up
as a subset replication area. Additional replication agreements
are established with the peer servers for this subset replication area,
presumably with on-change replication schedules. Updates to the subset
area get propagated immediately, while other updates propagate less
frequently, but eventually all updates get through.

The extended architecture can also handle replication areas that are
subordinate subtrees in a replication context without needing to make
the subordinate subtree a separate replication context.








From owner-ietf-ldup@mail.imc.org  Wed Dec 19 10:33:46 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id KAA29482
	for <ldup-archive@odin.ietf.org>; Wed, 19 Dec 2001 10:33:45 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fBJF9hZ26517
	for ietf-ldup-bks; Wed, 19 Dec 2001 07:09:43 -0800 (PST)
Received: from e1.ny.us.ibm.com (e1.ny.us.ibm.com [32.97.182.101])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fBJF9f226513
	for <ietf-ldup@imc.org>; Wed, 19 Dec 2001 07:09:41 -0800 (PST)
Received: from northrelay02.pok.ibm.com (northrelay02.pok.ibm.com [9.117.200.22])
	by e1.ny.us.ibm.com (8.9.3/8.9.3) with ESMTP id KAA445948;
	Wed, 19 Dec 2001 10:06:44 -0500
Received: from d27ml001.rchland.ibm.com (d27ml001.rchland.ibm.com [9.5.39.28])
	by northrelay02.pok.ibm.com (8.11.1m3/NCO v5.01) with ESMTP id fBJF9VB134362;
	Wed, 19 Dec 2001 10:09:32 -0500
Subject: RE: Supporting Partial Replication
To: <steven.legg@adacel.com.au>
Cc: ietf-ldup@imc.org
X-Mailer: Lotus Notes Release 5.0.9  November 16, 2001
Message-ID: <OFFCC1B498.F0DA4D8F-ON86256B27.004F1068@rchland.ibm.com>
From: "John McMeeking" <jmcmeek@us.ibm.com>
Date: Wed, 19 Dec 2001 09:13:24 -0600
X-MIMETrack: Serialize by Router on d27ml001/27/M/IBM(Build M10_08082001 Beta 3|August
 08, 2001) at 12/19/2001 09:13:24 AM
MIME-Version: 1.0
Content-type: multipart/alternative; 
	Boundary="0__=09BBE1B4DFDC96F88f9e8a93df938690918c09BBE1B4DFDC96F8"
Content-Disposition: inline
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


--0__=09BBE1B4DFDC96F88f9e8a93df938690918c09BBE1B4DFDC96F8
Content-type: text/plain; charset=US-ASCII

See responses marked <JAM>

John  McMeeking



                                                                                                                           
"Steven Legg" <steven.legg@adacel.com.au>
12/17/2001 10:59 PM
Please respond to steven.legg
To: John McMeeking/Rochester/IBM@IBMUS
cc: <ietf-ldup@imc.org>
Subject: RE: Supporting Partial Replication




John,

John McMeeking wrote:
> I agree with Ryan on the first part of the note (I haven't gotten through
> the flurry of LDUP notes).

See my reply to Ryan for corrections and clarifications.

> But, we may have a problem with update vectors
> under partial/fractional replication.

Yes.

> Update vectors are defined per replica and per replication context
> (actually, defined per replica, where a replica is a replicated instance
> of a replication context).
>
> If R2 is a subtree of R1, separate update vectors are maintained for both
> R2 and R1, and the problem Steven describes doesn't occur.

Yes (doh!).

> For either S1
> or S2 to replicate U1 to S3, replication context R1 must be added to S3,
> and the replication context properly initialized on S3 -- either via a
> full update replication session, or via some other means (i.e. LDIF). At that
> point, U1 (and the rest of the entries in R1) are present on S3 and
> replication continues normally. Replication of R1 is independent of R2.

You're alluding to the flip side of what I'm saying. If the current
architecture can't support a replication topology where the servers
in a cycle hold different replication areas in the same replication
context then the choices are to not replicate, or to force all the
servers in the cycle to have the same replication area(s). Too bad
if I don't want S3 to see stuff in R1.

<JAM>
Are you talking about setting up something like this?
Server S1 hold R1 and R2
Server S2 holds R2
Server S3 holds R1 and R2
Set up replication agreements such that S1 supplies S2, S2 supplies S3 and
S3 supplies S1.

As defined (and I think we agree this is the current behavior), LDUP allows
this to be done only for R2.  As S2 does not hold R1, you can not set up
replication for R1 to/from S2.  As I understand it, the agreements for R1
and R2 are completely independent.  For example, if I add R2 to S2, and
then set up the cycle described above, there would be at least 6
replication agreements S1->S2(R1), S1->S2(R2), S2->S3(R1), ...  Going back
to the scenario described above, under LDUP you would set up two
independent cycles: S1->S3->S1 (for R1) and S1->S2->S3->S1 (for R2).

I don't see a problem.
</JAM>

> If R2 is a sparse/fractional replica of R1, R2 would not be considered a
> separate replication context. In this case, sparse/fractional replication
> is an attribute of the replicaSubentry for S3. If U1 falls within the
> attributes and/or entries specified for S3, it will be replicated under
> the replication agreements targeting S3 under R1, and the UV for S3
> updated accordingly.
>
> What happens when S3 is a fractional replica, and U1 does not contain any
> attributes replicated to S3? draft-ietf-ldup-model-06, section 8.2,
> specifies "When fully populating or incrementally bringing up to date a
> Fractional Replica each of the Replication Updates must only
> contain updates to the attributes in the Fractional Entry Specification."
> This implies that S3 will never see U1, and thus not fully update its
> update vector until such time as it receives an update originating at the
> same server.

Do you agree that S1 will also never see U1 ?
This breaks eventual convergence.

<JAM>

Okay, now I think I understand...  Let me restate this scenario:
S1 holds full replica of R1
S2 hold full replica of R1
S3 holds fractional replica of R1
Replication agreements are defined such that S1 supplies S3, S3 supplies
S2, and S2 supplies S1.

Under such a configuration, U1 is not seen by S3, as S1 doesn't replicate
it to S3.  Before proceeding, let me restate that there is a difference
between holding a subtree of an area of replication and holding a
fractional replica.  As I understand it, holding a subtree implies the
existence of another area of replication corresponding to that subtree --
as opposed to a sparse replica (not supported by the ldup model) which
holds some entries in an area of replication.

I see three solutions to the problem you describe:

1.  Replace the restriction in ldup-model-06 8.2 such that all updates are
sent to fractional replicas.  When acting as a supplier, a fractional
replica replicates all replication updates, even those that are not within
the set of attributes held by the fractional replica.  Also, the fractional
replica is responsible for applying only those update primitives that are
within the fractional replica specification.

I think this would cause major problems for state-based implementations.
It seems reasonable for log-based implementations.

2.  Add a restriction to the model & info model to the effect that a
fractional replica cannot act as a supplier in LDUP.

In your scenario that implies S3 cannot be a supplier to S1.  Thus S2 must
be a supplier to S1, and U1 and U2 are both replicated from S2 to S1.  I'm
not sure how this would be done -- either the configuration is rejected
(preferred), or a fractional replica simply ignores requests to act as a
supplier.  I prefer rejecting the configuration -- why let someone set up a
replication path that will never be used?

3.  Add a restriction that a fractional replica can act as a supplier only
to another fractional replica, where the consumer's fractional specification
is a subset of the supplier's fractional specification (i.e. the supplier
replica holds all entries/attributes held by the consumer, and may hold
more).

For your scenario, this would preclude S3 acting as a supplier to S2 (S2 -
a full replica - does not hold a subset of the attributes held by S3).  I'm
not sure where/when this restriction would be enforced.  It seems that
either the configuration has to be rejected outright -- topic for
management draft -- or that a supplier would have to evaluate the
fractional specifications (if any) for itself and the consumer and
determine whether it should, in fact, use the agreement at all.

Assuming state-based replication remains in the standards, I think (2)
would be a much cleaner solution, and most easily implemented.

</JAM>

> Steven's example described U1 and U2 as successive updates
> originating at S2. Suppose U1 originated at S1 and U2 originated at S2:
>
> Initially, the update vector for S3 (UV3) looks like
> < Tx#1#0#0, Ty#2#0#0, null > (latest CSNs from S1 and S2, no updates
> originating at S3)

BTW, you have the second and third components of the CSNs transposed.

> At time T15, on server 1, client performs update U1: CSN = T15#1#0#0.
>
> This is replicated to S2, and eventually S2 replicates to S3. But since
> no attributes in U1 are present in S3's fractional entry specification,
> no replication occurs.
>
> Update vector for S3 remains < Tx#1#0#0, Ty#2#0#0, null >

There are some subtleties in the way that the update vector is maintained
that are not explicitly called out by the architecture draft. The update
U1 at S1 causes the update vector for S1 to be revised, but since the
CSN < T15#1#0#0 > in the update vector for S1 is an attribute value, it is
tagged with a CSN! Common sense suggests this CSN should also be
< T15#1#0#0 >.
Changes to the update vector are always replicated (as add-attribute-value
primitives), so S3 will receive the change to S1's update vector because
of U1, even though it doesn't receive U1. On receiving the
add-attribute-value primitive for S1's update vector, S3 will change its
copy of S1's update vector. It will also change the CSN for S1 in its own
update vector to < T15#1#0#0 >, the CSN on the add-attribute-value
primitive.

The update vector for S3 actually becomes < T15#1#0#0, Ty#2#0#0, null >.
The purge point advances in due course.
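The two-step revision just described can be sketched as follows (the CSN strings follow the notation used in this thread; the maintenance semantics are an assumption drawn from the discussion, since the architecture draft doesn't spell them out):

```python
# How S3 advances its own entry for S1 without ever receiving U1: the
# change to S1's update vector is itself replicated as an
# add-attribute-value primitive tagged with a CSN.

def receive_uv_change(uv3, origin_server, new_copy, primitive_csn):
    """S3 applies a replicated add-attribute-value primitive carrying a
    change to another server's update vector."""
    uv3["copies"][origin_server] = new_copy    # revise S3's copy of the peer's UV
    uv3["own"][origin_server] = primitive_csn  # advance S3's own CSN for the peer

# S3's update vector before: < Tx#1#0#0, Ty#2#0#0, null >
uv3 = {"own": {"S1": "Tx#1#0#0", "S2": "Ty#2#0#0", "S3": None}, "copies": {}}

# U1 at S1 (CSN T15#1#0#0) revises S1's update vector; that change is
# replicated to S3 even though U1 itself is not.
receive_uv_change(uv3, "S1", {"S1": "T15#1#0#0"}, "T15#1#0#0")
```

S3's own vector then reads < T15#1#0#0, Ty#2#0#0, null >, as stated above.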

Regards,
Steven




--0__=09BBE1B4DFDC96F88f9e8a93df938690918c09BBE1B4DFDC96F8--



From owner-ietf-ldup@mail.imc.org  Fri Dec 21 01:12:12 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id BAA02283
	for <ldup-archive@odin.ietf.org>; Fri, 21 Dec 2001 01:12:11 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fBL5pkb14590
	for ietf-ldup-bks; Thu, 20 Dec 2001 21:51:46 -0800 (PST)
Received: from nexus.adacel.com (shelob.adacel.com.au [203.36.26.146] (may be forged))
	by above.proper.com (8.11.6/8.11.3) with SMTP id fBL5ph214586
	for <ietf-ldup@imc.org>; Thu, 20 Dec 2001 21:51:44 -0800 (PST)
Received: (qmail 29311 invoked from network); 21 Dec 2001 05:45:37 -0000
Received: from unknown (HELO osmium) (10.32.24.165)
  by nexus.adacel.com with SMTP; 21 Dec 2001 05:45:37 -0000
Reply-To: <steven.legg@adacel.com.au>
From: "Steven Legg" <steven.legg@adacel.com.au>
To: "'John McMeeking'" <jmcmeek@us.ibm.com>
Cc: <ietf-ldup@imc.org>
Subject: RE: Supporting Partial Replication
Date: Fri, 21 Dec 2001 16:51:53 +1100
Message-ID: <003301c189e3$984f2630$a518200a@osmium.mtwav.adacel.com.au>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="iso-8859-1"
Content-Transfer-Encoding: 7bit
X-Priority: 3 (Normal)
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook 8.5, Build 4.71.2377.0
Importance: Normal
X-MimeOLE: Produced By Microsoft MimeOLE V4.72.2120.0
In-reply-to: <OFFCC1B498.F0DA4D8F-ON86256B27.004F1068@rchland.ibm.com>
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>
Content-Transfer-Encoding: 7bit



John,

John McMeeking wrote:
> See responses marked <JAM>

> John,
>
> John McMeeking wrote:
> > For either S1
> > or S2 to replicate U1 to S3, replication context R1 must be added to S3,
> > and the replication context properly initialized on S3 -- either via a
> > full update replication session, or via some other means (e.g. LDIF).
> > At that point, U1 (and the rest of the entries in R1) are present on S3 and
> > replication continues normally. Replication of R1 is independent of R2.
>
> You're alluding to the flip side of what I'm saying. If the current
> architecture can't support a replication topology where the servers
> in a cycle hold different replication areas in the same replication
> context then the choices are to not replicate, or to force all the
> servers in the cycle to have the same replication area(s). Too bad
> if I don't want S3 to see stuff in R1.
>
> <JAM>
> Are you talking about setting up something like this?

No. The choices are:

1) don't replicate, i.e. break the cycle by throwing out S3 (in my original
example),

2) force all servers in the cycle to have the same replication area,
i.e. S1 holds R1, S2 holds R1 and S3 holds R1, forget about R2,

3) change the LDUP architecture to support the original topology.


> Server S1 holds R1 and R2
> Server S2 holds R2
> Server S3 holds R1 and R2
> Set up replication agreements such that S1 supplies S2, S2 supplies S3
> and S3 supplies S1.
>
> As defined (and I think we agree this is the current behavior), LDUP
> allows this to be done only for R2. As S2 does not hold R1, you can
> not set up replication for R1 to/from S2. As I understand it, the
> agreements for R1 and R2 are completely independent. For example, if
> I add R1 to S2, and then set up the cycle described above, there would
> be at least 6 replication agreements S1->S2(R1), S1->S2(R2), S2->S3(R1),
> ... Going back to the scenario described above, under LDUP you would
> set up two independent cycles: S1->S3->S1 (for R1) and S1->S2->S3->S1
> (for R2).

They're not independent since S1 and S2 each have a single update vector
for both R1 and R2 in the current architecture. Events in one cycle
affect the other.

>
> I don't see a problem.
> </JAM>
>
> > If R2 is a sparse/fractional replica of R1, R2 would not be considered a
> > separate replication context. In this case, sparse/fractional replication
> > is an attribute of the replicaSubentry for S3. If U1 falls within the
> > attributes and/or entries specified for S3, it will be replicated under
> > the replication agreements targeting S3 under R1, and the UV for S3
> > updated accordingly.
> >
> > What happens when S3 is a fractional replica, and U1 does not contain any
> > attributes replicated to S3? draft-ietf-ldup-model-06, section 8.2,
> > specifies "When fully populating or incrementally bringing up to date a
> > Fractional Replica each of the Replication Updates must only
> > contain updates to the attributes in the Fractional Entry Specification."
> > This implies that S3 will never see U1, and thus not fully update its
> > update vector until such time as it receives an update originating at
> > the same server.
>
> Do you agree that S1 will also never see U1?
> This breaks eventual convergence.
>
> <JAM>
>
> Okay, now I think I understand... Let me restate this scenario:
> S1 holds full replica of R1
> S2 holds full replica of R1
> S3 holds fractional replica of R1
> Replication agreements are defined such that S1 supplies S3, S3 supplies S2,
> and S2 supplies S1.

I've assumed symmetry in the replication agreements for my original example,
so the topology is an undirected graph. S1 supplies S2, S1 supplies S3,
S2 supplies S1, S2 supplies S3, S3 supplies S1 and S3 supplies S2. The
subset of these agreements that are significant to the example are S2
supplies S1, S2 supplies S3 and S3 supplies S1. The other agreements are
invoked but end up sending nothing new.

>
> Under such a configuration, U1 is not seen by S3, as S1 doesn't replicate
> it to S3. Before proceeding, let me restate that there is a difference
> between holding a subtree of an area of replication and holding a
> fractional replica. As I understand it, holding a subtree implies the
> existence of another area of replication corresponding to that subtree
> -- as opposed to a sparse replica (not supported by the ldup model) which
> holds some entries in an area of replication.
>
> I see three solutions to the problem you describe:
>
> 1. Replace the restriction in ldup-model-06 8.2 such that all updates are
> sent to fractional replicas. When acting as a supplier, a fractional replica
> replicates all replication updates, even those that are not within the set
> of attributes held by the fractional replica. Also, the fractional replica
> is responsible for applying only those update primitives that are within
> the fractional replica specification.
>
> I think this would cause major problems for state-based implementations.

Agreed. The server has to store the updates "somewhere" so that they
can be forwarded to other servers.
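For a log-based implementation, solution 1 might look roughly like the following sketch (all names invented; this is one possible reading of the proposal, not a prescribed design):

```python
# Sketch of solution 1 for a log-based fractional replica: every
# replication update is logged so it can be forwarded onward, but only
# in-specification attributes are applied locally. Names are invented.

log = []                 # full change log, kept for onward replication
store = {}               # local entry data: attribute -> value
spec = {"cn", "mail"}    # fractional entry specification

def receive(update: dict):
    log.append(update)                    # stored "somewhere" regardless
    for attr, value in update.items():
        if attr in spec:                  # apply only held attributes
            store[attr] = value

def supply():
    # Forward everything, even attributes not held locally -- which is
    # exactly the administrative concern for state-based servers.
    return list(log)

receive({"telephoneNumber": "555-0100"})  # U1: outside the spec
receive({"mail": "u@example.com"})

assert "telephoneNumber" not in store     # not applied locally
assert len(supply()) == 2                 # but still forwarded
```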

> It seems reasonable for log-based implementations.

I would expect there to be administrator concerns regardless of the style
of implementation. One reason for setting up a fractional replica is to
protect certain information held by the supplier from being seen by the
consumer.

>
> 2. Add a restriction to the model & info model to the effect that a
> fractional replica cannot act as a supplier in LDUP.
>
> In your scenario that implies S3 cannot be a supplier to S1. Thus S2 must
> be a supplier to S1 and U1 and U2 are both replicated from S2 to S1. I'm
> not sure how this would be done -- either the configuration is rejected
> (preferred), or a fractional replica simply ignores requests to act as a
> supplier. I prefer rejecting the configuration -- why let someone set up
> a replication path that will never be used?
>
> 3. Add a restriction that a fractional replica can act as a supplier only
> to another fractional replica, where the consumer's fractional specification
> is a subset of the supplier's fractional specification (i.e. the supplier
> replica holds all entries/attributes held by the consumer, and may hold
> more).
>
> For your scenario, this would preclude S3 acting as a supplier to S2
> (the attributes held by S2 - a full replica - are not a subset of those
> held by S3).
> I'm not sure where/when this restriction would be enforced. It seems that
> either the configuration has to be rejected outright -- a topic for the
> management draft -- or that a supplier would have to evaluate the fractional
> specifications (if any) for itself and the consumer and determine whether
> it should, in fact, use the agreement at all.

Solutions 2 and 3 both kill any possibility of updateable sparse and/or
fractional replicas. This seriously limits LDUP's usefulness in database
synchronization since external sources of data with which a directory may
be required to synchronize are likely to be both updateable and
sparse/fractional.

They also outlaw secondary shadowing topologies that are allowed by X.500
replication, and which I already support. For instance, it would not be
possible for a server to shadow portions from two different naming
contexts. In X.500,
administrative areas, e.g. for access control or schema, can and do span
naming contexts (replication contexts in LDUP). The administrative policy
inherited from superior naming contexts is called prefix information and
is included in X.500 replication updates. Prefix information is effectively
a read-only, sparse and fractional copy of information from a superior
naming context. The prefix information for two different naming contexts
will overlap, but neither will be a subset of the other. Solutions 2 and 3
will disallow the prefix information from two such naming contexts to be
replicated to the same shadow DSA.

>
> Assuming state-based replication remains in the standards, I think (2)
> would be a much cleaner solution, and most easily implemented.

... and very limiting. I don't want to have to choose between flexible
replication topologies and multiple masters. I want both.

You didn't enumerate the fourth solution: have an update vector per
replication area in a replication context and use the cascade rule to
maintain the update vectors.
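The difference between a single shared vector and per-area vectors can be illustrated with a toy sketch (hypothetical data structures; the area names and CSN values are invented, and CSNs are simplified to plain integers):

```python
# Toy contrast between one update vector per replication context (the
# current architecture) and one per replication area (the fourth
# solution). With per-area vectors, progress recorded for R2 can no
# longer stand in for R1 state from the same supplier.

# Single UV per context: one CSN per supplier, shared by R1 and R2.
shared_uv = {}                       # supplier id -> latest CSN seen

# Per-area UVs: only the vector for the update's own area advances.
per_area_uv = {"R1": {}, "R2": {}}   # area -> (supplier id -> CSN)

def record(area: str, supplier: int, csn: int):
    shared_uv[supplier] = max(shared_uv.get(supplier, 0), csn)
    uv = per_area_uv[area]
    uv[supplier] = max(uv.get(supplier, 0), csn)

record("R2", 1, 15)   # an R2 update from S1 arrives via the R2 cycle

assert shared_uv[1] == 15                 # claims S1 state for both areas
assert per_area_uv["R1"].get(1) is None   # no R1 state from S1 yet
```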

P.S. I'm about to go off for a short break. I'll respond to any
follow-ups after I get back in two weeks.

Regards,
Steven



From owner-ietf-ldup@mail.imc.org  Fri Dec 21 09:18:13 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id JAA24065
	for <ldup-archive@lists.ietf.org>; Fri, 21 Dec 2001 09:18:13 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fBLDv8N28462
	for ietf-ldup-bks; Fri, 21 Dec 2001 05:57:08 -0800 (PST)
Received: from e1.ny.us.ibm.com (e1.ny.us.ibm.com [32.97.182.101])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fBLDv6228458
	for <ietf-ldup@imc.org>; Fri, 21 Dec 2001 05:57:06 -0800 (PST)
Received: from northrelay02.pok.ibm.com (northrelay02.pok.ibm.com [9.117.200.22])
	by e1.ny.us.ibm.com (8.9.3/8.9.3) with ESMTP id IAA255800;
	Fri, 21 Dec 2001 08:54:03 -0500
Received: from d27ml001.rchland.ibm.com (d27ml001.rchland.ibm.com [9.5.39.28])
	by northrelay02.pok.ibm.com (8.11.1m3/NCO v5.01) with ESMTP id fBLDurU72616;
	Fri, 21 Dec 2001 08:56:54 -0500
Subject: RE: Supporting Partial Replication
To: <steven.legg@adacel.com.au>
Cc: ietf-ldup@imc.org
X-Mailer: Lotus Notes Release 5.0.9  November 16, 2001
Message-ID: <OFB47AF42C.F7EABB1A-ON86256B29.0047D75C@rchland.ibm.com>
From: "John McMeeking" <jmcmeek@us.ibm.com>
Date: Fri, 21 Dec 2001 08:00:42 -0600
X-MIMETrack: Serialize by Router on d27ml001/27/M/IBM(Build M10_08082001 Beta 3|August
 08, 2001) at 12/21/2001 08:00:44 AM
MIME-Version: 1.0
Content-type: multipart/alternative; 
	Boundary="0__=09BBE1BADFD451CC8f9e8a93df938690918c09BBE1BADFD451CC"
Content-Disposition: inline
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


--0__=09BBE1BADFD451CC8f9e8a93df938690918c09BBE1BADFD451CC
Content-type: text/plain; charset=US-ASCII

Before I go on vacation too...

We seem to be operating on different understandings of LDUP with respect to
areas of replication, replication contexts, and update vectors.

You mention a "fourth option", being to maintain "an update vector per
replication area in a replication context and use the cascade rule to
maintain the update vectors."  As I understand LDUP: an area of replication
and a replication context are the same thing, with replication context
being the current LDUP terminology.  They identify an area of the DIT that
is replicated -- a replication context has a single root entry and is
bounded by subordinate replication contexts (ldup-model 3.5 - Terms and
Definitions).  LDUP already defines a separate update vector per
replication context (per replica).  On the surface at least, your "fourth
option" appears to be LDUP as currently defined.

For a server to participate in a cycle (which ldup-replica-req now refers
to as a "replica group" precisely because of the misunderstanding I had
about your use of cycle), the server must hold a copy of the replication
context.  And just to make sure my assumptions about what this means are
clear:
- a replica group is defined in the context of a replication context.  It
is the set of servers that hold instances of a particular area of
replication.  A server may be part of several replica groups.
- replication agreements are defined in the context of a replication
context.

In your example, with three servers and two replication contexts,
agreements between all the servers imply that there are two sets of
agreements between each pair of servers -- a set for each replication
context.  The existence of a replication agreement in one replication
context does not imply a corresponding agreement in other replication
contexts.
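As a toy illustration (server and context names taken from the example; the three-server supply ring is an assumed ordering), independent per-context agreement sets look like:

```python
# Toy enumeration of per-context replication agreements: with three
# servers holding both contexts, a supply ring in each replication
# context yields two independent sets of three agreements (six total,
# matching the "at least 6" count in the quoted text).
holders = {"R1": ["S1", "S2", "S3"], "R2": ["S1", "S2", "S3"]}

agreements = {
    ctx: [(servers[i], servers[(i + 1) % len(servers)])  # supplier -> consumer
          for i in range(len(servers))]
    for ctx, servers in holders.items()
}

assert agreements["R1"] == [("S1", "S2"), ("S2", "S3"), ("S3", "S1")]
assert len(agreements["R1"]) + len(agreements["R2"]) == 6
```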

I'm going to go out on a limb, and guess that part of our misunderstanding
has to do with "overlapping" replication contexts -- you mentioned
replicating ACLs via sparse or fractional replication while having a full
replica of some subtree.  This may be required by ldup-replica-req
(mentioned in terminology, but not in specific requirements), but is
currently listed as a non-objective of ldup-model (section 3.3e).  Would it
be fair to state that you think ldup-model (and friends) needs to address
overlapping replication contexts?  My responses have been in the context of
what ldup-model claims to support, and in that context I seem to be having
a problem understanding your concerns and properly communicating my
understanding.


John  McMeeking



                                                                                                                           
"Steven Legg" <steven.legg@adacel.com.au> wrote on 12/20/2001 11:51 PM
To: John McMeeking/Rochester/IBM@IBMUS
cc: <ietf-ldup@imc.org>
Subject: RE: Supporting Partial Replication








--0__=09BBE1BADFD451CC8f9e8a93df938690918c09BBE1BADFD451CC
Content-type: text/html; charset=US-ASCII
Content-Disposition: inline

<html><body>
<p>Before I go on vacation too...<br>
<br>
We seem to be operating on different understandings of LDUP with respect to areas of replication, replication contexts, and update vectors.<br>
<br>
You mention a &quot;fourth option&quot;, being to maintain &quot;an update vector per replication area in a replication context and use the cascade rule to maintain the update vectors.&quot;  As I understand LDUP: an area of replication and a replication context are the same thing, with replication context being the current LDUP  terminology.  They identify an area of the DIT that is replicated -- a replication context has a single root entry and is bounded by subordinate replication contexts (ldup-model 3.5 - Terms and Definitions).  LDUP already defines a separate update vector per replication context (per replica).  On the surface at least, your &quot;fourth option&quot; appears to be LDUP as currently defined.<br>
<br>
For a server to participate in a cycle (which ldup-replica-req now refers to as a &quot;replica group&quot; precisely because of the misunderstanding I had about your use of cycle), the server must hold a copy of the replication context.  And just to make sure my assumptions about what this means are clear:<br>
- a replica group is defined in the context of a replication context.  It is the servers that hold instances of a particular area of replication.  A server may be part of several replica-groups.<br>
- replication agreements are defined in the context of a replication context.<br>
<br>
In your example, with three servers and two replication contexts, agreements between all the servers imply that there are two sets of agreements between each pair of servers -- a set for each replication context.  The existence of a replication agreement in one replication context does not imply a corresponding agreement in other replication contexts.<br>
<br>
I'm going to go out on a limb, and guess that part of our misunderstanding has to do with &quot;overlapping&quot; replication contexts -- you mentioned replicating ACLs via sparse or fractional replication while having a full replica of some subtree.  This may be required by ldup-replica-req (mentioned in terminology, but not in specific requirements), but is currently listed as a non-objective of ldup-model (section 3.3e).  Would it be fair to state that you think ldup-model (and friends) needs to address overlapping replication contexts?  My responses have been in the context of what ldup-model claims to support, and in that context I seem to be having a problem understanding your concerns and properly communicating my understanding.<br>
<br>
<br>
John  McMeeking<br>
<br>
&quot;Steven Legg&quot; &lt;steven.legg@adacel.com.au&gt; wrote on 12/20/2001 11:51 PM<br>
To: John McMeeking/Rochester/IBM@IBMUS<br>
cc: &lt;ietf-ldup@imc.org&gt;<br>
Subject: RE: Supporting Partial Replication<br>
<br>
<font face="Courier New"><br>
John,<br>
<br>
John McMeeking wrote:<br>
&gt; See responses marked &lt;JAM&gt;<br>
<br>
&gt; John,<br>
&gt;<br>
&gt; John McMeeking wrote:<br>
&gt; &gt; For either S1<br>
&gt; &gt; or S2 to replicate U1 to S3, replication context R1 must be added to S3,<br>
&gt; &gt; and the replication context properly initialized on S3 -- either via a<br>
&gt; full<br>
&gt; &gt; update replication session, or via some other means (i.e. LDIF). At that<br>
&gt; &gt; point, U1 (and the rest of the entries in R1) are present on S3 and<br>
&gt; &gt; replication continues normally. Replication of R1 is independent of R2.<br>
&gt;<br>
&gt; You're alluding to the flip side of what I'm saying. If the current<br>
&gt; architecture can't support a replication topology where the servers<br>
&gt; in a cycle hold different replication areas in the same replication<br>
&gt; context then the choices are to not replicate, or to force all the<br>
&gt; servers in the cycle to have the same replication area(s). Too bad<br>
&gt; if I don't want S3 to see stuff in R1.<br>
&gt;<br>
&gt; &lt;JAM&gt;<br>
&gt; Are you talking about setting up something like this?<br>
<br>
No. The choices are:<br>
<br>
1) don't replicate, i.e. break the cycle by throwing out S3 (in my original<br>
example),<br>
<br>
2) force all servers in the cycle to have the same replication area,<br>
i.e. S1 holds R1, S2 holds R1 and S3 holds R1, forget about R2,<br>
<br>
3) change the LDUP architecture to support the original topology.<br>
<br>
<br>
&gt; Server S1 holds R1 and R2<br>
&gt; Server S2 holds R2<br>
&gt; Server S3 holds R1 and R2<br>
&gt; Set up replication agreements such that S1 supplies S2, S2 supplies S3<br>
&gt; and S3 supplies S1.<br>
&gt;<br>
&gt; As defined (and I think we agree this is the current behavior), LDUP<br>
&gt; allows this to be done only for R2. As S2 does not hold R1, you can<br>
> not set up replication for R1 to/from S2. As I understand it, the
> agreements for R1 and R2 are completely independent. For example, if
> I add R2 to S2, and then set up the cycle described above, there would
> be at least 6 replication agreements S1->S2(R1), S1->S2(R2), S2->S3(R1),
> ... Going back to the scenario described above, under LDUP you would
> set up two independent cycles: S1->S2->S1 (for R1) and S1->S2->S3->S1
> (for R2).

They're not independent since S1 and S2 each have a single update vector
for both R1 and R2 in the current architecture. Events in one cycle
affect the other.

>
> I don't see a problem.
> </JAM>
>
> > If R2 is a sparse/fractional replica of R1, R2 would not be considered a
> > separate replication context. In this case, sparse/fractional replication
> > is an attribute of the replicaSubentry for S3. If U1 falls within the
> > attributes and/or entries specified for S3, it will be replicated under
> > the replication agreements targeting S3 under R1, and the UV for S3 updated
> > accordingly.
> >
> > What happens when S3 is a fractional replica, and U1 does not contain any
> > attributes replicated to S3? draft-ietf-ldup-model-06, section 8.2,
> > specifies "When fully populating or incrementally bringing up to date a
> > Fractional Replica each of the Replication Updates must only
> > contain updates to the attributes in the Fractional Entry Specification."
> > This implies that S3 will never see U1, and thus not fully update its
> > update vector until such time as it receives an update originating at the
> > same server.
>
> Do you agree that S1 will also never see U1?
> This breaks eventual convergence.
>
> <JAM>
>
> Okay, now I think I understand... Let me restate this scenario:
> S1 holds full replica of R1
> S2 holds full replica of R1
> S3 holds fractional replica of R1
> Replication agreements are defined such that S1 supplies S3, S3 supplies S2,
> and S2 supplies S1.

I've assumed symmetry in the replication agreements for my original example,
so the topology is an undirected graph. S1 supplies S2, S1 supplies S3,
S2 supplies S1, S2 supplies S3, S3 supplies S1 and S3 supplies S2. The
subset of these agreements that are significant to the example are S2
supplies S1, S2 supplies S3 and S3 supplies S1. The other agreements are
invoked but end up sending nothing new.

>
> Under such a configuration, U1 is not seen by S3, as S1 doesn't replicate
> it to S2. Before proceeding, let me restate that there is a difference
> between holding a subtree of an area of replication and holding a
> fractional replica. As I understand it, holding a subtree implies the
> existence of another area of replication corresponding to that subtree
> -- as opposed to a sparse replica (not supported by the ldup model) which
> holds some entries in an area of replication.
>
> I see three solutions to the problem you describe:
>
> 1. Replace the restriction in ldup-model-06 8.2 such that all updates are
> sent to fractional replicas. When acting as a supplier, a fractional replica
> replicates all replication updates, even those that are not within the set
> of attributes held by the fractional replica. Also, the fractional replica
> is responsible for applying only those update primitives that are within
> the fractional replica specification.
>
> I think this would cause major problems for state-based implementations.

Agreed. The server has to store the updates "somewhere" so that they
can be forwarded to other servers.

> It seems reasonable for log-based implementations.

I would expect there to be administrator concerns regardless of the style
of implementation. One reason for setting up a fractional replica is to
protect certain information held by the supplier from being seen by the
consumer.

>
> 2. Add a restriction to the model & info model to effect that a fractional
> replica cannot act as a supplier in LDUP.
>
> In your scenario that implies S3 cannot be a supplier to S1. Thus S2 must
> be a supplier to S1 and U1 and U2 are both replicated from S2 to S1. I'm
> not sure how this would be done -- either the configuration is rejected
> (preferred), or a fractional replica simply ignores requests to act as a
> supplier. I prefer rejecting the configuration -- why let someone set up
> a replication path that will never be used?
>
> 3. Add a restriction that a fractional replica can act as a supplier only
> to another fractional replica, where the consumer's fractional specification
> is a subset of the supplier's fractional specification (i.e. the supplier
> replica holds all entries/attributes held by the consumer, and may hold
> more).
>
> For your scenario, this would preclude S3 acting as a supplier to S2
> (S2 - a full replica - does not hold a subset of the attributes held by S3).
> I'm not sure where/when this restriction would be enforced. It seems that
> either the configuration has to be rejected outright -- topic for management
> draft -- or that a supplier would have to evaluate the fractional
> specifications (if any) for itself and the consumer and determine whether
> it should, in fact, use the agreement at all.

Solutions 2 and 3 both kill any possibility of updateable sparse and/or
fractional replicas. This seriously limits LDUP's usefulness in database
synchronization since external sources of data with which a directory may
be required to synchronize are likely to be both updateable and
sparse/fractional.

They also outlaw secondary shadowing topologies allowed by X.500 replication,
and which I already support. For instance, it would not be possible for a
server to shadow portions from two different naming contexts. In X.500,
administrative areas, e.g. for access control or schema, can and do span
naming contexts (replication contexts in LDUP). The administrative policy
inherited from superior naming contexts is called prefix information and
is included in X.500 replication updates. Prefix information is effectively
a read-only, sparse and fractional copy of information from a superior
naming context. The prefix information for two different naming contexts
will overlap, but neither will be a subset of the other. Solutions 2 and 3
will disallow the prefix information from two such naming contexts to be
replicated to the same shadow DSA.

>
> Assuming state-based replication remains in the standards, I think (2)
> would be a much cleaner solution, and most easily implemented.

... and very limiting. I don't want to have to choose between flexible
replication topologies and multiple masters. I want both.

You didn't enumerate the fourth solution: have an update vector per
replication area in a replication context and use the cascade rule to
maintain the update vectors.

P.S. I'm about to go off for a short break. I'll respond to any
follow-ups after I get back in two weeks.

Regards,
Steven



From owner-ietf-ldup@mail.imc.org  Fri Dec 21 12:23:05 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id MAA27068
	for <ldup-archive@odin.ietf.org>; Fri, 21 Dec 2001 12:23:00 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fBLH6Xm12868
	for ietf-ldup-bks; Fri, 21 Dec 2001 09:06:33 -0800 (PST)
Received: from smtp.oncalldba.com (roc-24-169-98-153.rochester.rr.com [24.169.98.153])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fBLH6V212861
	for <ietf-ldup@imc.org>; Fri, 21 Dec 2001 09:06:32 -0800 (PST)
Received: from RMINC_DOM-MTA by smtp.oncalldba.com
	with Novell_GroupWise; Fri, 21 Dec 2001 11:55:35 -0700
Message-Id: <sc232337.002@smtp.oncalldba.com>
X-Mailer: Novell GroupWise Internet Agent 6.0
Date: Fri, 21 Dec 2001 11:55:29 -0700
From: "Ed Reed" <eer@OnCallDBA.COM>
To: <steven.legg@adacel.com.au>, <jmcmeek@us.ibm.com>
Cc: <ietf-ldup@imc.org>
Subject: RE: Supporting Partial Replication
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
X-MIME-Autoconverted: from quoted-printable to 8bit by above.proper.com id fBLH6W212864
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>
Content-Transfer-Encoding: 8bit


John - I think you're onto the right track in determining where the disconnect
exists.  

To begin, note that the model only talks about fractional, and not sparse,
replicas.  

Fractional replicas are those that hold entries having a subset
of all the attributes defined on complete replicas holding those entries.  All
entries of a replication context are present in a fractional replica, but not
all of their attributes are there.  It is the entries which are fractional.

Sparse replicas, we agreed, are those where not every entry of the
replication context is held on the sparse replica.  But every entry
that IS held is complete.  It is the namespace that is sparse.

A Sparse and/or Fractional replica (an incomplete replica, in earlier drafts
of the model) holds some of the entries of the replication context, and they
may or may not be complete (i.e., some entries may be fractional entries).
In other words, the namespace is sparse and the entries held may be
fractional.
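The distinction can be sketched in a few lines of Python. This is a hypothetical illustration; the entry data and helper names are invented, not taken from any LDUP draft:

```python
# Invented sample data: a complete replica's entries and attributes.
FULL = {
    "cn=alice": {"cn": "alice", "mail": "a@example.com", "userPassword": "x"},
    "cn=bob":   {"cn": "bob",   "mail": "b@example.com", "userPassword": "y"},
}

def fractional(entries, kept_attrs):
    """Every entry is present, but each holds only a subset of attributes."""
    return {dn: {a: v for a, v in attrs.items() if a in kept_attrs}
            for dn, attrs in entries.items()}

def sparse(entries, kept_dns):
    """Only some entries are present, but each held entry is complete."""
    return {dn: dict(attrs) for dn, attrs in entries.items() if dn in kept_dns}
```

With `fractional(FULL, {"cn", "mail"})` every DN survives but `userPassword` is gone; with `sparse(FULL, {"cn=alice"})` only one DN survives, with all its attributes intact.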

We dropped sparse replicas from the Arch Model when we couldn't
persuade ourselves that we knew how to deal with update vectors and
purge vectors for sparse replicas.  

Now, I've never contemplated a scenario in which a server holds two
replicas of the same replication context.  There may be utility in
such a scenario, but it has never occurred to me.  Thus, there is one
update vector for the replication context, on the replicaSubentry.

The LDUP model in my mind partitions naming contexts with replication
contexts.  Note that both naming contexts and replication contexts are
subsets of the DIT, not of any particular DIB.  A replica is a set of
entries from a naming context held in a particular DIB by a particular
LDAP server.  There may be multiple replicas of any replication context.

But as I said, it had never occurred to me to consider a DSA holding
multiple replicas of the same replication context in its DIB.

Rather, I assumed that a DSA would be considered to hold in its DIB
all the entries from a replication context that it needed for its own
purposes, and for the purposes of forwarding updates to other servers
that it supplies updates to, if any.

So, subordinate to the replicaSubentry representing the replica in the
DIB of a DSA, there could be several replicationAgreements - some
of which place no additional filters on the updates being sent, and 
some that DO further restrict which attributes (for fractional entries)
are to be forwarded to the specific other DSA replica pointed to by
each replicationAgreementSubentry.

Thus - a replicaSubentry documents the entries held by the DSA and
whether there are any limits on the attributes held for those entries,
and replicationAgreementSubentries document the flow of
entries (and their attributes, possibly filtered) between one replica
and another.

If the set of entries in replication area A held by replica 1 is 
designated A1 on Server S1, and there is a subset of A held on another
server S2 designated A2, A1 is a complete replica and A2 is an
incomplete replica.  It would be redundant to say that A2 and A1
are both held by S1, though they are, because all the entries of A2
(being a subset of A) are in A1 (the full set of A).

Now, A1 on S1 might very well have a replicationAgreement with A2
on S2 such that A1 only sends what A2 needs because of the filters
on the replication agreement, or even because of filters on the
replicaSubentry for A2.

But the notion that A1 and A2 would both be considered to
be on a single server is simply not something I have ever considered.
I don't know if the model could work in such a scenario or not.

Ed
>>> "John McMeeking" <jmcmeek@us.ibm.com> 12/21/01 09:00AM >>>
Before I go on vacation too...

We seem to be operating on different understandings of LDUP with respect to
areas of replication, replication contexts, and update vectors.

You mention a "fourth option", being to maintain "an update vector per
replication area in a replication context and use the cascade rule to
maintain the update vectors."  As I understand LDUP: an area of replication
and a replication context are the same thing, with replication context
being the current LDUP  terminology.  They identify an area of the DIT that
is replicated -- a replication context has a single root entry and is
bounded by subordinate replication contexts (ldup-model 3.5 - Terms and
Definitions).  LDUP already defines a separate update vector per
replication context (per replica).  On the surface at least, your "fourth
option" appears to be LDUP as currently defined.

For a server to participate in a cycle (which ldup-replica-req now refers
to as a "replica group" precisely because of the misunderstanding I had
about your use of cycle), the server must hold a copy of the replication
context.  And just to make sure my assumptions about what this means are
clear:
- a replica group is defined in the context of a replication context.  It
is the servers that hold instances of a particular area of replication.  A
server may be part of several replica-groups.
- replication agreements are defined in the context of a replication
context.

In your example, with three servers and two replication contexts,
agreements between all the servers implies that there are two sets of
agreements between each server -- a set for each replication context.  The
existence of a replication agreement in one replication context does not
imply a corresponding agreement in other replication contexts.

I'm going to go out on a limb, and guess that part of our misunderstandings
has to do with "overlapping" replication contexts -- you mentioned
replicating ACL via sparse or fractional replication, while having a full
replica of some subtree.  This may be required by ldup-replica-req
(mentioned in terminology, but not in specific requirements), but is
currently listed as a non-objective of ldup-model (section 3.3e).  Would it
be fair to state that you think ldup-model (and friends) needs to address
overlapping replication contexts?  My responses have been in the context of
what ldup-model claims to support, and in that context I seem to be having
a problem understanding your concerns and properly communicating my
understanding.


John  McMeeking



                                                                                                                           
"Steven Legg" <steven.legg@adacel.com.au>
12/20/2001 11:51 PM
To: John McMeeking/Rochester/IBM@IBMUS
cc: <ietf-ldup@imc.org>
Subject: RE: Supporting Partial Replication
Please respond to steven.legg




John,

John McMeeking wrote:
> See responses marked <JAM>

> John,
>
> John McMeeking wrote:
> > For either S1
> > or S2 to replicate U1 to S3, replication context R1 must be added to S3,
> > and the replication context properly initialized on S3 -- either via a full
> > update replication session, or via some other means (i.e. LDIF). At that
> > point, U1 (and the rest of the entries in R1) are present on S3 and
> > replication continues normally. Replication of R1 is independent of R2.
>
> You're alluding to the flip side of what I'm saying. If the current
> architecture can't support a replication topology where the servers
> in a cycle hold different replication areas in the same replication
> context then the choices are to not replicate, or to force all the
> servers in the cycle to have the same replication area(s). Too bad
> if I don't want S3 to see stuff in R1.
>
> <JAM>
> Are you talking about setting up something like this?

No. The choices are:

1) don't replicate, i.e. break the cycle by throwing out S3 (in my original
example),

2) force all servers in the cycle to have the same replication area,
i.e. S1 holds R1, S2 holds R1 and S3 holds R1, forget about R2,

3) change the LDUP architecture to support the original topology.


> Server S1 holds R1 and R2
> Server S2 holds R2
> Server S3 holds R1 and R2
> Set up replication agreements such that S1 supplies S2, S2 supplies S3
> and S3 supplies S1.
>
> As defined (and I think we agree this is the current behavior), LDUP
> allows this to be done only for R2. As S2 does not hold R1, you can
> not set up replication for R1 to/from S2. As I understand it, the
> agreements for R1 and R2 are completely independent. For example, if
> I add R2 to S2, and then set up the cycle described above, there would
> be at least 6 replication agreements S1->S2(R1), S1->S2(R2), S2->S3(R1),
> ... Going back to the scenario described above, under LDUP you would
> set up two independent cycles: S1->S2->S1 (for R1) and S1->S2->S3->S1
> (for R2).

They're not independent since S1 and S2 each have a single update vector
for both R1 and R2 in the current architecture. Events in one cycle
affect the other.
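The coupling Steven describes can be modelled in a few lines. This is a hypothetical sketch, not LDUP draft semantics: assume one update vector per server, shared across R1 and R2 and keyed only by originating server, so traffic in one cycle can mask a pending update in the other.

```python
def incremental_send(log, consumer_uv):
    """A supplier sends only updates the consumer's vector hasn't covered."""
    return [u for u in log if u["csn"] > consumer_uv.get(u["orig"], 0)]

def receive(updates, uv):
    """Consumer applies updates and advances its (shared) update vector."""
    for u in updates:
        uv[u["orig"]] = max(uv.get(u["orig"], 0), u["csn"])

# S1 originates U1 in context R2 (CSN 1) and U2 in context R1 (CSN 2).
log_r1 = [{"csn": 2, "orig": "S1"}]
log_r2 = [{"csn": 1, "orig": "S1"}]

uv_s2 = {}                                       # S2's single shared vector
receive(incremental_send(log_r1, uv_s2), uv_s2)  # the R1 session runs first
missed = incremental_send(log_r2, uv_s2)         # the R2 session sends nothing
```

After the R1 session S2's vector already covers CSN 2 from S1, so the R2 session silently skips U1: the cycles are not independent.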

>
> I don't see a problem.
> </JAM>
>
> > If R2 is a sparse/fractional replica of R1, R2 would not be considered a
> > separate replication context. In this case, sparse/fractional replication
> > is an attribute of the replicaSubentry for S3. If U1 falls within the
> > attributes and/or entries specified for S3, it will be replicated under
> > the replication agreements targeting S3 under R1, and the UV for S3 updated
> > accordingly.
> >
> > What happens when S3 is a fractional replica, and U1 does not contain any
> > attributes replicated to S3? draft-ietf-ldup-model-06, section 8.2,
> > specifies "When fully populating or incrementally bringing up to date a
> > Fractional Replica each of the Replication Updates must only
> > contain updates to the attributes in the Fractional Entry Specification."
> > This implies that S3 will never see U1, and thus not fully update its
> > update vector until such time as it receives an update originating at the
> > same server.
>
> Do you agree that S1 will also never see U1?
> This breaks eventual convergence.
>
> <JAM>
>
> Okay, now I think I understand... Let me restate this scenario:
> S1 holds full replica of R1
> S2 holds full replica of R1
> S3 holds fractional replica of R1
> Replication agreements are defined such that S1 supplies S3, S3 supplies S2,
> and S2 supplies S1.

I've assumed symmetry in the replication agreements for my original example,
so the topology is an undirected graph. S1 supplies S2, S1 supplies S3,
S2 supplies S1, S2 supplies S3, S3 supplies S1 and S3 supplies S2. The
subset of these agreements that are significant to the example are S2
supplies S1, S2 supplies S3 and S3 supplies S1. The other agreements are
invoked but end up sending nothing new.

>
> Under such a configuration, U1 is not seen by S3, as S1 doesn't replicate
> it to S2. Before proceeding, let me restate that there is a difference
> between holding a subtree of an area of replication and holding a
> fractional replica. As I understand it, holding a subtree implies the
> existence of another area of replication corresponding to that subtree
> -- as opposed to a sparse replica (not supported by the ldup model) which
> holds some entries in an area of replication.
>
> I see three solutions to the problem you describe:
>
> 1. Replace the restriction in ldup-model-06 8.2 such that all updates are
> sent to fractional replicas. When acting as a supplier, a fractional replica
> replicates all replication updates, even those that are not within the set
> of attributes held by the fractional replica. Also, the fractional replica
> is responsible for applying only those update primitives that are within
> the fractional replica specification.
>
> I think this would cause major problems for state-based implementations.

Agreed. The server has to store the updates "somewhere" so that they
can be forwarded to other servers.

> It seems reasonable for log-based implementations.

I would expect there to be administrator concerns regardless of the style
of implementation. One reason for setting up a fractional replica is to
protect certain information held by the supplier from being seen by the
consumer.
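The section 8.2 behaviour quoted earlier, and the update-vector gap it opens on the consumer, can be sketched as follows. This is an illustrative model with invented names, not draft text:

```python
def supplier_send(log, consumer_uv, frac_spec):
    """Send only updates touching attributes in the fractional specification,
    as the quoted 8.2 text requires (a sketch)."""
    return [u for u in log
            if u["csn"] > consumer_uv.get(u["orig"], 0)
            and u["attr"] in frac_spec]

def consumer_apply(updates, uv):
    """The fractional consumer advances its update vector per originator."""
    for u in updates:
        uv[u["orig"]] = max(uv.get(u["orig"], 0), u["csn"])
    return uv

log = [
    {"csn": 1, "orig": "S1", "attr": "userPassword"},  # U1: filtered out
    {"csn": 2, "orig": "S2", "attr": "mail"},          # U2: within the spec
]
uv_s3 = consumer_apply(supplier_send(log, {}, {"cn", "mail"}), {})
# S3's vector records nothing from S1, and won't until S1 originates an
# update that falls inside the fractional specification.
```

The vector component for S1 never advances past U1, which is the stall both sides of the thread are discussing.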

>
> 2. Add a restriction to the model & info model to effect that a fractional
> replica cannot act as a supplier in LDUP.
>
> In your scenario that implies S3 cannot be a supplier to S1. Thus S2 must
> be a supplier to S1 and U1 and U2 are both replicated from S2 to S1. I'm
> not sure how this would be done -- either the configuration is rejected
> (preferred), or a fractional replica simply ignores requests to act as a
> supplier. I prefer rejecting the configuration -- why let someone set up
> a replication path that will never be used?
>
> 3. Add a restriction that a fractional replica can act as a supplier only
> to another fractional replica, where the consumer's fractional specification
> is a subset of the supplier's fractional specification (i.e. the supplier
> replica holds all entries/attributes held by the consumer, and may hold
> more).
>
> For your scenario, this would preclude S3 acting as a supplier to S2
> (S2 - a full replica - does not hold a subset of the attributes held by S3).
> I'm not sure where/when this restriction would be enforced. It seems that
> either the configuration has to be rejected outright -- topic for management
> draft -- or that a supplier would have to evaluate the fractional
> specifications (if any) for itself and the consumer and determine whether
> it should, in fact, use the agreement at all.
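The configuration check implied by solution 3 can be sketched. Modelling a full replica as `spec=None` ("holds every attribute") is an assumption of this sketch, not draft wording:

```python
def may_supply(supplier_spec, consumer_spec):
    """Accept a supplier->consumer agreement only when the consumer's
    fractional specification is a subset of the supplier's."""
    if supplier_spec is None:      # a full replica may supply anyone
        return True
    if consumer_spec is None:      # a fractional replica can't feed a full one
        return False
    return consumer_spec <= supplier_spec
```

In the scenario above, `may_supply({"cn", "mail"}, None)` is false: S3 (fractional) may not supply S2 (full), so either the configuration is rejected or the agreement goes unused.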

Solutions 2 and 3 both kill any possibility of updateable sparse and/or
fractional replicas. This seriously limits LDUP's usefulness in database
synchronization since external sources of data with which a directory may
be required to synchronize are likely to be both updateable and
sparse/fractional.

They also outlaw secondary shadowing topologies allowed by X.500 replication,
and which I already support. For instance, it would not be possible for a
server to shadow portions from two different naming contexts. In X.500,
administrative areas, e.g. for access control or schema, can and do span
naming contexts (replication contexts in LDUP). The administrative policy
inherited from superior naming contexts is called prefix information and
is included in X.500 replication updates. Prefix information is effectively
a read-only, sparse and fractional copy of information from a superior
naming context. The prefix information for two different naming contexts
will overlap, but neither will be a subset of the other. Solutions 2 and 3
will disallow the prefix information from two such naming contexts to be
replicated to the same shadow DSA.

>
> Assuming state-based replication remains in the standards, I think (2)
> would be a much cleaner solution, and most easily implemented.

... and very limiting. I don't want to have to choose between flexible
replication topologies and multiple masters. I want both.

You didn't enumerate the fourth solution: have an update vector per
replication area in a replication context and use the cascade rule to
maintain the update vectors.

P.S. I'm about to go off for a short break. I'll respond to any
follow-ups after I get back in two weeks.

Regards,
Steven




=================
Ed Reed
Reed-Matthews, Inc.
+1 585 624 2402
http://www.Reed-Matthews.COM
Note:  Area code is 585


From owner-ietf-ldup@mail.imc.org  Fri Dec 21 12:32:33 2001
Received: from above.proper.com (above.proper.com [208.184.76.39])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id MAA27360
	for <ldup-archive@odin.ietf.org>; Fri, 21 Dec 2001 12:32:23 -0500 (EST)
Received: from localhost (localhost [[UNIX: localhost]])
	by above.proper.com (8.11.6/8.11.3) id fBLHEtC13593
	for ietf-ldup-bks; Fri, 21 Dec 2001 09:14:55 -0800 (PST)
Received: from e4.ny.us.ibm.com (e4.ny.us.ibm.com [32.97.182.104])
	by above.proper.com (8.11.6/8.11.3) with ESMTP id fBLHEr213587;
	Fri, 21 Dec 2001 09:14:53 -0800 (PST)
Received: from northrelay02.pok.ibm.com (northrelay02.pok.ibm.com [9.117.200.22])
	by e4.ny.us.ibm.com (8.9.3/8.9.3) with ESMTP id MAA124342;
	Fri, 21 Dec 2001 12:11:24 -0500
Received: from d27ml001.rchland.ibm.com (d27ml001.rchland.ibm.com [9.5.39.28])
	by northrelay02.pok.ibm.com (8.11.1m3/NCO v5.01) with ESMTP id fBLHEEU49358;
	Fri, 21 Dec 2001 12:14:14 -0500
Subject: RE: Supporting Partial Replication
To: "John McMeeking" <jmcmeek@us.ibm.com>
Cc: ietf-ldup@imc.org, owner-ietf-ldup@mail.imc.org, steven.legg@adacel.com.au
X-Mailer: Lotus Notes Release 5.0.9  November 16, 2001
Message-ID: <OF57346AD0.F50B137E-ON86256B29.005D28BA@rchland.ibm.com>
From: "John McMeeking" <jmcmeek@us.ibm.com>
Date: Fri, 21 Dec 2001 11:18:03 -0600
X-MIMETrack: Serialize by Router on d27ml001/27/M/IBM(Build M10_08082001 Beta 3|August
 08, 2001) at 12/21/2001 11:18:05 AM
MIME-Version: 1.0
Content-type: multipart/alternative; 
	Boundary="0__=09BBE1BADFCEAE2A8f9e8a93df938690918c09BBE1BADFCEAE2A"
Content-Disposition: inline
Sender: owner-ietf-ldup@mail.imc.org
Precedence: bulk
List-Archive: <http://www.imc.org/ietf-ldup/mail-archive/>
List-ID: <ietf-ldup.imc.org>
List-Unsubscribe: <mailto:ietf-ldup-request@imc.org?body=unsubscribe>


--0__=09BBE1BADFCEAE2A8f9e8a93df938690918c09BBE1BADFCEAE2A
Content-type: text/plain; charset=US-ASCII

Updated to add some thoughts on overlapping replication contexts...

John McMeeking


Before I go on vacation too...

We seem to be operating on different understandings of LDUP with respect to
areas of replication, replication contexts, and update vectors.

You mention a "fourth option", being to maintain "an update vector per
replication area in a replication context and use the cascade rule to
maintain the update vectors." As I understand LDUP: an area of replication
and a replication context are the same thing, with replication context
being the current LDUP terminology. They identify an area of the DIT that
is replicated -- a replication context has a single root entry and is
bounded by subordinate replication contexts (ldup-model 3.5 - Terms and
Definitions). LDUP already defines a separate update vector per replication
context (per replica). On the surface at least, your "fourth option"
appears to be LDUP as currently defined.
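A minimal sketch of this reading -- one update vector per replication context, per replica -- shows the two contexts' bookkeeping staying independent. Names are illustrative only:

```python
# S2's update vectors, one per replication context it holds.
uvs = {"R1": {}, "R2": {}}

def receive(ctx, updates):
    """Advance only the vector belonging to the session's context."""
    for orig, csn in updates:
        uvs[ctx][orig] = max(uvs[ctx].get(orig, 0), csn)

receive("R1", [("S1", 2)])               # an R1 session with S1
# R2's vector is untouched, so an R2 update from S1 with CSN 1 is still due:
still_due = 1 > uvs["R2"].get("S1", 0)
```

Contrast this with a single vector shared across both contexts, where the R1 session would have advanced the S1 component consulted by R2's agreements as well.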

For a server to participate in a cycle (which ldup-replica-req now refers
to as a "replica group" precisely because of the misunderstanding I had
about your use of cycle), the server must hold a copy of the replication
context. And just to make sure my assumptions about what this means are
clear:
- a replica group is defined in the context of a replication context. It is
the servers that hold instances of a particular area of replication. A
server may be part of several replica-groups.
- replication agreements are defined in the context of a replication
context.

In your example, with three servers and two replication contexts,
agreements between all the servers implies that there are two sets of
agreements between each server -- a set for each replication context. The
existence of a replication agreement in one replication context does not
imply a corresponding agreement in other replication contexts.

I'm going to go out on a limb, and guess that part of our misunderstandings
has to do with "overlapping" replication contexts -- you mentioned
replicating ACL via sparse or fractional replication, while having a full
replica of some subtree.  This may be required by ldup-replica-req
(mentioned in terminology, but not in specific requirements), but is
currently listed as a non-objective of ldup-model (section 3.3e). Would it
be fair to state that you think ldup-model (and friends) needs to address
overlapping replication contexts? My responses have been in the context of
what ldup-model claims to support, and in that context I seem to be having
a problem understanding your concerns and properly communicating my
understanding.

After thinking about overlapping replication contexts a bit more, IF we are
going to tackle it, I think we need to address the following:
1.  How do we define the bounds of a replication context that is not
bounded by nested replication contexts?  Kurt Zeilenga's ldap-subentry
draft would be useful here (subtree specification).
2.  If a client update falls within multiple replication contexts, how
should LDUP behave?  Let's start with replicating changes under all
appropriate replication contexts, meaning that the same update will be sent
multiple times under different replication sessions (they are idempotent,
so this should be okay).  This should keep update vectors in the correct
state, as an update under one replication context may be replicated before
earlier updates (by CSN) that fall within other overlapping replication
contexts.
3.  Do we allow multiple replication contexts with different bounds to have
the same root?  I'd like to withdraw the question, because I'm sure that
once asked, the answer will be "YES!"  This makes my head hurt more than I
need just before Christmas, so I'll leave that for others to gnaw on.
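The idempotency assumption in point 2 can be sketched as CSN-based, attribute-level "last writer wins"; this is an illustrative model, not draft text:

```python
def apply(entry, attr, value, csn):
    """Apply an attribute-level update only if its CSN is newer; replaying
    the same update (same CSN) is a no-op."""
    _, cur_csn = entry.get(attr, (None, 0))
    if csn > cur_csn:
        entry[attr] = (value, csn)
    return entry

e = {}
apply(e, "mail", "a@example.com", 5)   # delivered under context R1
apply(e, "mail", "a@example.com", 5)   # redelivered under overlapping R2
```

The second delivery changes nothing, so sending the same update once per overlapping replication context leaves the entry -- and any CSN-derived state -- identical.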


John McMeeking

"Steven Legg" <steven.legg@adacel.com.au>
12/20/2001 11:51 PM
To: John McMeeking/Rochester/IBM@IBMUS
cc: <ietf-ldup@imc.org>
Subject: RE: Supporting Partial Replication
Please respond to steven.legg




John,

John McMeeking wrote:
> See responses marked <JAM>

> John,
>
> John McMeeking wrote:
> > For either S1
> > or S2 to replicate U1 to S3, replication context R1 must be added to S3,
> > and the replication context properly initialized on S3 -- either via a full
> > update replication session, or via some other means (i.e. LDIF). At that
> > point, U1 (and the rest of the entries in R1) are present on S3 and
> > replication continues normally. Replication of R1 is independent of R2.
>
> You're alluding to the flip side of what I'm saying. If the current
> architecture can't support a replication topology where the servers
> in a cycle hold different replication areas in the same replication
> context then the choices are to not replicate, or to force all the
> servers in the cycle to have the same replication area(s). Too bad
> if I don't want S3 to see stuff in R1.
>
> <JAM>
> Are you talking about setting up something like this?

No. The choices are:

1) don't replicate, i.e. break the cycle by throwing out S3 (in my original
example),

2) force all servers in the cycle to have the same replication area,
i.e. S1 holds R1, S2 holds R1 and S3 holds R1, forget about R2,

3) change the LDUP architecture to support the original topology.


> Server S1 holds R1 and R2
> Server S2 holds R2
> Server S3 holds R1 and R2
> Set up replication agreements such that S1 supplies S2, S2 supplies S3
> and S3 supplies S1.
>
> As defined (and I think we agree this is the current behavior), LDUP
> allows this to be done only for R2. As S2 does not hold R1, you can
> not set up replication for R1 to/from S2. As I understand it, the
> agreements for R1 and R2 are completely independent. For example, if
> I add R2 to S2, and then set up the cycle described above, there would
> be at least 6 replication agreements S1->S2(R1), S1->S2(R2), S2->S3(R1),
> ... Going back to the scenario described above, under LDUP you would
> set up two independent cycles: S1->S2->S1 (for R1) and S1->S2->S3->S1
> (for R2).

They're not independent since S1 and S2 each have a single update vector
for both R1 and R2 in the current architecture. Events in one cycle
affect the other.
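
The coupling can be sketched with a toy model (the names and structures
here are illustrative assumptions, not taken from any LDUP draft): a
single update vector records the highest CSN seen per master regardless
of which area the update belonged to, so an R2 update can mask an older,
never-received R1 update from the same master.

```python
from dataclasses import dataclass, field

@dataclass
class Replica:
    # One update vector for the whole replication context:
    # originating master -> highest CSN seen from that master.
    update_vector: dict = field(default_factory=dict)

    def apply(self, master: str, csn: int) -> bool:
        # Only updates newer than what the vector already records
        # for that master are accepted (or would be supplied).
        if csn <= self.update_vector.get(master, 0):
            return False
        self.update_vector[master] = csn
        return True

s1 = Replica()
s1.apply("S2", 5)   # an R2 update from master S2 advances the shared vector
# An older R1 update from S2 now looks "already seen" to S1, even though
# it never travelled the R1 cycle:
assert s1.apply("S2", 3) is False
```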

>
> I don't see a problem.
> </JAM>
>
> > If R2 is a sparse/fractional replica of R1, R2 would not be considered
> > a separate replication context. In this case, sparse/fractional
> > replication is an attribute of the replicaSubentry for S3. If U1 falls
> > within the attributes and/or entries specified for S3, it will be
> > replicated under the replication agreements targeting S3 under R1, and
> > the UV for S3 updated accordingly.
> >
> > What happens when S3 is a fractional replica, and U1 does not contain
> > any attributes replicated to S3? draft-ietf-ldup-model-06, section 8.2,
> > specifies "When fully populating or incrementally bringing up to date
> > a Fractional Replica each of the Replication Updates must only contain
> > updates to the attributes in the Fractional Entry Specification." This
> > implies that S3 will never see U1, and thus not fully update its update
> > vector until such time as it receives an update originating at the same
> > server.
>
> Do you agree that S1 will also never see U1 ?
> This breaks eventual convergence.
>
> <JAM>
>
> Okay, now I think I understand... Let me restate this scenario:
> S1 holds full replica of R1
> S2 holds full replica of R1
> S3 holds fractional replica of R1
> Replication agreements are defined such that S1 supplies S3, S3 supplies
> S2, and S2 supplies S1.

I've assumed symmetry in the replication agreements for my original
example, so the topology is an undirected graph. S1 supplies S2, S1
supplies S3, S2 supplies S1, S2 supplies S3, S3 supplies S1 and S3
supplies S2. The subset of these agreements that are significant to the
example are S2 supplies S1, S2 supplies S3 and S3 supplies S1. The other
agreements are invoked but end up sending nothing new.

>
> Under such a configuration, U1 is not seen by S3, as S1 doesn't
> replicate it to S3. Before proceeding, let me restate that there is a
> difference between holding a subtree of an area of replication and
> holding a fractional replica. As I understand it, holding a subtree
> implies the existence of another area of replication corresponding to
> that subtree -- as opposed to a sparse replica (not supported by the
> ldup model) which holds some entries in an area of replication.
>
> I see three solutions to the problem you describe:
>
> 1. Replace the restriction in ldup-model-06 8.2 such that all updates
> are sent to fractional replicas. When acting as a supplier, a fractional
> replica replicates all replication updates, even those that are not
> within the set of attributes held by the fractional replica. Also, the
> fractional replica is responsible for applying only those update
> primitives that are within the fractional replica specification.
>
> I think this would cause major problems for state-based implementations.

Agreed. The server has to store the updates "somewhere" so that they
can be forwarded to other servers.
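
Under solution 1, a log-based supplier might handle this roughly as
follows (a sketch under simplified, assumed structures; `replica_attrs`,
`log` and the tuple layout are hypothetical, not from the drafts): every
primitive is recorded for onward forwarding, but only primitives touching
attributes in the fractional entry specification are applied locally.

```python
def apply_fractional(replica_attrs, entry, primitives, log):
    """Apply update primitives to a fractional replica.

    Every primitive is kept in the log so it can be re-supplied to other
    servers; only attributes the replica actually holds are applied.
    """
    for attr, value, csn in primitives:
        log.append((attr, value, csn))   # keep everything for forwarding
        if attr in replica_attrs:        # apply only held attributes
            entry[attr] = value

entry, log = {}, []
apply_fractional({"cn", "mail"}, entry,
                 [("cn", "Steven", 1), ("telephoneNumber", "555", 2)], log)
assert entry == {"cn": "Steven"}   # filtered attribute never materializes
assert len(log) == 2               # but both updates are retained to forward
```

This also makes the administrator concern concrete: the filtered-out
`telephoneNumber` value still sits in the consumer's log.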

> It seems reasonable for log-based implementations.

I would expect there to be administrator concerns regardless of the style
of implementation. One reason for setting up a fractional replica is to
protect certain information held by the supplier from being seen by the
consumer.

>
> 2. Add a restriction to the model & info model to the effect that a
> fractional replica cannot act as a supplier in LDUP.
>
> In your scenario that implies S3 cannot be a supplier to S1. Thus S2 must
> be a supplier to S1 and U1 and U2 are both replicated from S2 to S1. I'm
> not sure how this would be done -- either the configuration is rejected
> (preferred), or a fractional replica simply ignores requests to act as a
> supplier. I prefer rejecting the configuration -- why let someone set up
> a replication path that will never be used?
>
> 3. Add a restriction that a fractional replica can act as a supplier
> only to another fractional replica, where the consumer's fractional
> specification is a subset of the supplier's fractional specification
> (i.e. the supplier replica holds all entries/attributes held by the
> consumer, and may hold more).
>
> For your scenario, this would preclude S3 acting as a supplier to S2
> (S2 - a full replica - does not hold a subset of the attributes held by
> S3). I'm not sure where/when this restriction would be enforced. It
> seems that either the configuration has to be rejected outright -- topic
> for the management draft -- or that a supplier would have to evaluate
> the fractional specifications (if any) for itself and the consumer and
> determine whether it should, in fact, use the agreement at all.

Solutions 2 and 3 both kill any possibility of updateable sparse and/or
fractional replicas. This seriously limits LDUP's usefulness in database
synchronization since external sources of data with which a directory may
be required to synchronize are likely to be both updateable and
sparse/fractional.

They also outlaw secondary shadowing topologies allowed by X.500
replication, which I already support. For instance, it would not be
possible for a
server to shadow portions from two different naming contexts. In X.500,
administrative areas, e.g. for access control or schema, can and do span
naming contexts (replication contexts in LDUP). The administrative policy
inherited from superior naming contexts is called prefix information and
is included in X.500 replication updates. Prefix information is effectively
a read-only, sparse and fractional copy of information from a superior
naming context. The prefix information for two different naming contexts
will overlap, but neither will be a subset of the other. Solutions 2 and 3
will disallow the prefix information from two such naming contexts to be
replicated to the same shadow DSA.
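
The subset test that solution 3 imposes fails for exactly this case.
Modelling each naming context's prefix information as a set of replicated
items (the values below are purely illustrative):

```python
# Prefix information of two naming contexts, modelled as sets of the
# items each one replicates. They share inherited administrative policy,
# yet neither contains the other, so neither side satisfies the subset
# rule of solution 3.
prefix_nc1 = {"acl-common", "schema-common", "acl-nc1"}
prefix_nc2 = {"acl-common", "schema-common", "acl-nc2"}

assert prefix_nc1 & prefix_nc2        # the two prefixes overlap
assert not prefix_nc1 <= prefix_nc2   # but neither is a subset
assert not prefix_nc2 <= prefix_nc1   # of the other
```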

>
> Assuming state-based replication remains in the standards, I think (2)
> would be a much cleaner solution, and most easily implemented.

... and very limiting. I don't want to have to choose between flexible
replication topologies and multiple masters. I want both.

You didn't enumerate the fourth solution: have an update vector per
replication area in a replication context and use the cascade rule to
maintain the update vectors.
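
A sketch of this fourth solution (again with hypothetical names, not
draft text): giving each replication area its own vector removes the
masking effect, since progress recorded for R2 no longer claims R1
updates from the same master as already seen.

```python
class Replica:
    def __init__(self, areas):
        # One update vector per replication area:
        # area -> (originating master -> highest CSN seen).
        self.uv = {area: {} for area in areas}

    def apply(self, area: str, master: str, csn: int) -> bool:
        if csn <= self.uv[area].get(master, 0):
            return False
        self.uv[area][master] = csn
        return True

s1 = Replica(["R1", "R2"])
s1.apply("R2", "S2", 5)                  # progress in the R2 cycle...
assert s1.apply("R1", "S2", 3) is True   # ...no longer masks this R1 update
```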

P.S. I'm about to go off for a short break. I'll respond to any
follow-ups after I get back in two weeks.

Regards,
Steven





Updated to add some thoughts on overlapping replication contexts...

John McMeeking

Before I go on vacation too...

We seem to be operating on different understandings of LDUP with respect
to areas of replication, replication contexts, and update vectors.

You mention a "fourth option", being to maintain "an update vector per
replication area in a replication context and use the cascade rule to
maintain the update vectors." As I understand LDUP: an area of
replication and a replication context are the same thing, with
replication context being the current LDUP terminology. They identify an
area of the DIT that is replicated -- a replication context has a single
root entry and is bounded by subordinate replication contexts (ldup-model
3.5 - Terms and Definitions). LDUP already defines a separate update
vector per replication context (per replica). On the surface at least,
your "fourth option" appears to be LDUP as currently defined.

For a server to participate in a cycle (which ldup-replica-req now refers
to as a "replica group" precisely because of the misunderstanding I had
about your use of cycle), the server must hold a copy of the replication
context. And just to make sure my assumptions about what this means are
clear:
- a replica group is defined in the context of a replication context. It
  is the servers that hold instances of a particular area of replication.
  A server may be part of several replica groups.
- replication agreements are defined in the context of a replication
  context.

In your example, with three servers and two replication contexts,
agreements between all the servers imply that there are two sets of
agreements between each server -- a set for each replication context. The
existence of a replication agreement in one replication context does not
imply a corresponding agreement in other replication contexts.

I'm going to go out on a limb, and guess that part of our
misunderstanding has to do with "overlapping" replication contexts -- you
mentioned replicating ACLs via sparse or fractional replication, while
having a full replica of some subtree. This may be required by
ldup-replica-req (mentioned in terminology, but not in specific
requirements), but is currently listed as a non-objective of ldup-model
(section 3.3e). Would it be fair to state that you think ldup-model (and
friends) needs to address overlapping replication contexts? My responses
have been in the context of what ldup-model claims to support, and in
that context I seem to be having a problem understanding your concerns
and properly communicating my understanding.

After thinking about overlapping replication contexts a bit more, IF we
are going to tackle it, I think we need to address the following:

1. How do we define the bounds of a replication context that is not
bounded by nested replication contexts? Kurt Zeilenga's ldap-subentry
draft would be useful here (subtree specification).

2. If a client update falls within multiple replication contexts, how
should LDUP behave? Let's start with replicating changes under all
appropriate replication contexts, meaning that the same update will be
sent multiple times under different replication sessions (they are
idempotent, so this should be okay). This should keep update vectors in
the correct state, as an update under one replication context may be
replicated before earlier updates (by CSN) that fall within other
overlapping replication contexts.

3. Do we allow multiple replication contexts with different bounds to
have the same root? I'd like to withdraw the question, because I'm sure
that once asked, the answer will be "YES!" This makes my head hurt more
than I need just before Christmas, so I'll leave that for others to gnaw
on.


John McMeeking


"Steven Legg" <steven.legg@adacel.com.au>
12/20/2001 11:51 PM
Please respond to steven.legg

To: John McMeeking/Rochester/IBM@IBMUS
cc: <ietf-ldup@imc.org>
Subject: RE: Supporting Partial Replication



