From mailman-bounces@ietf.org  Fri Oct  1 08:07:11 2004
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id IAA22159
	for <speechsc-web-archive@ietf.org>; Fri, 1 Oct 2004 08:07:11 -0400 (EDT)
Received: from megatron.ietf.org ([132.151.6.71])
	by ietf-mx.ietf.org with esmtp (Exim 4.33)
	id 1CDMKM-0005gy-DO
	for speechsc-web-archive@ietf.org; Fri, 01 Oct 2004 08:15:58 -0400
Received: from localhost.localdomain ([127.0.0.1] helo=megatron.ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32)
	id 1CDJwP-00066p-7B
	for speechsc-web-archive@ietf.org; Fri, 01 Oct 2004 05:43:05 -0400
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Subject: ietf.org mailing list memberships reminder
From: mailman-owner@ietf.org
To: speechsc-web-archive@ietf.org
X-No-Archive: yes
Message-ID: <mailman.17123.1096622248.3166.mailman@lists.ietf.org>
Date: Fri, 01 Oct 2004 05:17:28 -0400
Precedence: bulk
X-BeenThere: mailman@lists.ietf.org
X-Mailman-Version: 2.1.5
List-Id: Mailman site list <mailman.lists.ietf.org>
X-List-Administrivia: yes
Sender: mailman-bounces@ietf.org
Errors-To: mailman-bounces@ietf.org
X-Spam-Score: 0.3 (/)
X-Scan-Signature: 3e15cc4fdc61d7bce84032741d11c8e5

This is a reminder, sent out once a month, about your ietf.org mailing
list memberships.  It includes your subscription info and explains how
to use it to change your settings or unsubscribe from a list.

You can visit the URLs to change your membership status or
configuration, including unsubscribing, setting digest-style delivery
or disabling delivery altogether (e.g., for a vacation), and so on.

In addition to the URL interfaces, you can also use email to make such
changes.  For more info, send a message to the '-request' address of
the list (for example, mailman-request@ietf.org) containing just the
word 'help' in the message body, and an email message will be sent to
you with instructions.

**********************************************************************

NOTE WELL:

Any submission to the IETF intended by the Contributor for publication
as all or part of an IETF Internet-Draft or RFC and any statement made
within the context of an IETF activity is considered an "IETF
Contribution". Such statements include oral statements in IETF
sessions, as well as written and electronic communications made at any
time or place, which are addressed to:

o the IETF plenary session,
o any IETF working group or portion thereof,
o the IESG, or any member thereof on behalf of the IESG,
o the IAB or any member thereof on behalf of the IAB,
o any IETF mailing list, including the IETF list itself, any working group
  or design team list, or any other list functioning under IETF auspices,
o the RFC Editor or the Internet-Drafts function

All IETF Contributions are subject to the rules of RFC 3667 and RFC
3668.

Statements made outside of an IETF session, mailing list or other
function, that are clearly not intended to be input to an IETF
activity, group or function, are not IETF Contributions in the context
of this notice.

Please consult RFC 3667 for details.

*******************************************************************************


If you have questions, problems, comments, etc, send them to
mailman-owner@ietf.org.  Thanks!

Passwords for speechsc-web-archive@ietf.org:

List                                     Password // URL
----                                     --------  
speechsc@ietf.org                        mHtT      
https://www1.ietf.org/mailman/options/speechsc/speechsc-web-archive%40ietf.org


From speechsc-bounces@ietf.org  Tue Oct  5 13:43:06 2004
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id NAA19240
	for <speechsc-web-archive@ietf.org>; Tue, 5 Oct 2004 13:43:06 -0400 (EDT)
Received: from megatron.ietf.org ([132.151.6.71])
	by ietf-mx.ietf.org with esmtp (Exim 4.33)
	id 1CEtUT-0007L3-2w
	for speechsc-web-archive@ietf.org; Tue, 05 Oct 2004 13:52:46 -0400
Received: from localhost.localdomain ([127.0.0.1] helo=megatron.ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32)
	id 1CEtBX-00051f-Vo; Tue, 05 Oct 2004 13:33:11 -0400
Received: from odin.ietf.org ([132.151.1.176] helo=ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32) id 1CEt6d-0004Ek-Bn
	for speechsc@megatron.ietf.org; Tue, 05 Oct 2004 13:28:07 -0400
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id NAA18237
	for <speechsc@ietf.org>; Tue, 5 Oct 2004 13:28:04 -0400 (EDT)
Received: from mx1.scansoft.com ([198.71.64.81])
	by ietf-mx.ietf.org with esmtp (Exim 4.33) id 1CEtFy-00052r-0D
	for speechsc@ietf.org; Tue, 05 Oct 2004 13:37:46 -0400
Received: from pb-exchcon.pb.scansoft.com ([10.1.4.73]) by mx1 with
	trend_isnt_name_B; Tue, 05 Oct 2004 13:34:41 -0400
Received: by pb-exchcon.pb.scansoft.com with Internet Mail Service
	(5.5.2653.19) id <S4TM55GJ>; Tue, 5 Oct 2004 13:27:33 -0400
Message-ID: <BBF29C9B95E52E4DB5C29A0ACC94E83BA62319@ac-exch1.eu.scansoft.com>
From: "Reifenrath, Klaus" <Klaus.Reifenrath@Scansoft.com>
To: speechsc@ietf.org
Subject: RE: [Speechsc] XML schema for verification result
Date: Tue, 5 Oct 2004 13:27:29 -0400 
MIME-Version: 1.0
X-Mailer: Internet Mail Service (5.5.2653.19)
Content-Type: text/plain;
	charset="iso-8859-1"
X-Spam-Score: 0.0 (/)
X-Scan-Signature: 8b431ad66d60be2d47c7bfeb879db82c
Cc: "'zilca@us.ibm.com'" <zilca@us.ibm.com>,
        "'Sarvi Shanmugham'" <sarvi@cisco.com>,
        "'forgues@nuance.com'" <forgues@nuance.com>
X-BeenThere: speechsc@ietf.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Speech Services Control Working Group <speechsc.ietf.org>
List-Unsubscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=unsubscribe>
List-Post: <mailto:speechsc@ietf.org>
List-Help: <mailto:speechsc-request@ietf.org?subject=help>
List-Subscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=subscribe>
Sender: speechsc-bounces@ietf.org
Errors-To: speechsc-bounces@ietf.org
X-Spam-Score: 0.0 (/)
X-Scan-Signature: 82c9bddb247d9ba4471160a9a865a5f3

Hi group,

aside from the discussion of which schema we should use, I would like to
resolve the open issues concerning the verification result elements:
According to the current spec, the <verification-score> element is
mandatory for training results as well. What is the meaning of a
verification score in the context of training (see 4) below)?
What is your opinion on 2) and 3)?

Regards,
Klaus
   
-----Original Message-----
From: Reifenrath, Klaus [mailto:Klaus.Reifenrath@Scansoft.com]
Sent: Mittwoch, 21. Juli 2004 18:11
To: 'Sarvi Shanmugham'; Daniel Burnett
Cc: speechsc@ietf.org
Subject: [Speechsc] XML schema for verification result


Hi Sarvi and Dan,

I created an XML schema for the verification result. Like Dan's RelaxNG
schema, it can be improved. I have also adapted Dan's sample results.

Compared to Dan's proposal, I changed the following:
1) Because of XML Schema limitations, I need to distinguish between
verification and training results.
2) The <num-frames> element is no longer a child element of the
<voice-print> element, because in the case of multi-verification the
value of num-frames is the same for all voiceprints.
3) The <adapted> and <need-more-data> elements are child elements of the
<voice-print> element.
4) For training results, the <verification-score> element is now
optional. I have also added a new optional parameter for training
results: consistent.
5) The <voice-print> element has two mandatory attributes: repository-uri
and identifier.
6) Currently I have included the <result> element of NLSML, to allow
validation of sample results with validation tools. But I still do not
see the benefit of using NLSML for verification results, because a
speaker verification engine is not a semantic interpretation component.
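For illustration, a minimal verification result along the lines of 2),
3) and 5) could look like the sketch below. The element and attribute
names are only those described above, the values are made up, and the
attached ver.xsd and example file remain authoritative:

```xml
<!-- Illustrative sketch only; values are invented and the attached
     ver.xsd / example file are authoritative. -->
<verification-result>
  <!-- 2) num-frames is shared by all voiceprints in a
       multi-verification, so it is a sibling of <voice-print>,
       not a child. -->
  <num-frames>412</num-frames>
  <!-- 5) repository-uri and identifier are mandatory attributes. -->
  <voice-print repository-uri="http://example.com/voiceprints"
               identifier="johndoe">
    <!-- Mandatory for verification results; optional for training
         results per 4). -->
    <verification-score>0.85</verification-score>
    <!-- 3) adapted and need-more-data are children of voice-print. -->
    <adapted>true</adapted>
    <need-more-data>false</need-more-data>
  </voice-print>
</verification-result>
```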


Suggestions for improvement are welcome!

Klaus
 <<ver.xsd>>  <<mrcpver20040721example.xml>> 





_______________________________________________
Speechsc mailing list
Speechsc@ietf.org
https://www1.ietf.org/mailman/listinfo/speechsc


From speechsc-bounces@ietf.org  Wed Oct  6 07:35:17 2004
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id HAA14014
	for <speechsc-web-archive@ietf.org>; Wed, 6 Oct 2004 07:35:17 -0400 (EDT)
Received: from megatron.ietf.org ([132.151.6.71])
	by ietf-mx.ietf.org with esmtp (Exim 4.33)
	id 1CFAEF-0002uw-6d
	for speechsc-web-archive@ietf.org; Wed, 06 Oct 2004 07:45:07 -0400
Received: from localhost.localdomain ([127.0.0.1] helo=megatron.ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32)
	id 1CFA3z-0004ZR-Vu; Wed, 06 Oct 2004 07:34:31 -0400
Received: from odin.ietf.org ([132.151.1.176] helo=ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32) id 1CFA04-0003l9-Cz
	for speechsc@megatron.ietf.org; Wed, 06 Oct 2004 07:30:28 -0400
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id HAA13467
	for <speechsc@ietf.org>; Wed, 6 Oct 2004 07:30:26 -0400 (EDT)
Received: from mx1.scansoft.com ([198.71.64.81])
	by ietf-mx.ietf.org with esmtp (Exim 4.33) id 1CFA9Y-0002eG-6M
	for speechsc@ietf.org; Wed, 06 Oct 2004 07:40:16 -0400
Received: from pb-exchcon.pb.scansoft.com ([10.1.4.73]) by mx1 with
	trend_isnt_name_B; Wed, 06 Oct 2004 07:37:04 -0400
Received: by pb-exchcon.pb.scansoft.com with Internet Mail Service
	(5.5.2653.19) id <S4TM77YV>; Wed, 6 Oct 2004 07:29:55 -0400
Message-ID: <BBF29C9B95E52E4DB5C29A0ACC94E83BA6231B@ac-exch1.eu.scansoft.com>
From: "Reifenrath, Klaus" <Klaus.Reifenrath@Scansoft.com>
To: "'Eric Burger'" <eburger@brooktrout.com>
Subject: RE: [Speechsc] FW: Liaison statement from IETF SPEECHSC WG to W3C
	Multimedia Interaction WG
Date: Wed, 6 Oct 2004 07:29:52 -0400 
MIME-Version: 1.0
X-Mailer: Internet Mail Service (5.5.2653.19)
Content-Type: text/plain;
	charset="iso-8859-1"
X-Spam-Score: 0.0 (/)
X-Scan-Signature: a92270ba83d7ead10c5001bb42ec3221
Cc: "'speechsc@ietf.org'" <speechsc@ietf.org>
X-BeenThere: speechsc@ietf.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Speech Services Control Working Group <speechsc.ietf.org>
List-Unsubscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=unsubscribe>
List-Post: <mailto:speechsc@ietf.org>
List-Help: <mailto:speechsc-request@ietf.org?subject=help>
List-Subscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=subscribe>
Sender: speechsc-bounces@ietf.org
Errors-To: speechsc-bounces@ietf.org
X-Spam-Score: 0.0 (/)
X-Scan-Signature: 31b28e25e9d13a22020d8b7aedc9832c

Hi Eric,

what are the next steps? As you pointed out, the published NLSML is only
close to what we use in MRCP today (even closer is the unpublished
version of NLSML that is referenced in the current spec, which contains
a DTD and an XML Schema), but the differences are not documented.

Will you come up with a first version of "MRCP-NLSML" (for ASR and SV
results) that can be reviewed by the group, or are you waiting for a
volunteer to do that?

Klaus 

-----Original Message-----
From: Eric Burger [mailto:eburger@brooktrout.com]
Sent: Mittwoch, 29. September 2004 22:55
To: speechsc@ietf.org
Subject: [Speechsc] FW: Liaison statement from IETF SPEECHSC WG to W3C
Multimedia Int eraction WG


Formal response from W3C.

> -----Original Message-----
> From: Deborah Dahl [mailto:dahl@conversational-technologies.com]
> Sent: Wednesday, September 29, 2004 4:15 PM
> To: 'Leslie Daigle'; 'Dave Raggett Dave Raggett'
> Cc: 'Eric Burger'; 'Dave Oran'; 'Allison Mankin'; 'Peterson, Jon';
> statements@ietf.org; Martin Duerst; Max Froumentin; Philipp 
> Hoschka; Wu
> Chou; johnston@research.att.com
> Subject: RE: Liaison statement from IETF SPEECHSC WG to W3C Multimedia
> Interaction WG
> 
> 
> Leslie,
> Thank you for the liaison statement and for your interest in EMMA.
> We have just published the third Working Draft of EMMA. Our
> current plan is to publish a Last Call Working Draft in December,
> and a Candidate Recommendation in June, 2005. We would be very
> grateful for any comments or feedback you might have on the 
> current Working
> Draft (http://www.w3.org/TR/emma/). This kind of feedback 
> should be very 
> useful in both helping us progress EMMA through the W3C Recommendation
> process,
> as well as in ensuring that it becomes a useful and accepted 
> standard which
> meets the requirements of the speech processing industry. The 
> overlapping
> membership between the W3C Multimodal Interaction Working Group and
> the MRCPv2 group should be helpful in facilitating 
> communication across
> groups.
> 
> Although NLSML had only reached a 1st WD status, it
> has nonetheless proved a useful interim specification. We
> hope that MRCPv2 can be revised to normatively reference
> EMMA in preference to NLSML once EMMA reaches Candidate 
> Recommendation.
> 
> Note that the W3C and the IETF have a formal liaison process, so I
> am copying Martin Duerst, who is responsible for this liaison on the
> W3C side, on this message. I am also copying Wu Chou and Michael
> Johnston, the primary editors of EMMA.
> 
> Best wishes on your work with MRCPv2, and looking forward to 
> continued interaction with your group.
> 
> regards,
> 
> Debbie Dahl
> Chair, W3C Multimodal Interaction Working Group
> 
> 
> > -----Original Message-----
> > From: Leslie Daigle [mailto:leslie@thinkingcat.com] 
> > Sent: Tuesday, September 28, 2004 3:55 PM
> > To: Deborah Dahl Deborah Dahl; Dave Raggett Dave Raggett
> > Cc: Eric Burger; Dave Oran; Allison Mankin; Peterson, Jon; 
> > statements@ietf.org
> > Subject: Liaison statement from IETF SPEECHSC WG to W3C 
> > Multimedia Interaction WG
> > 
> > 
> > 
> > Please find below a liaison statement from the IETF SPEECHSC working
> > group to the W3C Multimedia Interaction work group.
> > 
> > Best,
> > Leslie Daigle,
> > IETF liaison to the W3C.
> > 
> > ==========
> > 
> > 
> > Title: SPEECHSC Requirements for W3C MMI Standards
> > Source: IETF SPEECHSC Work Group
> > To: W3C Multimedia Interaction Work Group
> > 
> > Contact Persons:
> > Name:        Eric Burger
> > Tel. Number: +1 603 890 7587
> > E-Mail:      eburger@brooktrout.com
> > Name:        Dave Oran
> > Tel. Number: +1 978 264 2048
> > E-Mail:      oran@cisco.com
> > 
> > 
> > 1. Overall Description:
> > The speechsc Work Group is tasked with developing protocols 
> to support
> > distributed media processing of audio streams.  The focus of 
> > this working
> > group is to develop protocols to support ASR, TTS, and SV.  
> > The working
> > group will only focus on the secure distributed control of 
> > these servers.
> > 
> > The full description of the SPEECHSC Charter, including documents and
> > milestones, can be found here:
> > http://ietf.org/html.charters/speechsc-charter.html
> > 
> > A supplemental work group web page can be found here:
> > http://flyingfox.snowshore.com/i-d/speechsc
> > 
> > 
> > 2. Background to Request
> > The consensus of the SPEECHSC work group is that EMMA is 
> the preferred
> > encoding for speech recognition results for MRCPv2 transport.
> > 
> > It is our plan to require MRCPv2-conforming protocol 
> > endpoints to support
> > EMMA (currently referenced at <http://www.w3.org/TR/emma/>).  
> > This will
> > occur once EMMA reaches Candidate Recommendation status.  
> > However, it is our
> > understanding that this will not happen until mid-2005.
> > 
> > Because there is a demand for MRCPv2 today, we need a 
> > normative reference to
> > a recognition markup.  The consensus of the SPEECHSC work 
> > group is that
> > NLSML (currently referenced at 
> > <http://www.w3.org/TR/nl-spec/>) is closest
> > to our needs.  However, there is no normative (Candidate 
> > Recommendation or
> > later) version of NLSML.
> > 
> > Until the W3C publishes EMMA as a Candidate Recommendation, 
> > MRCPv2 endpoints
> > will be required to support a different markup.  However, 
> once EMMA is
> > published, it will be the default, required markup for 
> > returning recognition
> > results.  This markup will not be called NLSML.
> > 
> > MRCPv2 will include a negotiation mechanism such that 
> > follow-on markups to
> > EMMA will be automatically supported by the protocol.
> > 
> > 
> > 3. Requests to W3C:
> > A) Please keep us informed of the progress of EMMA.
> > B) Please progress EMMA to Candidate Recommendation as fast 
> > as possible.
> > C) It is our intention to reference the NLSML specification, 
> > per the W3C
> > copyright rules specified in the NLSML specification, found at
> > <http://www.w3.org/Consortium/Legal/copyright-documents-19990405>
> > 
> > -- 
> > 
> > -------------------------------------------------------------------
> > "Reality:
> >       Yours to discover."
> >                                  -- ThinkingCat
> > Leslie Daigle
> > leslie@thinkingcat.com
> > -------------------------------------------------------------------
> > 
> 
> 

_______________________________________________
Speechsc mailing list
Speechsc@ietf.org
https://www1.ietf.org/mailman/listinfo/speechsc



From speechsc-bounces@ietf.org  Fri Oct  8 08:59:12 2004
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id IAA21029
	for <speechsc-web-archive@ietf.org>; Fri, 8 Oct 2004 08:59:12 -0400 (EDT)
Received: from megatron.ietf.org ([132.151.6.71])
	by ietf-mx.ietf.org with esmtp (Exim 4.33)
	id 1CFuUy-00064n-0o
	for speechsc-web-archive@ietf.org; Fri, 08 Oct 2004 09:09:28 -0400
Received: from localhost.localdomain ([127.0.0.1] helo=megatron.ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32)
	id 1CFuKC-0004cd-Ix; Fri, 08 Oct 2004 08:58:20 -0400
Received: from odin.ietf.org ([132.151.1.176] helo=ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32) id 1CFgiS-0005Dj-JN
	for speechsc@megatron.ietf.org; Thu, 07 Oct 2004 18:26:28 -0400
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id SAA05451
	for <speechsc@ietf.org>; Thu, 7 Oct 2004 18:26:25 -0400 (EDT)
Received: from 206-169-193-40.gen.twtelecom.net ([206.169.193.40]
	helo=ProgressiveComputingLLC.com)
	by ietf-mx.ietf.org with esmtp (Exim 4.33) id 1CFgsF-0000np-3Q
	for speechsc@ietf.org; Thu, 07 Oct 2004 18:36:35 -0400
Content-class: urn:content-classes:message
MIME-Version: 1.0
Subject: RE: [Speechsc] FW: Liaison statement from IETF SPEECHSC WG to
	W3CMultimedia Interaction WG
X-MimeOLE: Produced By Microsoft Exchange V6.0.6249.0
Date: Thu, 7 Oct 2004 15:25:47 -0700
Message-ID: <2D0CA64CDC33E14DA7AB043B8CC4D2BB02213DFA@svr-exc.domain.com>
Thread-Topic: [Speechsc] FW: Liaison statement from IETF SPEECHSC WG to
	W3CMultimedia Interaction WG
thread-index: AcSrl3vhEoyR5tHpSMa96MiUBwyi1gBJLuMz
From: "Thomas Gal" <Thomas@ProgressiveComputingLLC.com>
To: "Reifenrath, Klaus" <Klaus.Reifenrath@Scansoft.com>
X-Spam-Score: 0.0 (/)
X-Scan-Signature: b1c41982e167b872076d0018e4e1dc3c
X-Mailman-Approved-At: Fri, 08 Oct 2004 08:58:19 -0400
Cc: speechsc@ietf.org
X-BeenThere: speechsc@ietf.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Speech Services Control Working Group <speechsc.ietf.org>
List-Unsubscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=unsubscribe>
List-Post: <mailto:speechsc@ietf.org>
List-Help: <mailto:speechsc-request@ietf.org?subject=help>
List-Subscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1984642137=="
Sender: speechsc-bounces@ietf.org
Errors-To: speechsc-bounces@ietf.org
X-Spam-Score: 0.0 (/)
X-Scan-Signature: d16ce744298aacf98517bc7c108bd198

--===============1984642137==
Content-class: urn:content-classes:message
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 8bit

I emailed Diane and she forwarded it to the W3C management a couple of
weeks ago and I've not heard back, but I realize someone else had
already done this. I'm happy to pull it out of the spec and work it into
ours with our mathematician/grammar expert if this is desired.
 
Tom

	-----Original Message----- 
	From: speechsc-bounces@ietf.org on behalf of Reifenrath, Klaus 
	Sent: Wed 10/6/2004 4:29 AM 
	To: 'Eric Burger' 
	Cc: 'speechsc@ietf.org' 
	Subject: RE: [Speechsc] FW: Liaison statement from IETF SPEECHSC WG to W3CMultimedia Interaction WG

	Hi Eric,

	what are the next steps? As you pointed out, the published NLSML is only
	close to what we use in MRCP today (even closer is the unpublished version
	of NLSML that is referenced in the current spec, which contains DTD and XML
	Schema), but the differences are not documented.

	Will you come up with a first version of "MRCP-NLSML" (for ASR and SV
	results) that can be reviewed by the group or are you waiting for a
	volunteer to do that?

	Klaus


--===============1984642137==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

_______________________________________________
Speechsc mailing list
Speechsc@ietf.org
https://www1.ietf.org/mailman/listinfo/speechsc

--===============1984642137==--


From speechsc-bounces@ietf.org  Fri Oct  8 09:16:12 2004
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id JAA21888
	for <speechsc-web-archive@ietf.org>; Fri, 8 Oct 2004 09:16:11 -0400 (EDT)
Received: from megatron.ietf.org ([132.151.6.71])
	by ietf-mx.ietf.org with esmtp (Exim 4.33)
	id 1CFulP-0006MH-Bj
	for speechsc-web-archive@ietf.org; Fri, 08 Oct 2004 09:26:28 -0400
Received: from localhost.localdomain ([127.0.0.1] helo=megatron.ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32)
	id 1CFuTF-0006IC-3r; Fri, 08 Oct 2004 09:07:41 -0400
Received: from odin.ietf.org ([132.151.1.176] helo=ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32) id 1CFuQp-0005g2-DM
	for speechsc@megatron.ietf.org; Fri, 08 Oct 2004 09:05:11 -0400
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id JAA21246
	for <speechsc@ietf.org>; Fri, 8 Oct 2004 09:05:10 -0400 (EDT)
Received: from sj-iport-4.cisco.com ([171.68.10.86])
	by ietf-mx.ietf.org with esmtp (Exim 4.33) id 1CFuaj-00069t-Ly
	for speechsc@ietf.org; Fri, 08 Oct 2004 09:15:27 -0400
Received: from sj-core-1.cisco.com (171.71.177.237)
	by sj-iport-4.cisco.com with ESMTP; 08 Oct 2004 06:05:27 -0700
X-BrightmailFiltered: true
Received: from mira-sjc5-d.cisco.com (IDENT:mirapoint@mira-sjc5-d.cisco.com
	[171.71.163.28])
	by sj-core-1.cisco.com (8.12.10/8.12.6) with ESMTP id i98D4ZEE015361
	for <speechsc@ietf.org>; Fri, 8 Oct 2004 06:04:35 -0700 (PDT)
Received: from [10.32.245.151] (stealth-10-32-245-151.cisco.com
	[10.32.245.151]) by mira-sjc5-d.cisco.com (MOS 3.4.6-GR)
	with SMTP id AEX25292; Fri, 8 Oct 2004 06:04:35 -0700 (PDT)
Mime-Version: 1.0 (Apple Message framework v619)
Message-Id: <9A04088E-192A-11D9-95AC-000A95C73842@cisco.com>
To: "'speechsc@ietf.org'" <speechsc@ietf.org>
From: David R Oran <oran@cisco.com>
Date: Fri, 8 Oct 2004 09:04:34 -0400
X-Pgp-Agent: GPGMail 1.0.2
X-Mailer: Apple Mail (2.619)
X-Spam-Score: 0.0 (/)
X-Scan-Signature: cab78e1e39c4b328567edb48482b6a69
Subject: [Speechsc] Speechsc is not planning to meet at the November IETF
X-BeenThere: speechsc@ietf.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Speech Services Control Working Group <speechsc.ietf.org>
List-Unsubscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=unsubscribe>
List-Post: <mailto:speechsc@ietf.org>
List-Help: <mailto:speechsc-request@ietf.org?subject=help>
List-Subscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1521774538=="
Sender: speechsc-bounces@ietf.org
Errors-To: speechsc-bounces@ietf.org
X-Spam-Score: 0.0 (/)
X-Scan-Signature: c0bedb65cce30976f0bf60a0a39edea4


--===============1521774538==
Content-Type: multipart/signed; protocol="application/pgp-signature";
	micalg=pgp-sha1; boundary="Apple-Mail-8-821874174"


--Apple-Mail-8-821874174
Content-Type: text/plain; charset=US-ASCII; format=flowed
Content-Transfer-Encoding: 7bit

After some deliberation, Eric and I are recommending that SPEECHSC not 
meet at IETF61 in Washington.

MRCPv2 is rapidly getting to the point where we can issue a last call, 
and the remaining work concerns getting the document itself in good 
enough shape to be approved in a last call and pass muster with the 
IESG. While there are a few remaining open issues, it does not appear 
that any of them require high bandwidth face-to-face discussion. They 
are being dealt with adequately by email on the list (and a lot of work 
by the author and WG participants!).

With a decent push, we should be able to issue a last call just around 
the time of IETF, and I would encourage the WG participants to put our 
efforts into achieving that goal.

If there are any objections to this course of action, please speak up 
now!

Thanks,

Dave & Eric
SPEECHSC co-chairs.

--Apple-Mail-8-821874174
content-type: application/pgp-signature; x-mac-type=70674453;
	name=PGP.sig
content-description: This is a digitally signed message part
content-disposition: inline; filename=PGP.sig
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.3 (Darwin)

iD8DBQFBZpBijWaEtlTdKuYRAl+6AKDPyjxzAvi5PhiugzJsHisXaOJSWwCfdRsS
LE938n7/Wk7IylA8QwFEqfI=
=9+7S
-----END PGP SIGNATURE-----

--Apple-Mail-8-821874174--



--===============1521774538==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Speechsc mailing list
Speechsc@ietf.org
https://www1.ietf.org/mailman/listinfo/speechsc

--===============1521774538==--




From speechsc-bounces@ietf.org  Fri Oct  8 15:12:06 2004
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id PAA25709
	for <speechsc-web-archive@ietf.org>; Fri, 8 Oct 2004 15:12:05 -0400 (EDT)
Received: from megatron.ietf.org ([132.151.6.71])
	by ietf-mx.ietf.org with esmtp (Exim 4.33)
	id 1CG0Jt-0006av-Jc
	for speechsc-web-archive@ietf.org; Fri, 08 Oct 2004 15:22:25 -0400
Received: from localhost.localdomain ([127.0.0.1] helo=megatron.ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32)
	id 1CG06x-0006V7-5v; Fri, 08 Oct 2004 15:09:03 -0400
Received: from odin.ietf.org ([132.151.1.176] helo=ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32) id 1CFzvw-0003OY-Rn
	for speechsc@megatron.ietf.org; Fri, 08 Oct 2004 14:57:41 -0400
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id OAA23188
	for <speechsc@ietf.org>; Fri, 8 Oct 2004 14:57:38 -0400 (EDT)
Received: from [66.46.69.18] (helo=letter.nuance.com)
	by ietf-mx.ietf.org with esmtp (Exim 4.33) id 1CG05i-00066Z-Je
	for speechsc@ietf.org; Fri, 08 Oct 2004 15:07:57 -0400
Received: from postcard.nuance.com ([10.3.6.20]:40436)
	by letter.nuance.com with esmtp id 1CFzvs-0001Qm-K2;
	Fri, 08 Oct 2004 11:57:36 -0700
Received: from mtb1exch01.nuance.com ([10.3.2.6]) by postcard.nuance.com with
	Microsoft SMTPSVC(6.0.3790.0); Fri, 8 Oct 2004 14:54:38 -0400
X-MimeOLE: Produced By Microsoft Exchange V6.5.7226.0
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
Subject: RE: [Speechsc] XML schema for verification result
Date: Fri, 8 Oct 2004 14:54:37 -0400
Message-ID: <7DE7C4EF3B7C8B4B82955191378290D801FF292E@mtb1exch01.nuance.com>
Thread-Topic: [Speechsc] XML schema for verification result
Thread-Index: AcSrAN5bwiqa1w/tT0ia6WZZfHVGjwCQlVuw
From: "Pierre Forgues" <forgues@nuance.com>
To: "Reifenrath, Klaus" <Klaus.Reifenrath@Scansoft.com>, <speechsc@ietf.org>
X-OriginalArrivalTime: 08 Oct 2004 18:54:38.0573 (UTC)
	FILETIME=[430399D0:01C4AD68]
X-FromHost: postcard.nuance.com [10.3.6.20]:40436
Lines: 80
X-Spam-Score: 0.0 (/)
X-Scan-Signature: bdc523f9a54890b8a30dd6fd53d5d024
Cc: Sarvi Shanmugham <sarvi@cisco.com>, zilca@us.ibm.com
X-BeenThere: speechsc@ietf.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Speech Services Control Working Group <speechsc.ietf.org>
List-Unsubscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=unsubscribe>
List-Post: <mailto:speechsc@ietf.org>
List-Help: <mailto:speechsc-request@ietf.org?subject=help>
List-Subscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=subscribe>
Sender: speechsc-bounces@ietf.org
Errors-To: speechsc-bounces@ietf.org
X-Spam-Score: 0.0 (/)
X-Scan-Signature: 3002fc2e661cd7f114cb6bae92fe88f1

Klaus,

We do not use <verification-score> in the context of training and would
therefore not have an issue with this being optional.

Also, we are OK for items 2 and 3 below.

Pierre

-----Original Message-----
From: Reifenrath, Klaus [mailto:Klaus.Reifenrath@Scansoft.com]
Sent: Tuesday, October 05, 2004 1:27 PM
To: speechsc@ietf.org
Cc: 'Sarvi Shanmugham'; Pierre Forgues; 'zilca@us.ibm.com'
Subject: RE: [Speechsc] XML schema for verification result

Hi group,

aside from the discussion of which schema we should use, I would like
to resolve the open issues concerning the verification result elements:
According to the current spec, the <verification-score> element is
mandatory also for training results. What is the meaning of a
verification score in the context of training (see 4))?
What is your opinion on 2) and 3)?

Regards,
Klaus
-----Original Message-----
From: Reifenrath, Klaus [mailto:Klaus.Reifenrath@Scansoft.com]
Sent: Wednesday, 21 July 2004 18:11
To: 'Sarvi Shanmugham'; Daniel Burnett
Cc: speechsc@ietf.org
Subject: [Speechsc] XML schema for verification result


Hi Sarvi and Dan,

I created an XML schema for the verification result. Like Dan's RelaxNG
schema, it can be improved. I also adapted Dan's sample results.

Compared to Dan's proposal I changed the following:
1) Because of XML Schema limitations I need to distinguish between
verification and training results.
2) The <num-frames> element is no longer a child element of the
<voice-print> element, because in the case of multi-verification the
value of num-frames is the same for all voiceprints.
3) The <adapted> and <need-more-data> elements are child elements of
the <voice-print> element.
4) For training results the <verification-score> is now optional. I
also added a new optional parameter for training results: consistent.
5) The <voice-print> element has 2 mandatory attributes: repository-uri
and identifier.
6) Currently I included the <result> element of NLSML, to allow
validation of sample results with validation tools. But I still do not
see the benefit of using NLSML for verification results, because a
speaker verification engine is not a semantic interpretation component.


Suggestions for improvement are welcome!

Klaus
 <<ver.xsd>>  <<mrcpver20040721example.xml>>
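To make items 2), 3) and 5) above concrete, a result fragment following the proposed layout could look roughly like this (a hypothetical sketch: the wrapper element name and all attribute values are invented for illustration; the attached ver.xsd is the actual proposal):

```xml
<!-- Hypothetical sketch only: element placement per items 2)-5) above;
     names and values are invented, not taken from ver.xsd. -->
<verification-result>
  <num-frames>1024</num-frames>  <!-- item 2: shared by all voiceprints -->
  <voice-print repository-uri="file://example/voiceprints"
               identifier="johnsmith.voiceprint">  <!-- item 5 -->
    <verification-score>0.85</verification-score>
    <adapted>true</adapted>                <!-- item 3 -->
    <need-more-data>false</need-more-data> <!-- item 3 -->
  </voice-print>
</verification-result>
```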







_______________________________________________
Speechsc mailing list
Speechsc@ietf.org
https://www1.ietf.org/mailman/listinfo/speechsc


From speechsc-bounces@ietf.org  Mon Oct 11 15:09:55 2004
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id PAA05342
	for <speechsc-web-archive@ietf.org>; Mon, 11 Oct 2004 15:09:55 -0400 (EDT)
Received: from megatron.ietf.org ([132.151.6.71])
	by ietf-mx.ietf.org with esmtp (Exim 4.33)
	id 1CH5j4-0006sh-1W
	for speechsc-web-archive@ietf.org; Mon, 11 Oct 2004 15:20:54 -0400
Received: from localhost.localdomain ([127.0.0.1] helo=megatron.ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32)
	id 1CH5W7-0004Hc-Bg; Mon, 11 Oct 2004 15:07:31 -0400
Received: from odin.ietf.org ([132.151.1.176] helo=ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32) id 1CH5Qy-00071x-Kz
	for speechsc@megatron.ietf.org; Mon, 11 Oct 2004 15:02:12 -0400
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id PAA04319
	for <speechsc@ietf.org>; Mon, 11 Oct 2004 15:02:10 -0400 (EDT)
Received: from e6.ny.us.ibm.com ([32.97.182.106])
	by ietf-mx.ietf.org with esmtp (Exim 4.33) id 1CH5bX-0006g7-5X
	for speechsc@ietf.org; Mon, 11 Oct 2004 15:13:09 -0400
Received: from northrelay02.pok.ibm.com (northrelay02.pok.ibm.com
	[9.56.224.150])
	by e6.ny.us.ibm.com (8.12.10/8.12.9) with ESMTP id i9BJ1cSD462772
	for <speechsc@ietf.org>; Mon, 11 Oct 2004 15:01:38 -0400
Received: from d01ml605.pok.ibm.com (d01av04.pok.ibm.com [9.56.224.64])
	by northrelay02.pok.ibm.com (8.12.10/NCO/VER6.6) with ESMTP id
	i9BJ2rA9144762
	for <speechsc@ietf.org>; Mon, 11 Oct 2004 15:02:53 -0400
Importance: Normal
X-Mailer: Lotus Notes Release 6.0.2CF1 June 9, 2003
MIME-Version: 1.0
To: speechsc-admin@ietf.org, <speechsc@ietf.org>
Message-ID: <OFA6BE7EAC.E15763A1-ON85256F2A.00683119-85256F2A.0068847C@us.ibm.com>
From: Ran Zilca <zilca@us.ibm.com>
Date: Mon, 11 Oct 2004 15:01:36 -0400
X-MIMETrack: Serialize by Router on D01ML605/01/M/IBM(Release 6.51HF632 |
	October 8, 2004) at 10/11/2004 15:01:38
Content-type: text/plain; charset=US-ASCII
X-Spam-Score: 3.5 (+++)
X-Scan-Signature: f60d0f7806b0c40781eee6b9cd0b2135
Subject: [Speechsc] INTERMEDIATE-RECOG-RESULT
X-BeenThere: speechsc@ietf.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Speech Services Control Working Group <speechsc.ietf.org>
List-Unsubscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=unsubscribe>
List-Post: <mailto:speechsc@ietf.org>
List-Help: <mailto:speechsc-request@ietf.org?subject=help>
List-Subscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=subscribe>
Sender: speechsc-bounces@ietf.org
Errors-To: speechsc-bounces@ietf.org
X-Spam-Score: 0.0 (/)
X-Scan-Signature: fb6060cb60c0cea16e3f7219e40a0a81





Hi all,

There was some correspondence going on some time ago about the
possibility of adding an INTERMEDIATE-RECOG-RESULT method to the
recognition resource. I enclosed what I have of this correspondence
below.

There is a corresponding method for the verification resource, and there
seems to be enough justification to have the same method for recognition. I
realize that this sparks up some issues wrt transcription etc. However, in
a finite state grammar system there could be plenty of scenarios where it
will be useful for the application to get the intermediate text without
waiting for the user to finish speaking. It seems to me that the v2 spec
may not be complete without it. This could be an optional method.

Any thoughts anyone?

Thanks,

-- Ran.





-----Original Message-----
From: Jeff Kusnitz [mailto:jk@us.ibm.com]
Sent: Friday, 16 April 2004 20:29
To: Reifenrath, Klaus
Cc: 'speechsc@ietf.org'; speechsc-admin@ietf.org
Subject: Re: [Speechsc] INTERMEDIATE-RECOG-RESULT


It certainly seems reasonable.  I assume the body of the
INTERMEDIATE-RECOG-RESULT message would contain the intermediate results
in NLSML (or EMMA) format?

Klaus wrote on 04/15/2004 02:10:52 AM:

> The mail of Ran Zilca on intermediate results for SV/SI reminded me
> of one of the open questions (see minutes of last WG meeting):
> Do we need an INTERMEDIATE-RECOG-RESULT?
>
> Intermediate results for ASR make sense as well.
> Imagine a multimodal application where the caller can fill several
> input fields: after the recognizer has recognized valid input for the
> first slot, the recognized value could be visualized; as soon as the
> recognizer recognizes input for another slot, that slot can be filled
> on the screen as well, and so on.
> Another use case for intermediate results would be e-mail/SMS
> dictation.
> Another use case for intermediate results would be e-mail/SMS dictation.
>
> Therefore I suggest adding an optional INTERMEDIATE-RECOG-RESULT
> event to the speech recognizer resource.
>
> Klaus
>
>
>
> _______________________________________________
> Speechsc mailing list
> Speechsc@ietf.org
> https://www1.ietf.org/mailman/listinfo/speechsc


_______________________________________________
Speechsc mailing list
Speechsc@ietf.org
https://www1.ietf.org/mailman/listinfo/speechsc


From speechsc-bounces@ietf.org  Mon Oct 18 07:04:09 2004
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id HAA05544
	for <speechsc-web-archive@ietf.org>; Mon, 18 Oct 2004 07:04:07 -0400 (EDT)
Received: from megatron.ietf.org ([132.151.6.71])
	by ietf-mx.ietf.org with esmtp (Exim 4.33)
	id 1CJVV4-0005dp-0E
	for speechsc-web-archive@ietf.org; Mon, 18 Oct 2004 07:16:33 -0400
Received: from localhost.localdomain ([127.0.0.1] helo=megatron.ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32)
	id 1CJVHf-0001XM-FI; Mon, 18 Oct 2004 07:02:35 -0400
Received: from odin.ietf.org ([132.151.1.176] helo=ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32) id 1CJOYH-0006eJ-PQ
	for speechsc@megatron.ietf.org; Sun, 17 Oct 2004 23:51:18 -0400
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id XAA21701
	for <speechsc@ietf.org>; Sun, 17 Oct 2004 23:51:13 -0400 (EDT)
Received: from sj-iport-4.cisco.com ([171.68.10.86])
	by ietf-mx.ietf.org with esmtp (Exim 4.33) id 1CJOk1-0006E0-07
	for speechsc@ietf.org; Mon, 18 Oct 2004 00:03:35 -0400
Received: from sj-core-4.cisco.com (171.68.223.138)
	by sj-iport-4.cisco.com with ESMTP; 17 Oct 2004 20:50:36 -0700
X-BrightmailFiltered: true
Received: from vtg-um-e2k1.sj21ad.cisco.com (vtg-um-e2k1.cisco.com
	[171.70.93.55])
	by sj-core-4.cisco.com (8.12.10/8.12.6) with ESMTP id i9I3oW7o026469
	for <speechsc@ietf.org>; Sun, 17 Oct 2004 20:50:32 -0700 (PDT)
Received: from cisco.com ([10.82.241.102]) by vtg-um-e2k1.sj21ad.cisco.com
	with Microsoft SMTPSVC(5.0.2195.6713); 
	Sun, 17 Oct 2004 20:50:13 -0700
Message-ID: <41733D60.2030502@cisco.com>
Date: Sun, 17 Oct 2004 20:49:52 -0700
From: Sarvi Shanmugham <sarvi@cisco.com>
Organization: Cisco Systems Inc.
User-Agent: Mozilla Thunderbird 0.5 (Windows/20040207)
X-Accept-Language: en-us, en
MIME-Version: 1.0
To: "IETF SPEECHSC (E-mail)" <speechsc@ietf.org>
Content-Type: multipart/mixed; boundary="------------070205000707050908050604"
X-OriginalArrivalTime: 18 Oct 2004 03:50:13.0944 (UTC)
	FILETIME=[92E7D380:01C4B4C5]
X-Spam-Score: 0.4 (/)
X-Scan-Signature: 1b456073e3e172141079c035c35c129f
X-Mailman-Approved-At: Mon, 18 Oct 2004 07:02:34 -0400
Subject: [Speechsc] MRCPv2 draft -05 
X-BeenThere: speechsc@ietf.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Speech Services Control Working Group <speechsc.ietf.org>
List-Unsubscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=unsubscribe>
List-Post: <mailto:speechsc@ietf.org>
List-Help: <mailto:speechsc-request@ietf.org?subject=help>
List-Subscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=subscribe>
Sender: speechsc-bounces@ietf.org
Errors-To: speechsc-bounces@ietf.org
X-Spam-Score: 0.4 (/)
X-Scan-Signature: e5e1dbdf0592a45a17cf4f1433790e6d

This is a multi-part message in MIME format.
--------------070205000707050908050604
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit



Hi all,
     I just submitted the -05 version of the mrcpv2 draft.

Some of the changes in this draft are as follows:
      1. Addressed Magnus Westerlund's comments. I will be responding
to his email with the specific issues he raised that have not resulted
in a modification.
      2. A lot of formatting cleanup work.
      3. Issues resolved on the alias so far:
          1. Server error codes.
          2. Verification result changes from Klaus (need-more-data etc.).
      4. Included the NLSML text and DTD.
      5. Created an NLSML schema based on the DTD.
      6. Formatted the header summary into a table per Magnus's suggestion.
      7. Ran the ABNF through the parser.
      8. Behaviour of VERIFY-FROM-BUFFER when the buffer is in use.
      9. Editorial comments received so far.

Pending:     IANA Considerations section
Pending:     Convert the Verification and Enrollment schema to W3C
Schema. (Not sure if this is needed, or if we can leave it in its
current schema format.)

I can send the Word document with change tracking on, but I am not
sure if it is appropriate to send a Word document to this mailing list.
Let me know if it is OK and I can send it out.

Thanks,
Sarvi

--------------070205000707050908050604
Content-Type: text/plain;
 name="draft-ietf-speechsc-mrcpv2-05.txt"
Content-Disposition: inline;
 filename="draft-ietf-speechsc-mrcpv2-05.txt"
Content-Transfer-Encoding: 7bit



 Internet Engineering Task Force                    Saravanan Shanmugham 
 Internet-Draft                                       Cisco Systems Inc. 
 draft-ietf-speechsc-mrcpv2-05                          October 18, 2004 
 Expires: April 18, 2005                                                 
                                                                         
                                                                         
                                                                         
  
  
  
              Media Resource Control Protocol Version 2(MRCPv2) 
                                           
  
 Status of this Memo  
     
    By submitting this Internet-Draft, we certify that any applicable 
    patent or other IPR claims of which we are aware have been 
    disclosed, and any of which we become aware will be disclosed, in 
    accordance with RFC 3668.  
         
    Internet-Drafts are working documents of the Internet Engineering 
    Task Force (IETF), its areas, and its working groups.  Note that 
    other groups may also distribute working documents as Internet-
    Drafts.  
         
    Internet-Drafts are draft documents valid for a maximum of six 
    months and may be updated, replaced, or obsoleted by other documents 
    at any time.  It is inappropriate to use Internet-Drafts as 
    reference material or to cite them other than as "work in progress".  
         
    The list of current Internet-Drafts can be accessed at 
    http://www.ietf.org/ietf/1id-abstracts.txt .  
         
    The list of Internet-Draft Shadow Directories can be accessed at 
    http://www.ietf.org/shadow.html .  
         
    This Internet-Draft will expire on April 18, 2005.  
     
           
 Copyright Notice 
     
    Copyright (C) The Internet Society (2004).  All Rights Reserved. 
                 
        
 Abstract 
   
    This document describes a proposal for a Media Resource Control 
    Protocol Version 2 (MRCPv2) and aims to meet the requirements 
    specified in the SPEECHSC working group requirements document. It is 
    based on the Media Resource Control Protocol (MRCP), also called 

  
 S. Shanmugham, et. al.                                          Page 1 

                            MRCPv2 Protocol              October, 2004 

    MRCPv1 developed jointly by Cisco Systems, Inc., Nuance 
    Communications, and Speechworks Inc.  
     
    The MRCPv2 protocol will control media service resources like speech 
    synthesizers, recognizers, signal generators, signal detectors, fax 
    servers etc. over a network. This protocol depends on a session 
    management protocol such as the Session Initiation Protocol (SIP) to 
    establish a separate MRCPv2 control session between the client and 
    the server. It also depends on SIP to establish the media pipe and 
    associated parameters between the media source or sink and the media 
    server. Once this is done, the MRCPv2 protocol exchange can happen 
    over the control session established above allowing the client to 
    command and control the media processing resources that may exist on 
    the media server.  
     
     
 Table of Contents 
     
      Status of this Memo..............................................1 
      Copyright Notice.................................................1 
      Abstract.........................................................1 
      Table of Contents................................................2 
      1.   Introduction:...............................................4 
      2.   Notational Convention.......................................5 
      3.   Architecture:...............................................5 
      3.1.  MRCPv2 Media Resources:....................................7 
      3.2.  Server and Resource Addressing.............................8 
      4.   MRCPv2 Protocol Basics......................................8 
      4.1.  Connecting to the Server...................................8 
      4.2.  Managing Resource Control Channels.........................8 
      4.3.  Media Streams and RTP Ports...............................15 
      4.4.  MRCPv2 Message Transport..................................16 
      4.5.  Resource Types............................................17 
      5.   MRCPv2 Specification.......................................17 
      5.1.  Request...................................................18 
      5.2.  Response..................................................19 
      5.3.  Event.....................................................20 
      6.   MRCP Generic Features......................................21 
      6.1.  Generic Message Headers...................................21 
      6.2.  SET-PARAMS................................................30 
      6.3.  GET-PARAMS................................................30 
      7.   Resource Discovery.........................................31 
      8.   Speech Synthesizer Resource................................32 
      8.1.  Synthesizer State Machine.................................33 
      8.2.  Synthesizer Methods.......................................33 
      8.3.  Synthesizer Events........................................34 
      8.4.  Synthesizer Header Fields.................................34 
      8.5.  Synthesizer Message Body..................................40 
      8.6.  SPEAK.....................................................43 
      8.7.  STOP......................................................44 
      8.8.  BARGE-IN-OCCURRED.........................................45 
  

      8.9.  PAUSE.....................................................47 
      8.10. RESUME....................................................48 
      8.11. CONTROL...................................................49 
      8.12. SPEAK-COMPLETE............................................50 
      8.13. SPEECH-MARKER.............................................51 
      8.14. DEFINE-LEXICON............................................52 
      9.   Speech Recognizer Resource.................................53 
      9.1.  Recognizer State Machine..................................54 
      9.2.  Recognizer Methods........................................54 
      9.3.  Recognizer Events.........................................55 
      9.4.  Recognizer Header Fields..................................55 
      9.5.  Recognizer Message Body...................................69 
      9.6.  DEFINE-GRAMMAR............................................83 
      9.7.  RECOGNIZE.................................................87 
      9.8.  STOP......................................................89 
      9.9.  GET-RESULT................................................90 
      9.10. START-OF-SPEECH...........................................91 
      9.11. START-INPUT-TIMERS........................................92 
      9.12. RECOGNITION-COMPLETE......................................92 
      9.13. START-PHRASE-ENROLLMENT...................................94 
      9.14. ENROLLMENT-ROLLBACK.......................................95 
      9.15. END-PHRASE-ENROLLMENT.....................................96 
      9.16. MODIFY-PHRASE.............................................96 
      9.17. DELETE-PHRASE.............................................97 
      9.18. INTERPRET.................................................97 
      9.19. INTERPRETATION-COMPLETE...................................98 
      9.20. DTMF Detection...........................................100 
      10.  Recorder Resource.........................................100 
      10.1. Recorder State Machine...................................100 
      10.2. Recorder Methods.........................................100 
      10.3. Recorder Events..........................................100 
      10.4. Recorder Header Fields...................................101 
      10.5. Recorder Message Body....................................105 
      10.6. RECORD...................................................105 
      10.7. STOP.....................................................106 
      10.8. RECORD-COMPLETE..........................................107 
      10.9. START-INPUT-TIMERS.......................................107 
      11.  Speaker Verification and Identification...................109 
      11.1. Speaker Verification State Machine.......................110 
      11.2. Speaker Verification Methods.............................110 
      11.3. Verification Events......................................111 
      11.4. Verification Header Fields...............................111 
      11.5. Verification Result Elements.............................119 
      11.6. START-SESSION............................................123 
      11.7. END-SESSION..............................................124 
      11.8. QUERY-VOICEPRINT.........................................124 
      11.9. DELETE-VOICEPRINT........................................125 
      11.10. VERIFY..................................................126 
      11.11. VERIFY-FROM-BUFFER......................................126 
      11.12. VERIFY-ROLLBACK.........................................129 
      11.13. STOP....................................................130 
  

      11.14. START-INPUT-TIMERS......................................131 
      11.15. VERIFICATION-COMPLETE...................................131 
      11.16. START-OF-SPEECH.........................................132 
      11.17. CLEAR-BUFFER............................................132 
      11.18. GET-INTERMEDIATE-RESULT.................................132 
      12.  Security Considerations...................................133 
      13.  Examples:.................................................133 
      14.  Reference Documents.......................................145 
      15.  Appendix..................................................146 
      15.1. ABNF Message Definitions.................................146 
      15.2. XML Schema and DTD.......................................161 
      Full Copyright Statement.......................................168 
      Intellectual Property..........................................169 
      Contributors...................................................169 
      Acknowledgements...............................................170 
      Editors' Addresses.............................................170 
     
  
 1.   Introduction: 
     
    The MRCPv2 protocol is designed to allow a client device to control 
    media processing resources on the network, allowing it to process 
    an audio/video stream. Such media processing resources could 
    include speech recognition engines, speech synthesis engines, and 
    speaker verification or speaker identification engines. This allows 
    a vendor to implement distributed Interactive Voice Response 
    platforms such as VoiceXML [7] browsers. 
     
       The protocol requirements of SPEECHSC require that the protocol 
    be capable of reaching a media processing server and setting up 
    communication channels to the media resources, to send/receive 
    control messages and media streams to/from the server. The Session 
    Initiation Protocol (SIP) described in [4] meets these requirements 
    and is used to set up and tear down media and control pipes to the 
    server. In addition, a SIP re-INVITE can be used to change the 
    characteristics of these media and control pipes mid-session. The 
    MRCPv2 protocol is hence designed to leverage and build upon 
    session management protocols such as the Session Initiation 
    Protocol (SIP) and the Session Description Protocol (SDP). SDP is 
    used to describe the parameters of the media pipe associated with 
    that session. It is mandatory to support SIP as the session-level 
    protocol to ensure interoperability. Other protocols can be used at 
    the session level by prior agreement. 
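    As an illustration of the SDP usage described above, an answer 
    establishing one MRCPv2 control channel alongside an audio stream 
    might look roughly like the following. This is a sketch based on 
    later MRCPv2 drafts, not text from this document; the addresses, 
    port numbers, channel identifier, and resource type are invented: 

```
c=IN IP4 192.0.2.10
m=application 32416 TCP/MRCPv2 1
a=setup:passive
a=connection:new
a=channel:32AECB234338@speechrecog
m=audio 48260 RTP/AVP 0
a=rtpmap:0 PCMU/8000
```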
     
       The MRCPv2 protocol depends on SIP and SDP to create the session, 
    and setup the media channels to the server. It also depends on SIP 
    and SDP to establish MRCPv2 control channels between the client and 
    the server for each media processing resource required for that 
    session. The MRCPv2 protocol exchange between the client and the 
    media resource can then happen on that control channel. The MRCPv2 

  

    protocol exchange happening on this control channel does not change 
    the state of the SIP session, the media, or other parameters of the 
    session that SIP initiated. It merely controls and affects the 
    state of the media processing resource associated with that MRCPv2 
    channel. 
     
       The MRCPv2 protocol defines the messages to control the different 
    media processing resources and the state machines required to guide 
    their operation. It also describes how these messages are carried 
    over a transport layer such as TCP, SCTP or TLS.  
  
     
 2.   Notational Convention 
     
    The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 
    "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY" and "OPTIONAL" in this 
    document are to be interpreted as described in RFC 2119[9].  
     
    Since many of the definitions and syntax are identical to HTTP/1.1, 
    this specification only points to the section where they are defined 
    rather than copying it. For brevity, [HX.Y] is to be taken to refer 
    to Section X.Y of the current HTTP/1.1 specification (RFC 2616 [1]). 
     
    All the mechanisms specified in this document are described in both 
    prose and an augmented Backus-Naur Form (ABNF), which is described 
    in detail in RFC 2234 [3]. 
     
    The complete message format in ABNF form is provided in Appendix 
    section 12.1 and is the normative format definition. 
     
    Media Resource 
         An entity on the MRCP Server that can be controlled through the 
         MRCP protocol 
     
    MRCP Server  
         Aggregate of one or more "Media Resource" entities on a Server, 
         exposed through the MRCP protocol ("Server" for short). 
     
    MRCP Client  
         An entity controlling one or more Media Resources through the 
         MRCP protocol. ("Client" for short) 
     
     
     
 3.   Architecture 
     
    The system consists of a client that requires the generation or 
    processing of media streams, and a media resource server that has 
    the resources or engines to process or generate these streams. The 
    client establishes a session with the server, using SIP and SDP, in 
    order to use its media processing resources. The MRCPv2 server is 
    addressed by a SIP URI.  
  

     
    The session management protocol (SIP) will use SDP with the 
    offer/answer model described in RFC 3264 to describe and set up the 
    MRCPv2 control channels. Separate MRCPv2 control channels are needed 
    for controlling the different media processing resources associated 
    with that session. Within a SIP session, the individual resource 
    control channels for the different resources are added or removed 
    through the SDP offer/answer model and the SIP re-INVITE dialog. 
     
    The server, through the SDP exchange, provides the client with a 
    unique channel identifier and a port number (TCP or SCTP). The 
    client MAY then open a new TCP connection with the server using this 
    port number. Multiple MRCPv2 channels can share a TCP connection 
    between the client and the server. All MRCPv2 messages exchanged 
    between the client and the server also carry the specified channel 
    identifier, which MUST be unique among all MRCPv2 control channels 
    that are active on that server. The client can use this channel to 
    control the media processing resource associated with that channel. 
     
    The session management protocol (SIP) will also establish media 
    pipes between the client (or source/sink of media) and the MRCP 
    server using SDP m-lines. A media pipe may be shared by one or more 
    media processing resources under that SIP session, or each media 
    processing resource may have its own media pipe.  
     
         MRCPv2 client                  MRCPv2 Media Resource Server 
      |--------------------|             |-----------------------------| 
      ||------------------||             ||---------------------------|| 
      || Application Layer||             || TTS  | ASR  | SV   | SI   ||  
      ||------------------||             ||Engine|Engine|Engine|Engine|| 
      ||Media Resource API||             ||---------------------------|| 
      ||------------------||             || Media Resource Management || 
      || SIP  |  MRCPv2   ||             ||---------------------------|| 
      ||Stack |           ||             ||   SIP  |    MRCPv2        || 
      ||      |           ||             ||  Stack |                  || 
      ||------------------||             ||---------------------------|| 
      ||   TCP/IP Stack   ||----MRCPv2---||       TCP/IP Stack        || 
      ||                  ||             ||                           || 
      ||------------------||-----SIP-----||---------------------------|| 
      |--------------------|             |-----------------------------|               
               |                             / 
              SIP                           / 
               |                           /            
      |-------------------|              RTP 
      |                   |              / 
      | Media Source/Sink |-------------/ 
      |                   | 
      |-------------------| 
  
                     Fig 1: Architectural Diagram 
     
  

   MRCPv2 Media Resource Types: 
     
    The MRCP server may offer one or more of the following media 
    processing resources to its clients. 
     
    Basic Synthesizer 
     
    A speech synthesizer resource with very limited capabilities that 
    can be achieved by playing out concatenated audio file clips. The 
    speech data is described as SSML data, but with limited support for 
    its elements. It MUST support the <speak>, <audio>, <say-as> and 
    <mark> tags in SSML. 
     
     
    Speech Synthesizer 
     
    A full-capability speech synthesizer capable of rendering regular 
    speech. It SHOULD have full SSML support.  
     
     
    Recorder 
     
    A resource capable of recording audio and saving it to a URI. It 
    also has some end-pointing capabilities for detecting the beginning 
    of speech and the silence at the end of the recording. 
     
     
    DTMF Recognizer 
     
    A limited, DTMF-only recognizer that is able to recognize DTMF 
    digits in the input stream and match them against a supplied digit 
    grammar. It can also do semantic interpretation based on semantic 
    tags in the grammar. 
     
     
    Speech Recognizer 
     
    A full speech recognizer that is capable of receiving audio and 
    interpreting it into recognition results. It also has a natural 
    language semantic interpreter to post-process the recognized data 
    according to the semantic data in the grammar and provide semantic 
    results along with the recognized input. The recognizer may also 
    support enrolled grammars, where the client can enroll and create 
    new personal grammars for use in future recognition operations. 
     
     
    Speaker Verification    
     
    A resource capable of verifying the authenticity of a person by 
    matching his or her voice against a saved voice-print. This may 
    also involve matching the caller's voice against more than one 
    voice-print, which is also called multi-verification or speaker 
    identification. 
  

  
     
 3.1. Server and Resource Addressing 
     
    The MRCPv2 server as a whole is a generic SIP server and is 
    addressed by a specific SIP URL registered by the server.  
     
    Example: 
     
      sip:mrcpv2@mediaserver.com 
  
     
 4.   MRCPv2 Protocol Basics 
     
    MRCPv2 requires the use of a connection-oriented transport-layer 
    protocol such as TCP or SCTP to guarantee reliable sequencing and 
    delivery of MRCPv2 control messages between the client and the 
    server. If security is needed, a TLS connection is used to carry 
    MRCPv2 messages. One or more TCP, SCTP or TLS connections between 
    the client and the server can be shared between different MRCPv2 
    channels to the server. The individual messages carry the channel 
    identifier to differentiate messages on different channels. The 
    message format for MRCPv2 is text based, with mechanisms to carry 
    embedded binary data. This allows data like recognition grammars, 
    recognition results, synthesizer speech markup etc. to be carried 
    in the MRCPv2 messages between the client and the server resource. 
    The protocol does not address session and media establishment and 
    management, and relies on SIP and SDP to do this.  
     
 4.1. Connecting to the Server 
     
    The MRCPv2 protocol depends on a session establishment and 
    management protocol such as SIP in conjunction with SDP. The client 
    finds and reaches an MRCPv2 server across the SIP network using the 
    INVITE and other SIP dialog exchanges. The SDP offer/answer exchange 
    model over SIP is used to establish resource control channels for 
    each resource. The SDP offer/answer exchange is also used to 
    establish media pipes between the source or sink of audio and the 
    server.  
     
      
 4.2. Managing Resource Control Channels 
     
    The client needs a separate MRCPv2 resource control channel to 
    control each media processing resource under the SIP session. A 
    unique channel identifier string identifies these resource control 
    channels. The channel identifier string consists of a hexadecimal 
    number specifying the channel ID, followed by a string token 
    specifying the type of resource, separated by an "@". The server 
    generates the hexadecimal channel ID and MUST make sure it does not 
    clash with any other MRCP channel allocated to that server. MRCPv2 
  

    defines the following types of media processing resources. 
    Additional resource types, their associated methods/events and 
    state machines can be added by future specifications proposing to 
    extend the capabilities of MRCPv2. 
     
           Resource Type       Resource Description 
            speechrecog         Speech Recognition 
            dtmfrecog           DTMF Recognition 
            speechsynth         Speech Synthesis 
            basicsynth          Poorman's Speech Synthesizer 
            speakverify         Speaker Verification 
            recorder            Speech Recording 
  
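    The channel identifier construction described above can be 
    sketched as follows. This is an illustrative sketch only, not part 
    of the protocol; the function names and the choice of a 12-digit 
    hexadecimal ID are assumptions of the sketch. 

    ```python
    import secrets

    # Resource types from the table above.
    RESOURCE_TYPES = {"speechrecog", "dtmfrecog", "speechsynth",
                      "basicsynth", "speakverify", "recorder"}

    def new_channel_identifier(resource_type, allocated):
        # Channel identifier = hexadecimal channel ID, "@", resource
        # type.  The server MUST ensure the ID does not clash with any
        # other channel active on that server; 'allocated' tracks the
        # IDs currently in use.
        if resource_type not in RESOURCE_TYPES:
            raise ValueError("unknown resource type: " + resource_type)
        while True:
            channel_id = secrets.token_hex(6).upper()  # e.g. "32AECB234338"
            if channel_id not in allocated:
                allocated.add(channel_id)
                return channel_id + "@" + resource_type

    def split_channel_identifier(channel):
        # Split "hexid@resourcetype" back into its two parts.
        channel_id, _, resource_type = channel.partition("@")
        return channel_id, resource_type
    ```

    For example, split_channel_identifier("32AECB234338@speechsynth") 
    yields the pair ("32AECB234338", "speechsynth"). 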
     
    The SIP INVITE or re-INVITE dialog exchange, and the SDP 
    offer/answer exchange it carries, will contain m-lines describing 
    the resource control channels the client wants to allocate. There 
    MUST be one SDP m-line for each MRCPv2 resource that needs to be 
    controlled. This m-line will have a media type field of "control" 
    and a transport type field of "TCP", "SCTP" or "TCP/TLS". The port 
    number field of the m-line MUST contain the discard port of the 
    transport protocol (e.g., port 9 for TCP) in the SDP offer from the 
    client and MUST contain the TCP listen port on the server in the 
    SDP answer. The client may then set up a TCP or TLS connection to 
    that server port or share an already established connection to 
    that port. The format field of the m-line MUST contain 
    "application/mrcpv2". The client MUST specify the resource type 
    identifier in the "resource" attribute associated with the control 
    m-line of the SDP offer. The server MUST respond with the full 
    Channel-Identifier (which includes the resource type identifier 
    and a unique hexadecimal identifier) in the "channel" attribute 
    associated with the control m-line of the SDP answer. 
     
    All servers MUST support TLS, SHOULD support TCP and MAY support 
    SCTP, and it is up to the client to choose which mode of transport 
    it wants to use for an MRCPv2 session. When using TCP, SCTP or TLS, 
    the m-lines MUST conform to the IETF draft [20], which describes 
    the usage of SDP for connection-oriented transport. When using TLS, 
    the SDP m-line for the control pipe MUST conform to the IETF draft 
    [21] in addition to [20]. The IETF draft [21] specifies the usage 
    of SDP for establishing a secure connection-oriented transport over 
    TLS. 
     
    When the client wants to add a media processing resource to the 
    session, it MUST initiate a re-INVITE dialog. The SDP offer/answer 
    exchange contained in this SIP dialog will contain an additional 
    control m-line for the new resource that needs to be allocated. The 
    server, on seeing the new m-line, will allocate the resource and 
    respond with a corresponding control m-line in the SDP answer.  
  

     
    The a=setup attribute, as described in [20], MUST be "active" for 
    the offer from the client and MUST be "passive" for the answer from 
    the MRCP server. The a=connection attribute MUST have a value of 
    "new" on the very first control m-line offer from the client to an 
    MRCP server. Subsequent control m-line offers from the client to 
    the MRCP server MAY contain "new" or "existing", depending on 
    whether the client wants to set up a new connection-oriented pipe 
    or share an existing one. The value of "existing" tells the server 
    that the client wants to reuse an existing transport connection 
    between the client and the server. The server can respond with a 
    value of "existing" if it wants to allow sharing of existing pipes, 
    or can reply with a value of "new", in which case the client MUST 
    initiate a new connection-oriented pipe.   
     
    Note: Only SDP m-lines having a common SDP format field of 
    "application/mrcpv2" can share connection-oriented pipes between 
    them. Such a pipe is reserved exclusively for MRCPv2 communication 
    and cannot be shared with any other protocol.  
     
    When the client wants to de-allocate a resource from the session, 
    it MUST initiate a SIP re-INVITE dialog with the server and MUST 
    offer the control m-line with a port of 0. The server MUST then 
    answer the control m-line with a port of 0 in its response. This 
    de-allocates the associated MRCP identifier and resource, but may 
    not close the TCP, SCTP or TLS connection if it is currently being 
    shared among multiple MRCP channels. When all MRCP channels that 
    may be sharing the connection are released, and the associated SIP 
    connections are closed, the client or server closes the shared 
    connection-oriented pipe. 
     
    Example 1:  
    This exchange adds a resource control channel for a synthesizer. 
    Since a synthesizer would be generating an audio stream, this 
    interaction also creates a receive-only audio stream for the server 
    to send audio to. 
      
    C->S:  
           INVITE sip:mresources@mediaserver.com SIP/2.0  
           Via: SIP/2.0/TCP client.atlanta.example.com:5060;  
                branch=z9hG4bK74bf9  
           Max-Forwards: 6  
           To: MediaServer <sip:mresources@mediaserver.com>  
           From: sarvi <sip:sarvi@cisco.com>;tag=1928301774  
           Call-ID: a84b4c76e66710  
           CSeq: 314161 INVITE  
           Contact: <sip:sarvi@cisco.com>  
           Content-Type: application/sdp  
           Content-Length: ... 
                        
           v=0  
           o=sarvi 2890844526 2890842808 IN IP4 126.16.64.4  
  

           s=-  
           c=IN IP4 224.2.17.12 
           m=control 9 TCP application/mrcpv2 
           a=setup:active 
           a=connection:new 
           a=resource:speechsynth 
           a=cmid:1 
           m=audio 49170 RTP/AVP 0 96  
           a=rtpmap:0 pcmu/8000  
           a=recvonly  
           a=mid:1 
          
    S->C:  
           SIP/2.0 200 OK  
           Via: SIP/2.0/TCP client.atlanta.example.com:5060; 
                branch=z9hG4bK74bf9 
           To: MediaServer <sip:mresources@mediaserver.com>  
           From: sarvi <sip:sarvi@cisco.com>;tag=1928301774  
           Call-ID: a84b4c76e66710  
           CSeq: 314161 INVITE  
           Contact: <sip:sarvi@cisco.com>  
           Content-Type: application/sdp  
           Content-Length: ...  
                        
           v=0  
           o=sarvi 2890844526 2890842808 IN IP4 126.16.64.4  
           s=-  
           c=IN IP4 224.2.17.12 
           m=control 32416 TCP application/mrcpv2 
           a=setup:passive 
           a=connection:new 
           a=channel:32AECB234338@speechsynth  
           a=cmid:1 
           m=audio 48260 RTP/AVP 0 96  
           a=rtpmap:0 pcmu/8000  
           a=sendonly  
           a=mid:1  
          
    C->S:  
           ACK sip:mresources@mediaserver.com SIP/2.0  
           Via: SIP/2.0/TCP client.atlanta.example.com:5060; 
                branch=z9hG4bK74bf9 
           Max-Forwards: 6  
           To: MediaServer <sip:mresources@mediaserver.com>;tag=a6c85cf  
           From: Sarvi <sip:sarvi@cisco.com>;tag=1928301774  
           Call-ID: a84b4c76e66710  
           CSeq: 314162 ACK  
           Content-Length: 0  
     
    Example 2:  

  

    This exchange continues from Example 1 and allocates an additional 
    resource control channel for a recognizer. Since a recognizer needs 
    to receive an audio stream for recognition, this interaction also 
    updates the audio stream to "sendrecv", making it a 2-way audio 
    stream. 
      
    C->S:  
           INVITE sip:mresources@mediaserver.com SIP/2.0  
           Via: SIP/2.0/TCP client.atlanta.example.com:5060; 
                branch=z9hG4bK74bf9 
           Max-Forwards: 6  
           To: MediaServer <sip:mresources@mediaserver.com>  
           From: sarvi <sip:sarvi@cisco.com>;tag=1928301774  
           Call-ID: a84b4c76e66710  
           CSeq: 314163 INVITE  
           Contact: <sip:sarvi@cisco.com>  
           Content-Type: application/sdp  
           Content-Length: ...  
                 
           v=0  
           o=sarvi 2890844526 2890842809 IN IP4 126.16.64.4  
           s=- 
           c=IN IP4 224.2.17.12 
           m=control 9 TCP application/mrcpv2 
           a=setup:active 
           a=connection:existing 
           a=resource:speechrecog 
           a=cmid:1 
           m=control 9 TCP application/mrcpv2 
           a=setup:active 
           a=connection:existing 
           a=resource:speechsynth 
           a=cmid:1 
           m=audio 49170 RTP/AVP 0 96  
           a=rtpmap:0 pcmu/8000  
           a=rtpmap:96 telephone-event/8000  
           a=fmtp:96 0-15  
           a=sendrecv  
           a=mid:1 
          
    S->C:  
           SIP/2.0 200 OK  
           Via: SIP/2.0/TCP client.atlanta.example.com:5060; 
                branch=z9hG4bK74bf9 
           To: MediaServer <sip:mresources@mediaserver.com>  
           From: sarvi <sip:sarvi@cisco.com>;tag=1928301774  
           Call-ID: a84b4c76e66710  
           CSeq: 314163 INVITE  
           Contact: <sip:sarvi@cisco.com>  
           Content-Type: application/sdp  
           Content-Length: 131  
  

                        
           v=0  
           o=sarvi 2890844526 2890842809 IN IP4 126.16.64.4  
           s=- 
           c=IN IP4 224.2.17.12 
           m=control 32416 TCP application/mrcpv2 
           a=setup:passive 
           a=connection:existing 
           a=channel:32AECB234338@speechrecog 
           a=cmid:1 
           m=control 32416 TCP application/mrcpv2 
           a=setup:passive 
           a=connection:existing 
           a=channel:32AECB234339@speechsynth 
           a=cmid:1 
           m=audio 48260 RTP/AVP 0 96  
           a=rtpmap:0 pcmu/8000  
           a=rtpmap:96 telephone-event/8000  
           a=fmtp:96 0-15  
           a=sendrecv  
           a=mid:1 
          
    C->S:  
           ACK sip:mresources@mediaserver.com SIP/2.0  
           Via: SIP/2.0/TCP client.atlanta.example.com:5060; 
                branch=z9hG4bK74bf9 
           Max-Forwards: 6  
           To: MediaServer <sip:mresources@mediaserver.com>;tag=a6c85cf  
           From: Sarvi <sip:sarvi@cisco.com>;tag=1928301774  
           Call-ID: a84b4c76e66710  
           CSeq: 314164 ACK  
           Content-Length: 0  
            
     
    Example 3:  
    This exchange continues from Example 2 and de-allocates the 
    recognizer channel. Since the recognizer no longer needs to receive 
    an audio stream, this interaction also updates the audio stream to 
    "recvonly". 
      
    C->S:  
           INVITE sip:mresources@mediaserver.com SIP/2.0  
           Via: SIP/2.0/TCP client.atlanta.example.com:5060; 
                branch=z9hG4bK74bf9 
           Max-Forwards: 6  
           To: MediaServer <sip:mresources@mediaserver.com>  
           From: sarvi <sip:sarvi@cisco.com>;tag=1928301774  
           Call-ID: a84b4c76e66710  
           CSeq: 314163 INVITE  
           Contact: <sip:sarvi@cisco.com>  
           Content-Type: application/sdp  
  

           Content-Length: ... 
                        
           v=0  
           o=sarvi 2890844526 2890842809 IN IP4 126.16.64.4  
           s=- 
           c=IN IP4 224.2.17.12 
           m=control 0 TCP application/mrcpv2 
           a=resource:speechrecog  
           a=cmid:1 
           m=control 9 TCP application/mrcpv2 
           a=resource:speechsynth  
           a=cmid:1   
           m=audio 49170 RTP/AVP 0 96  
           a=rtpmap:0 pcmu/8000  
           a=recvonly  
           a=mid:1 
          
    S->C:  
           SIP/2.0 200 OK  
           Via: SIP/2.0/TCP client.atlanta.example.com:5060; 
                branch=z9hG4bK74bf9 
           To: MediaServer <sip:mresources@mediaserver.com>  
           From: sarvi <sip:sarvi@cisco.com>;tag=1928301774  
           Call-ID: a84b4c76e66710  
           CSeq: 314163 INVITE  
           Contact: <sip:sarvi@cisco.com>  
           Content-Type: application/sdp  
           Content-Length: 131  
                 
           v=0  
           o=sarvi 2890844526 2890842809 IN IP4 126.16.64.4  
           s=- 
           c=IN IP4 224.2.17.12 
           m=control 0 TCP application/mrcpv2 
           a=channel:32AECB234338@speechrecog  
           a=cmid:1 
           m=control 32416 TCP application/mrcpv2 
           a=channel:32AECB234339@speechsynth  
           a=cmid:1 
           m=audio 48260 RTP/AVP 0 96  
           a=rtpmap:0 pcmu/8000  
           a=sendonly  
           a=mid:1 
          
    C->S:  
           ACK sip:mresources@mediaserver.com SIP/2.0  
           Via: SIP/2.0/TCP client.atlanta.example.com:5060; 
                branch=z9hG4bK74bf9 
           Max-Forwards: 6  
           To: MediaServer <sip:mresources@mediaserver.com>;tag=a6c85cf  
           From: Sarvi <sip:sarvi@cisco.com>;tag=1928301774  
  

           Call-ID: a84b4c76e66710  
           CSeq: 314164 ACK  
           Content-Length: 0  
  
 4.3. Media Streams and RTP Ports 
     
    The client or the server may need to add audio (or other media) 
    pipes between the client and the server and associate them with the 
    resource that will process or generate the media. One or more 
    resources can be associated with a single media channel, or each 
    resource can be assigned a separate media channel. For example, a 
    synthesizer and a recognizer could be associated with the same 
    media pipe (m=audio line) if it is opened in "sendrecv" mode. 
    Alternatively, the recognizer could have its own "sendonly" audio 
    pipe and the synthesizer could have its own "recvonly" audio pipe. 
      
    The association between control channels and their corresponding 
    media channels is established through the "mid" attribute defined 
    in RFC 3388 [20]. If there is more than one audio m-line, each 
    audio m-line MUST have a "mid" attribute. Each control m-line MUST 
    have a "cmid" attribute that matches the "mid" attribute of the 
    audio m-line it is associated with.  
     
      cmid-attribute      =    "a=cmid:" identification-tag 
       
      identification-tag = token 
     
    A single audio m-line can be associated with multiple resources or 
    each resource can have its own audio m-line. For example, if the 
    client wants to allocate a recognizer and a synthesizer and 
    associate them to a single 2-way audio pipe, the SDP offer should 
    contain two control m-lines and a single audio m-line with an 
    attribute of "sendrecv". Each of the control m-lines should have a 
    "cmid" attribute whose value matches the "mid" of the audio m-line. 
    If the client wants to allocate a recognizer and a synthesizer each 
    with its own separate audio pipe, the SDP offer would carry two 
    control m-lines (one for the recognizer and another for the 
    synthesizer) and two audio m-lines (one with the attribute 
    "sendonly" and another with attribute "recvonly"). The "cmid" 
    attribute of the recognizer control m-line would match the "mid" 
    value of the "sendonly" audio m-line and the "cmid" attribute of the 
    synthesizer control m-line would match the "mid" attribute of the 
    "recvonly" m-line.   
     
    When a server receives media (say, audio) on a media pipe that is 
    associated with more than one media processing resource, it is the 
    responsibility of the server to receive the media and fork it to 
    the resources that need it. If multiple resources in a session are 
    generating audio (or other media) that needs to be sent on a single 
    associated media pipe, it is the responsibility of the server to 
    mix the streams before sending them on the media pipe. The media 
    stream in either 
  

    direction may contain more than one synchronization source (SSRC) 
    identifier, due to multiple sources contributing to the media on 
    the pipe, and the client or server SHOULD be able to deal with 
    this. 
     
    If a server does not have the capability to mix or fork media in 
    the above cases, then the server SHOULD disallow the client from 
    associating multiple such resources with a single audio pipe, by 
    rejecting the SIP INVITE with a SIP 501 "Not Implemented" error.  
     
 4.4. MRCPv2 Message Transport 
     
    The MRCPv2 resource messages defined in this document are 
    transported over a TCP, SCTP or TLS pipe between the client and the 
    server. The setting up of this transport pipe and the resource 
    control channel is discussed in Section 4.2. Multiple resource 
    control channels between a client and a server that belong to 
    different SIP sessions can share one or more TLS, TCP or SCTP pipes 
    between them, and the server and client MUST support this 
    operation. The individual MRCPv2 messages carry the MRCPv2 channel 
    identifier in their Channel-Identifier header field, which MUST be 
    used to differentiate MRCPv2 messages from different resource 
    channels. All MRCPv2 servers MUST support TLS, SHOULD support TCP 
    and MAY support SCTP, and it is up to the client to choose which 
    mode of transport it wants to use for an MRCPv2 session.  
  
    Example 1: 
  
    C->S:  MRCP/2.0 483 SPEAK 543257 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Voice-gender: neutral 
           Voice-category: teenager 
           Prosody-volume: medium 
           Content-Type: application/synthesis+ssml 
           Content-Length: 104 
     
           <?xml version="1.0"?> 
           <speak> 
            <paragraph> 
              <sentence>You have 4 new messages.</sentence> 
              <sentence>The first is from <say-as  
              type="name">Stephanie Williams</say-as> 
              and arrived at <break/> 
              <say-as type="time">3:45pm</say-as>.</sentence> 
     
              <sentence>The subject is <prosody 
              rate="-20%">ski trip</prosody></sentence> 
            </paragraph> 
           </speak> 
     
    S->C:  MRCP/2.0 81 543257 200 IN-PROGRESS 
           Channel-Identifier: 32AECB23433802@speechsynth 
  

     
    S->C:  MRCP/2.0 89 SPEAK-COMPLETE 543257 COMPLETE 
           Channel-Identifier: 32AECB23433802@speechsynth 
  
    Most examples from here on show only the MRCPv2 messages and do not 
    show the SIP messages and headers that may have been used to 
    establish the MRCPv2 control channel.  
     
  
 5.   MRCPv2 Specification 
     
    The MRCPv2 PDU is textual, using the ISO 10646 character set in the 
    UTF-8 encoding (RFC 2044) to allow many different languages to be 
    represented. However, to assist in compact representations, MRCPv2 
    also allows other character sets such as ISO 8859-1 to be used when 
    desired. The MRCPv2 protocol headers (the first line of an MRCP 
    message) and field names use only the US-ASCII subset of UTF-8. 
    Internationalization applies only to certain fields like grammars, 
    results, speech markup etc., and not to MRCPv2 as a whole.   
     
    Lines are terminated by CRLF. Also, some parameters in the PDU may 
    contain binary data or a record spanning multiple lines. Such fields 
    have a length value associated with the parameter, which indicates 
    the number of octets immediately following the parameter. 
     
    All MRCPv2 messages, responses and events MUST carry the Channel-
    Identifier header field, so that the server or client can 
    differentiate messages from different control channels that may 
    share the same transport connection. 
     
    The MRCPv2 message set consists of requests from the client to the 
    server, responses from the server to the client and asynchronous 
    events from the server to the client. All these messages consist of 
    a start-line, one or more header fields (also known as "headers"), 
    an empty line (i.e. a line with nothing preceding the CRLF) 
    indicating the end of the header fields, and an optional message 
    body. 
     
      generic-message  =    start-line 
                            message-header 
                            CRLF 
                            [ message-body ] 
     
      start-line       =    request-line / response-line / event-line 
  
      message-header   =   1*(generic-header / resource-header) 
     
      resource-header  =    recognizer-header 
                       /    synthesizer-header 
                       /    recorder-header 
                       /    verifier-header 
  

                       /    extension-header 
     
      header-extension =    1*(ALPHANUM / "-") CRLF      
     
    The message-body contains resource-specific and message-specific 
    data that needs to be carried between the client and server as a 
    MIME entity. The information contained here and the actual MIME-
    types used to carry the data are specified later when addressing the 
    specific messages.  
     
    If a message contains data in the message body, the header fields 
    will contain content-headers indicating the MIME-type and encoding 
    of the data in the message body. 
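     
    The generic-message structure above can be illustrated with a short 
    parsing sketch. This is not part of the specification: it is a 
    simplified Python illustration that ignores header-field line 
    folding and does not validate the start-line or message-length. 

```python
def parse_generic_message(raw: bytes):
    """Split a raw MRCPv2 message into start-line, headers and body.

    Illustrative sketch only: assumes no folded (multi-line) header
    fields and performs no validation of the start-line.
    """
    # The empty line (bare CRLF) separates the header section from the body.
    head, _, body = raw.partition(b"\r\n\r\n")
    lines = head.decode("ascii").split("\r\n")
    start_line = lines[0]
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        # Field names are case-insensitive, so normalize to lowercase.
        headers[name.strip().lower()] = value.strip()
    return start_line, headers, body
```

    A real implementation would additionally honor the message-length 
    token for framing and fold continued header lines first. 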
     
 5.1. Request 
     
    An MRCPv2 request consists of a request line followed by one or 
    more header fields and an optional message body containing data 
    specific to the request message.  
     
    The request line of a Request message from the client to the 
    server carries the version of the protocol in use, the message 
    length, the method to be applied, and a request-id identifying 
    that request. 
     
      request-line   =    mrcp-version SP message-length SP method-name 
                          SP request-id CRLF 
     
    The mrcp-version field is the MRCPv2 protocol version that is being 
    used by the client. Request, response and event messages include the 
    version of MRCP in use, and follow [H3.1] (with HTTP replaced by 
    MRCP, and HTTP/1.1 replaced by MRCP/2.0) regarding version ordering, 
    compliance requirements, and upgrading of version numbers. To be 
    compliant with this specification, applications sending MRCP 
    messages MUST include a mrcp-version of "MRCP/2.0". 
     
     
      mrcp-version   =    "MRCP" "/" 1*DIGIT "." 1*DIGIT 
     
    The message-length field specifies the length of the message and 
    MUST be the second token from the beginning of the message. This 
    makes framing and parsing of the message simpler. 
     
      message-length =    1*DIGIT 
    
    The request-id field is a unique identifier, representable as an 
    unsigned 32-bit integer, created by the client and sent to the 
    server. The initial value of the request-id is arbitrary. 
    Consecutive requests within an MRCP session MUST contain strictly 
    monotonically increasing and contiguous request-ids. The server 
    resource MUST use this identifier in its response to this request. 
    If the request does not complete with the response, future 
    asynchronous events associated with this request MUST carry the 
    request-id. 
     
      request-id    =    1*DIGIT 
  
    The method-name field identifies the specific request that the 
    client is making to the server. Each resource supports a certain 
    set of requests or methods that can be issued to it; these are 
    addressed in later sections.  
     
      method-name    =    generic-method      ; Section 6 
                     /    synthesizer-method 
                     /    recorder-method 
                     /    recognizer-method 
                     /    verifier-method 
                     /    extension-methods 
     
      extension-methods = 1*(ALPHA / "-") 
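     
    Under the request-line ABNF above, a request line such as the 
    SET-PARAMS example later in this document can be taken apart by 
    simple tokenization. A minimal, non-normative sketch: 

```python
def parse_request_line(line: str):
    # request-line = mrcp-version SP message-length SP method-name
    #                SP request-id
    version, length, method, request_id = line.split(" ")
    if not version.startswith("MRCP/"):
        raise ValueError("not an MRCP start-line")
    return version, int(length), method, int(request_id)
```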
     
 5.2. Response 
     
    After receiving and interpreting the request message, the server 
    resource responds with an MRCPv2 response message. It consists of 
    a response line followed by header fields and an optional message 
    body. 
     
      response-line  =    mrcp-version SP message-length SP request-id 
                     SP status-code SP request-state CRLF 
     
    The mrcp-version field used here MUST be the same as the one used in 
    the Request Line and specifies the version of MRCPv2 protocol 
    running on the server. 
     
    The request-id used in the response MUST match the one sent in the 
    corresponding request message. 
     
    The status-code field is a 3-digit code representing the success or 
    failure or other status of the request. 
  
    The request-state field indicates whether the job initiated by the 
    Request is PENDING, IN-PROGRESS or COMPLETE. The COMPLETE state 
    means that the Request was processed to completion and that there 
    will be no more events from that resource to the client with that 
    request-id. The PENDING state means that the job has been placed 
    on a queue and will be processed in first-in-first-out order. The 
    IN-PROGRESS state means that the request is being processed and is 
    not yet complete. A PENDING or IN-PROGRESS state indicates that 
    further Event messages will be delivered with that request-id. 
     
      request-state    =  "COMPLETE" 
                       /  "IN-PROGRESS"        
                       /  "PENDING" 
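     
    A response line can be tokenized the same way as a request line; 
    the following sketch (not from the specification) also checks the 
    request-state against the three values defined above: 

```python
VALID_STATES = ("COMPLETE", "IN-PROGRESS", "PENDING")

def parse_response_line(line: str):
    # response-line = mrcp-version SP message-length SP request-id
    #                 SP status-code SP request-state
    version, length, request_id, status, state = line.split(" ")
    if state not in VALID_STATES:
        raise ValueError("bad request-state: " + state)
    return version, int(length), int(request_id), int(status), state
```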
  
 Status Codes 
     
    The status codes are classified into Success (2xx), Client 
    Failure (4xx) and Server Failure (5xx) codes. 
     
 Success 2xx 
     
       200       Success 
       201       Success with some optional headers ignored. 
     
 Client Failure 4xx 
     
       401       Method not allowed 
       402       Method not valid in this state 
       403       Unsupported Header 
       404       Illegal Value for Header 
       405       Not found (e.g. Resource URI not initialized  
                 or doesn't exist) 
       406       Mandatory Header Missing 
      407       Method or Operation Failed (e.g., grammar compilation 
                failed in the recognizer; detailed cause codes MAY be 
                available through a resource-specific header field) 
       408       Unrecognized or unsupported message entity 
       409       Unsupported Header Value 
       421-499   Resource specific Failure codes 
     
 Server Failure 5xx 
     
       501       Server Internal Error 
       502       Protocol Version not supported 
       503       Proxy Timeout. The MRCP Proxy did not receive a 
                 response from the MRCP server. 
       504       Message too large.  
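     
    The class of a status code follows from its first digit, so a 
    client can fall back to per-class handling for specific codes it 
    does not recognize (e.g., a resource-specific 4xx code). A sketch: 

```python
def status_class(code: int) -> str:
    # 2xx: success, 4xx: client failure, 5xx: server failure
    if 200 <= code <= 299:
        return "success"
    if 400 <= code <= 499:
        return "client failure"
    if 500 <= code <= 599:
        return "server failure"
    raise ValueError("unknown status class: %d" % code)
```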
     
     
 5.3. Event 
     
    The server resource may need to communicate a change in state or 
    the occurrence of a certain event to the client. These messages 
    are used when a request does not complete immediately and the 
    response returns a request-state of PENDING or IN-PROGRESS. The 
    intermediate results and events of the request are indicated to 
    the client through event messages from the server. Events carry 
    the request-id of the request that is in progress and generating 
    these events, and a request-state value. The request-state is 
    COMPLETE if the request is done and this was the last event; 
    otherwise it is IN-PROGRESS.  
     
      event-line       =  mrcp-version SP message-length SP event-name 
                          SP request-id SP request-state CRLF 
     

  

    The mrcp-version used here is identical to the one used in the 
    Request/Response Line and indicates the version of MRCPv2 protocol 
    running on the server. 
     
    The request-id used in the event MUST match the one sent in the 
    request that caused this event. 
     
    The request-state indicates whether the Request/Command causing 
    this event is complete or still in progress, and takes the same 
    values as in section 5.2. The final event contains a COMPLETE 
    request-state, indicating the completion of the request. 
     
    The event-name identifies the nature of the event generated by the 
    media resource. The set of valid event names depends on the 
    resource generating the event and is addressed in later sections. 
     
      event-name       =  synthesizer-event 
                       /  recognizer-event 
                       /  recorder-event 
                       /  verifier-event 
                       /  extension-event 
     
      extension-event  =  1*(ALPHA / "-") 
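     
    Requests, responses and events all share the first two start-line 
    tokens (mrcp-version and message-length), so a parser has to look 
    further into the line to tell them apart: in a response-line the 
    third token is the numeric request-id, while an event-line, unlike 
    a request-line, ends with a request-state. A heuristic sketch (not 
    from the specification): 

```python
REQUEST_STATES = ("COMPLETE", "IN-PROGRESS", "PENDING")

def classify_start_line(line: str) -> str:
    tokens = line.split(" ")
    if tokens[2].isdigit():
        # Third token is numeric: a request-id, so this is a response.
        return "response"
    # Both requests and events name a method/event third; only an
    # event-line carries a trailing request-state.
    return "event" if tokens[-1] in REQUEST_STATES else "request"
```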
     
 6.   MRCP Generic Features 
    The protocol supports a set of methods and header fields that are 
    common to all resources; they are discussed in this section. 
     
      generic-method      =    "SET-PARAMS" 
                          /    "GET-PARAMS" 
     
 6.1. Generic Message Headers 
     
    MRCPv2 header fields, which include general-header (section 5.5) and 
    resource-specific-header (section 7.4 and section 8.4), follow the 
    same generic format as that given in Section 3.1 of RFC 822 [8]. 
    Each header field consists of a name followed by a colon (":") and 
    the field value. Field names are case-insensitive. The field value 
    MAY be preceded by any amount of LWS, though a single SP is 
    preferred. Header fields can be extended over multiple lines by 
    preceding each extra line with at least one SP or HT. 
     
      message-header = field-name ":" [ field-value ] 
      field-name     = token 
      field-value    = *LWS field-content *( CRLF 1*LWS field-content) 
      field-content  = <the OCTETs making up the field-value 
                        and consisting of either *TEXT or combinations 
                        of token, separators, and quoted-string> 
     
    The field-content does not include any leading or trailing LWS: 
    linear white space occurring before the first non-whitespace 
    character of the field-value or after the last non-whitespace 
    character of the field-value. Such leading or trailing LWS MAY be 
    removed without changing the semantics of the field value. Any LWS 
    that occurs between field-content MAY be replaced with a single SP 
    before interpreting the field value or forwarding the message 
    downstream. 
     
    The order in which header fields with differing field names are 
    received is not significant. However, it is "good practice" to send 
    general-header fields first, followed by request-header or response-
    header fields, and ending with the entity-header fields. 
     
    Multiple message-header fields with the same field-name MAY be 
    present in a message if and only if the entire field-value for that 
    header field is defined as a comma-separated list [i.e., #(values)]. 
     
    It MUST be possible to combine the multiple header fields into one 
    "field-name: field-value" pair, without changing the semantics of 
    the message, by appending each subsequent field-value to the first, 
    each separated by a comma. The order in which header fields with the 
    same field-name are received is therefore significant to the 
    interpretation of the combined field value, and thus a proxy MUST 
    NOT change the order of these field values when a message is 
    forwarded. 
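     
    The combining rule above can be sketched as follows, joining 
    repeated field values with commas while preserving their arrival 
    order (assuming the fields in question are defined as comma-
    separated lists): 

```python
def combine_headers(pairs):
    """Fold an ordered list of (field-name, field-value) pairs into
    one value per field name, comma-joining repeats in order."""
    combined = {}
    for name, value in pairs:
        key = name.lower()  # field names are case-insensitive
        if key in combined:
            # Append, preserving the order the values were received in.
            combined[key] = combined[key] + "," + value
        else:
            combined[key] = value
    return combined
```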
     
      generic-header      =    channel-identifier 
                          /    active-request-id-list 
                          /    proxy-sync-id 
                          /    content-id 
                          /    content-type 
                          /    content-length 
                          /    content-base 
                          /    content-location 
                          /    content-encoding 
                          /    cache-control 
                          /    logging-tag  
                          /    set-cookie  
                          /    set-cookie2 
                          /    vendor-specific      
     
    Header field          where     s  g  A 
    __________________________________________________________ 
    Channel-Identifier      R       m  m  m 
    Channel-Identifier      r       m  m  m 
    Active-Request-Id-List  R       -  -  O 
    Active-Request-Id-List  r       -  -  O 
    Proxy-Sync-Id           R       -  -  O 
    Content-Id              R       o  o  o 
    Content-Type            R       o  o  o 
    Content-Length          R       o  o  o 
    Content-Base            R       o  o  o 
    Content-Location        R       o  o  o 
    Content-Encoding        R       o  o  o 
    Cache-Control           R       o  o  o 
    Logging-Tag             R       o  o  - 
    Set-Cookie              R       o  o  o 
    Set-Cookie2             R       o  o  o 
    Vendor-Specific         R       o  o  o 
     
    Legend:   (s) - SET-PARAMS, (g) - GET-PARAMS, (A) - Generic MRCP 
    message, (B) - BARGE-IN-OCCURRED, (C) - START-OF-SPEECH, (o) - 
    Optional (refer to text for further constraints), (R) - Request, 
    (r) - Response 
     
    All header field names in MRCPv2 are case-insensitive, consistent 
    with the HTTP and SIP protocol header definitions. 
     
 Channel-Identifier 
     
    All MRCPv2 methods, responses and events MUST contain the Channel-
    Identifier header field. The value of this field is allocated by 
    the server when the control channel is added to the session 
    through an SDP offer/answer exchange. It consists of two parts 
    separated by the '@' symbol. The first part is an unsigned 32-bit 
    integer, encoded as a hexadecimal string, identifying the MRCP 
    session. The second part is a string token that specifies one of 
    the media processing resource types listed in Section 3.2. The 
    hexadecimal digit string MUST be unique within the server and is 
    common to all resource channels established through a single SIP 
    session. 
     
      channel-identifier  = "Channel-Identifier" ":" channel-id CRLF 
     
      channel-id          = 1*HEXDIG "@" 1*VCHAR 
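     
    For example, in the channel identifier 
    '32AECB23433802@speechsynth' used in the examples later in this 
    document, '32AECB23433802' is the session's hexadecimal string and 
    'speechsynth' the resource type. A sketch of splitting and 
    validating such a value: 

```python
def parse_channel_id(value: str):
    # channel-id = 1*HEXDIG "@" 1*VCHAR
    session_hex, sep, resource = value.partition("@")
    if sep != "@" or not session_hex or not resource:
        raise ValueError("malformed channel-id")
    int(session_hex, 16)  # raises ValueError unless all hex digits
    return session_hex, resource
```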
     
 Active-Request-Id-List 
     
    In a request, this field indicates the list of request-ids to 
    which the request should apply. This is useful when there are 
    multiple Requests that are PENDING or IN-PROGRESS and the client 
    wants this request to apply to one or more of them specifically.  
     
    In a response, this field returns the list of request-ids that the 
    operation modified or affected. There could be one or more requests 
    that returned a request-state of PENDING or IN-PROGRESS. When a 
    method affecting one or more PENDING or IN-PROGRESS requests is sent 
    from the client to the server, the response MUST contain the list of 
    request-ids that were affected or modified by this command in its 
    header field. 
     
    The active-request-id-list is only used in requests and responses, 
    not in events. 
  
     
    For example, if a STOP request with no active-request-id-list is 
    sent to a synthesizer resource (a wildcard STOP) that has one or 
    more SPEAK requests in the PENDING or IN-PROGRESS state, all SPEAK 
    requests MUST be cancelled, including the one IN-PROGRESS, and the 
    response to the STOP request contains, in its active-request-id-
    list, the request-ids of all the SPEAK requests that were 
    terminated. In this case, no SPEAK-COMPLETE or RECOGNITION-
    COMPLETE events will be sent for the terminated requests. 
     
      active-request-id-list  =  "Active-Request-Id-List" ":"  
                                  request-id *("," request-id) CRLF 
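     
    Parsing such a comma-separated list of request-ids is 
    straightforward; a sketch: 

```python
def parse_active_request_id_list(value: str):
    # Active-Request-Id-List value: request-id *("," request-id)
    return [int(rid.strip()) for rid in value.split(",")]
```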
     
 Proxy-Sync-Id 
     
    When a server resource generates a barge-in-able event, it 
    generates a unique tag and sends it as a header field in the event 
    to the client. The client then acts as a proxy to the server 
    resource and sends a BARGE-IN-OCCURRED method to the synthesizer 
    server resource with the Proxy-Sync-Id it received from the server 
    resource. When the recognizer and synthesizer resources are part 
    of the same session, they may choose to work together to achieve 
    quicker interaction and response. Here the Proxy-Sync-Id helps the 
    resource receiving the event, proxied by the client, to decide 
    whether this event has already been processed through a direct 
    interaction of the resources. 
     
      proxy-sync-id    =  "Proxy-Sync-Id" ":" 1*VCHAR CRLF    
     
 Accept-Charset 
     
    See [H14.2]. This specifies the acceptable character set for 
    entities returned in the response or events associated with this 
    request. This is useful in specifying the character set to use in 
    the NLSML results of a RECOGNITION-COMPLETE event.  
     
 Content-Type 
     
    See [H14.17]. Note that the content types suitable for MRCPv2 are 
    restricted to speech markup, grammar, recognition results, etc., 
    and are specified later in this document. The multi-part content 
    type "multipart/mixed" is supported to communicate more than one 
    of the above content types, in which case the body parts cannot 
    contain any MRCPv2-specific headers. 
     
 Content-Id 
     
    This field contains an ID or name for the content, by which it can 
    be referred to.  The definition of this field is in full 
    compliance with RFC 2111 [15] and is needed in multi-part 
    messages. In MRCPv2, whenever the content needs to be stored by 
    either the client or the server, it is stored associated with this 
    ID. Such content can be referenced during the session in URI form 
    using the 'session:' URI scheme described in a later section.  
     
 Content-Base 
     
    The content-base entity-header field MAY be used to specify the 
    base URI for resolving relative URLs within the entity. 
     
      content-base      = "Content-Base" ":" absoluteURI CRLF 
     
    Note, however, that the base URI of the contents within the entity-
    body may be redefined within that entity-body. An example of this 
    would be a multi-part MIME entity, which in turn can have multiple 
    entities within it. 
     
 Content-Encoding 
     
    The content-encoding entity-header field is used as a modifier to 
    the media-type. When present, its value indicates what additional 
    content codings have been applied to the entity-body, and thus 
    what decoding mechanisms must be applied in order to obtain the 
    media-type referenced by the content-type header field. Content-
    encoding is primarily used to allow a document to be compressed 
    without losing the identity of its underlying media type. 
     
      content-encoding  = "Content-Encoding" ":"  
                               *WSP content-coding  
                               *(*WSP "," *WSP content-coding *WSP ) 
                               CRLF 
     
    Content coding is defined in [H3.5]. An example of its use is 
     
      Content-Encoding: gzip 
     
    If multiple encodings have been applied to an entity, the content 
    codings MUST be listed in the order in which they were applied.  
  
 Content-Location 
     
    The content-location entity-header field MAY be used to supply the 
    resource location for the entity enclosed in the message when that 
    entity is accessible from a location separate from the requested 
    resource's URI. Refer to [H14.14]. 
     
      content-location =  "Content-Location" ":" 
                          ( absoluteURI / relativeURI ) CRLF 
     
    The content-location value is a statement of the location of the 
    resource corresponding to this particular entity at the time of 
    the request. The server MAY use this header field to optimize 
    certain operations. When this header field is provided, the entity 
    being sent should not have been modified from what was retrieved 
    from the content-location URI. 
     
    For example, if the client provided a grammar markup inline, and 
    it had previously retrieved it from a certain URI, that URI can be 
    provided as part of the entity using the content-location header 
    field. This allows a resource such as the recognizer to look in 
    its cache to see if this grammar was previously retrieved, 
    compiled and cached, in which case it might optimize by using the 
    previously compiled grammar object. 
     
    If the content-location is a relative URI, the relative URI is 
    interpreted relative to the content-base URI. 
     
     
 Content-Length 
     
    This field contains the length of the content of the message body 
    (i.e. after the double CRLF following the last header field). Unlike 
    HTTP, it MUST be included in all messages that carry content beyond 
    the header portion of the message. If it is missing, a default value 
    of zero is assumed. It is interpreted according to [H14.13]. 
     
 Cache-Control 
     
    If the server plans on implementing caching, it MUST adhere to the 
    cache correctness rules of HTTP/1.1 (RFC 2616) when accessing and 
    caching HTTP URIs. In particular, the expires and cache-control 
    headers of the cached URI or document must be honored and will 
    always take precedence over the Cache-Control defaults set by this 
    header field. The cache-control directives are used to define the 
    default caching algorithms on the server for the session or request. 
    The scope of the directive is based on the method it is sent on. If 
    the directives are sent on a SET-PARAMS method, it MUST apply for 
    all requests for external documents the server makes during that 
    session. If the directives are sent on any other messages they MUST 
    only apply to external document requests the server makes for that 
    method. An empty cache-control header on the GET-PARAMS method is a 
    request for the server to return the current cache-control 
    directives setting on the server. 
     
      cache-control       = "Cache-Control" ":" cache-directive  
                                       *("," *LWS cache-directive) CRLF 
     
      cache-directive     = "max-age" "=" delta-seconds     
                          / "max-stale" [ "=" delta-seconds ] 
                          / "min-fresh" "=" delta-seconds  
     
      delta-seconds       = 1*DIGIT     
     
  
    Here delta-seconds is a decimal time value to be specified as the 
    number of seconds from the time that the message response or data 
    was received by the server. 
     
    These directives allow the server to override the basic expiration 
    mechanism. 
     
    max-age 
     
    Indicates that the client is willing to accept the server using a 
    response whose age is no greater than the specified time in 
    seconds. Unless a max-stale directive is also included, the client 
    is not willing to accept the server using a stale response. 
     
    min-fresh 
     
    Indicates that the client is willing to accept the server using a 
    response whose freshness lifetime is no less than its current age 
    plus the specified time in seconds. That is, the client wants the 
    server to use a response that will still be fresh for at least the 
    specified number of seconds. 
     
    max-stale 
     
    Indicates that the client is willing to accept the server using a 
    response or data that has exceeded its expiration time. If max-stale 
    is assigned a value, then the client is willing to accept the server 
    using a response that has exceeded its expiration time by no more 
    than the specified number of seconds. If no value is assigned to 
    max-stale, then the client is willing to accept the server using a 
    stale response of any age. 
     
     
    The server cache MAY be requested to use stale response/data 
    without validation, but only if this does not conflict with any 
    "MUST"-level requirements concerning cache validation (e.g., a 
    "must-revalidate" cache-control directive) in the HTTP/1.1 
    specification pertaining to the URI. 
     
    If both the MRCPv2 cache-control directive and the cached entry on 
    the server include "max-age" directives, then the lesser of the two 
    values is used for determining the freshness of the cached entry for 
    that request. 
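     
    As an illustration of the directives and of the "lesser of the two 
    max-age values" rule above, the following non-normative sketch 
    parses a cache-control value and computes the effective max-age 
    for a cached entry: 

```python
def parse_cache_control(value: str):
    # cache-directive = "max-age" "=" delta-seconds
    #                 / "max-stale" [ "=" delta-seconds ]
    #                 / "min-fresh" "=" delta-seconds
    directives = {}
    for part in value.split(","):
        name, _, val = part.strip().partition("=")
        # max-stale may appear without a value; record it as None.
        directives[name] = int(val) if val else None
    return directives

def effective_max_age(directive_value, cached_entry_value):
    # Lesser of the MRCPv2 directive and the cached entry's max-age.
    values = [v for v in (directive_value, cached_entry_value)
              if v is not None]
    return min(values) if values else None
```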
     
 Logging-Tag 
     
    This header field MAY be sent as part of a SET-PARAMS/GET-PARAMS 
    method to set the logging tag for logs generated by the server. 
    Once set, the value persists until a new value is set or the 
    session is ended.  The MRCPv2 server SHOULD provide a mechanism to 
    subset its output logs so that system administrators can examine 
    or extract only the portion of the log file during which the 
    logging tag was set to a certain value. 
     
    MRCPv2 clients using this feature SHOULD take care to ensure that 
    no two clients specify the same logging tag.  In the event that 
    two clients specify the same logging tag, the effect on the MRCPv2 
    server's output logs is undefined. 
     
           logging-tag    =    "Logging-Tag" ":" 1*VCHAR CRLF 
     
 Set-Cookie and Set-Cookie2 
              
    Since the HTTP client on the MRCP server fetches documents for 
    processing on behalf of the MRCP client, the cookie store in the 
    HTTP client of the MRCP server is considered to be an extension of 
    the cookie store in the HTTP client of the MRCP client. This 
    requires that the MRCP client and server be able to synchronize 
    their cookie stores as needed. The MRCP client should be able to 
    push its stored cookies to the MRCP server and get back the new 
    cookies that the MRCPv2 server stored. The set-cookie and 
    set-cookie2 entity-header fields MAY be included in MRCPv2 
    requests to update the cookie store on a server and be returned in 
    final MRCPv2 responses or events to subsequently update the 
    client's own cookie store. The cookies stored on the server 
    persist for the duration of the MRCPv2 session and MUST be 
    destroyed at the end of the session. Since the type of cookie 
    header is dictated by the HTTP origin server, MRCPv2 clients and 
    servers SHOULD support both the set-cookie and set-cookie2 
    entity-header fields. 
            
          set-cookie      =       "Set-Cookie:" cookies CRLF 
          cookies         =       cookie *("," *LWS cookie) 
          cookie          =       NAME "=" VALUE *(";" cookie-av) 
          NAME            =       attribute 
          VALUE           =       value 
          cookie-av       =       "Comment" "=" value 
                          /       "Domain" "=" value 
                          /       "Max-Age" "=" value 
                          /       "Path" "=" value 
                          /       "Secure" 
                          /       "Version" "=" 1*DIGIT 
                          /       "Age" "=" delta-seconds 
                               
          set-cookie2     =       "Set-Cookie2:" cookies2 CRLF 
          cookies2        =       cookie2 *("," *LWS cookie2) 
          cookie2         =       NAME "=" VALUE *(";" cookie-av2) 
          NAME            =       attribute 
          VALUE           =       value 
          cookie-av2      =       "Comment" "=" value 
                          /       "CommentURL" "=" <"> http_URL <"> 
                          /       "Discard" 
                          /       "Domain" "=" value 
                          /       "Max-Age" "=" value 
                          /       "Path" "=" value 
                          /       "Port" [ "=" <"> portlist <"> ] 
                          /       "Secure" 
                          /       "Version" "=" 1*DIGIT 
                          /       "Age" "=" delta-seconds 
          portlist        =       portnum *("," *LWS portnum) 
          portnum         =       1*DIGIT 
                           
    The set-cookie and set-cookie2 header fields are specified in RFC 
    2109 and RFC 2965, respectively. The "Age" attribute is introduced 
    in this specification to indicate the age of the cookie and is 
    OPTIONAL. An MRCPv2 client or server SHOULD calculate the age of 
    the cookie according to the age calculation rules in the HTTP/1.1 
    specification (RFC 2616) and append the "Age" attribute 
    accordingly.  
             
    The media client or server MUST supply defaults for the Domain and 
    Path attributes if omitted by the HTTP origin server as specified in 
    RFC 2109 (set-cookie) and RFC 2965 (set-cookie2). Note that there 
    will be no leading dot present in the Domain attribute value in this 
    case. Although an explicitly specified Domain value received via the 
    HTTP protocol may be modified to include a leading dot, a media 
    client or server MUST NOT modify the Domain value when received via 
    the MRCPv2 protocol. 
         
    A media client or server MAY combine multiple cookie header fields  
    of the same type into a single "field-name: field-value" pair as 
    described in Section 6.1.  
              
    The set-cookie and set-cookie2 headers MAY be specified in any 
    request that subsequently results in the server performing an HTTP 
    access. When a server receives new cookie information from an HTTP 
    origin server, and assuming the cookie store is modified according 
    to RFC 2109 or RFC 2965, the server MUST return the new cookie 
    information in the MRCPv2 COMPLETE response or event, as 
    appropriate, to allow the client to update its own cookie store.  
         
    The SET-PARAMS request MAY specify the set-cookie and set-cookie2 
    headers to update the cookie store on a server. The GET-PARAMS 
    request MAY be used to return the entire cookie store of "Set-
    Cookie" or "Set-Cookie2" type cookies to the client. 
     
 Vendor Specific Parameters 
     
    This set of headers allows the client to set vendor-specific 
    parameters.  
     
      vendor-specific     =    "Vendor-Specific-Parameters" ":" 
                               vendor-specific-av-pair  
                               *(";" vendor-specific-av-pair) CRLF  
      vendor-specific-av-pair = vendor-av-pair-name "="  
                               vendor-av-pair-value 
     
    This header MAY be sent in the SET-PARAMS/GET-PARAMS methods and 
    is used to set vendor-specific parameters on the server side. The 
    vendor-av-pair-name can be any vendor-specific field name and 
    conforms to the XML vendor-specific attribute naming convention. 
    The vendor-av-pair-value is the value to set the attribute to and 
    needs to be quoted. 
     
    When asking the server for the current value of these parameters, 
    this header can be sent in the GET-PARAMS method with the list of 
    vendor-specific attribute names to retrieve, separated by 
    semicolons. 
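     
    A sketch of reading a Vendor-Specific-Parameters value into a 
    dictionary follows; the parameter names used in it are invented 
    purely for illustration: 

```python
def parse_vendor_specific(value: str):
    # vendor-specific-av-pair = vendor-av-pair-name "="
    #                           vendor-av-pair-value
    pairs = {}
    for av in value.split(";"):
        name, _, val = av.partition("=")
        # Values are quoted; strip the surrounding quotes.
        pairs[name.strip()] = val.strip().strip('"')
    return pairs
```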
     
 6.2. SET-PARAMS 
     
    The SET-PARAMS method, from the client to server, tells the MRCP 
    resource to define session parameters, like voice characteristics 
    and prosody on synthesizers or recognition timers on recognizers 
    etc. If the server accepted and set all parameters, it MUST return 
    a Response-Status of 200. If it chose to ignore some optional 
    headers that can be safely ignored without affecting the operation 
    of the server, it MUST return 201. 
     
    If some of the headers being set are unsupported for the resource 
    or have illegal values, the server MUST reject the request with a 
    403, Bad Parameter, and MUST include in the response the header 
    fields that could not be set. The headers specified in SET-PARAMS 
    affect the session-level values. They do not apply at request-level 
    scope or to requests that are IN-PROGRESS. 
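    The 200/201/403 selection rules above can be sketched as follows. 
    This is a minimal, non-normative Python sketch; the function and 
    argument names are hypothetical. 

```python
def set_params_status(requested, supported, ignorable):
    """Choose a SET-PARAMS response status per the rules above.

    requested: header names from the SET-PARAMS request.
    supported: header names the resource can actually set.
    ignorable: optional headers that can be skipped without affecting
    operation. Returns (status, bad_headers); on 403 the offending
    header fields are echoed so the client knows what was not set.
    """
    bad = [h for h in requested if h not in supported and h not in ignorable]
    if bad:
        return 403, bad        # Bad Parameter: list headers not set
    ignored = [h for h in requested if h not in supported]
    if ignored:
        return 201, []         # accepted; optional headers ignored
    return 200, []             # every parameter was applied
```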
     
    Example: 
      C->S:MRCP/2.0 124 SET-PARAMS 543256 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Voice-gender: female 
           Voice-category: adult 
           Voice-variant: 3 
          
          
      S->C:MRCP/2.0 47 543256 200 COMPLETE 
           Channel-Identifier: 32AECB23433802@speechsynth 
     
 6.3. GET-PARAMS 
     
    The GET-PARAMS method, from the client to the server, asks the 
    MRCPv2 resource for its current session parameters, such as voice 
    characteristics and prosody on synthesizers or recognition timers 
    on recognizers. The client SHOULD send the list of parameters it 
    wants to read from the server by listing a set of empty header 
    fields. If a specific list is not supplied, the server SHOULD 
    return all the settable headers, including vendor-specific 
    parameters, and their current values. This wildcard use can be 
    very intensive as 
  
    the number of settable parameters can be large, depending on the 
    vendor.  Hence it is RECOMMENDED that the client does not use the 
    wildcard GET-PARAMS operation very often. Note that GET-PARAMS 
    returns header values that have been set for the whole session and 
    does not return values that have request-level scope. 
     
    Example: 
      C->S:MRCP/2.0 136 GET-PARAMS 543256 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Voice-gender: 
           Voice-category:  
           Voice-variant: 
           Vendor-Specific-Parameters:com.mycorp.param1; 
                       com.mycorp.param2 
     
      S->C:MRCP/2.0 163 543256 200 COMPLETE 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Voice-gender:female 
           Voice-category: adult 
           Voice-variant: 3 
           Vendor-Specific-Parameters:com.mycorp.param1="Company Name"; 
                          com.mycorp.param2="124324234@mycorp.com" 
     
  
 7.   Resource Discovery 
     
    The list and capability of media resources on a server can be found 
    using the SIP OPTIONS method requesting the capability of the 
    server. The server SHOULD respond to such a request with an SDP 
    description of its capabilities according to RFC 3264. The MRCPv2 
    capabilities are described by a single m-line containing the media 
    type "control", transport type "TLS", "TCP" or "SCTP" and a format 
    of "application/mrcpv2". There should be one "resource" attribute 
    for each media resource that the server supports with the resource 
    type identifier as its value. 
      
    The SDP description MUST also contain m-lines describing the audio 
    capabilities, and the coders it supports. 
     
     
    Example 4: 
    The client uses the SIP OPTIONS method to query the capabilities of 
    the MRCPv2 server. 
     
    C->S: 
           OPTIONS sip:mrcp@mediaserver.com SIP/2.0 
           Max-Forwards: 6 
           To: <sip:mrcp@mediaserver.com> 
           From: Sarvi <sip:sarvi@cisco.com>;tag=1928301774 
           Call-ID: a84b4c76e66710 
           CSeq: 63104 OPTIONS 
  
           Contact: <sip:sarvi@cisco.com> 
           Accept: application/sdp 
           Content-Length: 0 
     
     
    S->C: 
           SIP/2.0 200 OK 
           To: <sip:mrcp@mediaserver.com>;tag=93810874 
           From: Sarvi <sip:sarvi@Cisco.com>;tag=1928301774 
           Call-ID: a84b4c76e66710 
           CSeq: 63104 OPTIONS 
           Contact: <sip:mrcp@mediaserver.com> 
           Allow: INVITE, ACK, CANCEL, OPTIONS, BYE 
           Accept: application/sdp 
           Accept-Encoding: gzip 
           Accept-Language: en 
           Supported: foo 
           Content-Type: application/sdp 
           Content-Length: 274 
     
           v=0 
           o=sarvi 2890844526 2890842807 IN IP4 126.16.64.4 
           s=SDP Seminar 
           i=A session for processing media 
           c=IN IP4 224.2.17.12/127 
           m=control 9 TCP application/mrcpv2 
           a=resource:speechsynth 
           a=resource:speechrecog 
           a=resource:speakverify 
           m=audio 0 RTP/AVP 0 1 3 
           a=rtpmap:0 PCMU/8000 
           a=rtpmap:1 1016/8000 
           a=rtpmap:3 GSM/8000 
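    As a non-normative aid, a client could extract the advertised 
    resource types from such an SDP description by scanning the 
    "a=resource" attributes under the "m=control" line. The following 
    Python sketch does this; the helper name is hypothetical. 

```python
def mrcp_resources(sdp):
    """Return the MRCPv2 resource types advertised in an SDP
    capability description (one a=resource: line per resource)."""
    resources = []
    in_control = False
    for line in sdp.splitlines():
        if line.startswith("m="):
            # Track whether we are inside the "control" media section.
            in_control = line.startswith("m=control")
        elif in_control and line.startswith("a=resource:"):
            resources.append(line.split(":", 1)[1].strip())
    return resources
```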
       
  
      
 8.   Speech Synthesizer Resource 
     
    This resource is capable of converting text provided by the client 
    and generating a speech stream in real-time.  Depending on the 
    implementation and capability of this resource, the client can 
    control parameters like voice characteristics, speaker speed, etc. 
     
    The synthesizer resource is controlled by MRCPv2 requests from the 
    client. Similarly the resource can respond to these requests or 
    generate asynchronous events to the server to indicate certain 
    conditions during the processing of the stream.  
     
    This section applies for the following resource types. 
     
           1. speechsynth 
  
           2. basicsynth 
            
    The capabilities of these resources are addressed in Section 4.5. 
     
 8.1. Synthesizer State Machine 
     
    The synthesizer maintains states to correlate MRCPv2 requests from 
    the client. The state transitions shown below describe the states 
    of the synthesizer and reflect the request at the head of the 
    queue. A SPEAK request in the PENDING state can be deleted or 
    stopped by a STOP request without affecting the state of the 
    resource. 
     
         Idle                   Speaking                  Paused 
         State                  State                     State 
          |                       |                          | 
          |----------SPEAK------->|                 |--------| 
          |<------STOP------------|             CONTROL      | 
          |<----SPEAK-COMPLETE----|                 |------->| 
          |<----BARGE-IN-OCCURRED-|                          | 
          |              |--------|                          | 
          |          CONTROL      |-----------PAUSE--------->| 
          |              |------->|<----------RESUME---------| 
          |                       |               |----------| 
          |                       |              PAUSE       | 
          |                       |               |--------->| 
          |                       |----------|               | 
          |                       |      SPEECH-MARKER       | 
          |                       |<---------|               | 
          |----------|            |             |------------| 
          |         STOP          |          SPEAK           | 
          |          |            |             |----------->| 
          |<---------|            |                          | 
          |<--------------------STOP-------------------------| 
          |----------|            |                          | 
          |     LOAD-LEXICON      |                          | 
          |          |            |                          | 
          |<---------|            |                          | 
          |<--------------------BARGE-IN-OCCURRED------------| 
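    The diagram above can be read as a transition table. The following 
    non-normative Python sketch encodes it for the request at the head 
    of the queue; any (state, message) pair not listed leaves the 
    state unchanged. 

```python
# (state, message) -> next state, distilled from the diagram above.
TRANSITIONS = {
    ("idle", "SPEAK"): "speaking",
    ("idle", "STOP"): "idle",
    ("idle", "CONTROL"): "idle",
    ("idle", "LOAD-LEXICON"): "idle",
    ("speaking", "STOP"): "idle",
    ("speaking", "SPEAK-COMPLETE"): "idle",
    ("speaking", "BARGE-IN-OCCURRED"): "idle",
    ("speaking", "PAUSE"): "paused",
    ("speaking", "CONTROL"): "speaking",
    ("speaking", "SPEECH-MARKER"): "speaking",
    ("paused", "RESUME"): "speaking",
    ("paused", "PAUSE"): "paused",
    ("paused", "SPEAK"): "paused",       # queued; state unchanged
    ("paused", "STOP"): "idle",
    ("paused", "BARGE-IN-OCCURRED"): "idle",
    ("paused", "CONTROL"): "paused",
}

def next_state(state, message):
    # Unlisted (state, message) pairs do not change the state.
    return TRANSITIONS.get((state, message), state)
```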
     
 8.2. Synthesizer Methods 
     
    The synthesizer supports the following methods. 
     
    synthesizer-method    =  "SPEAK"    ; A 
                          /  "STOP"     ; B 
                          /  "PAUSE"    ; C 
                          /  "RESUME"   ; D 
                          /  "BARGE-IN-OCCURRED" ; E 
                          /  "CONTROL"  ; F 
                          /  "LOAD-LEXICON"  ; G 
     
  
 8.3. Synthesizer Events 
     
    The synthesizer may generate the following events. 
     
      synthesizer-event   =  "SPEECH-MARKER" ; H  
                          /  "SPEAK-COMPLETE" ; I  
     
 8.4. Synthesizer Header Fields 
     
    A synthesizer message may contain header fields containing request 
    options and information to augment the request, response, or event 
    message it is associated with. 
     
      synthesizer-header  =  jump-size        
                          /  kill-on-barge-in   
                          /  speaker-profile    
                          /  completion-cause 
                          /  completion-reason   
                          /  voice-parameter    
                          /  prosody-parameter 
                          /  speech-marker      
                          /  speech-language    
                          /  fetch-hint         
                          /  audio-fetch-hint   
                          /  fetch-timeout      
                          /  failed-uri         
                          /  failed-uri-cause   
                          /  speak-restart      
                          /  speak-length       
                          /  load-lexicon 
                          /  lexicon-search-order 
     
    Header field          where     s  g  A  B  C  D  E  F  G  H  I 
    _______________________________________________________________ 
    Jump-Size               R       -  -  -  -  -  -  -  o  -  -  - 
    Kill-On-Barge-In        R       -  -  o  -  -  -  -  -  -  -  - 
    Speaker-Profile         R       o  o  o  -  -  -  -  o  -  -  - 
    Completion-Cause        R       -  -  -  -  -  -  -  -  -  -  m 
    Completion-Cause       4XX      -  -  o  -  -  -  -  -  -  -  - 
    Completion-Reason       R       -  -  -  -  -  -  -  -  -  -  m 
    Completion-Reason      4XX      -  -  o  -  -  -  -  -  -  -  - 
    Voice-Parameter         R       o  o  o  -  -  -  -  o  -  -  - 
    Prosody-Parameter       R       o  o  o  -  -  -  -  o  -  -  - 
    Speech-Marker           R       -  -  -  -  -  -  -  -  -  m  m 
    Speech-Marker          2XX      -  -  m  m  m  m  -  m  -  -  - 
    Speech-Language         R       o  o  o  -  -  -  -  -  -  -  - 
    Fetch-Hint              R       o  o  o  -  -  -  -  -  -  -  - 
    Audio-Fetch-Hint        R       o  o  o  -  -  -  -  -  -  -  - 
    Fetch-Timeout           R       o  o  o  -  -  -  -  -  -  -  - 
    Failed-URI              R       -  -  -  -  -  -  -  -  -  -  o 
    Failed-URI             4XX      -  o  -  -  -  -  -  -  -  -  o 
  
    Failed-URI-Cause        R       -  -  -  -  -  -  -  -  -  -  o 
    Failed-URI-Cause       4XX      -  o  -  -  -  -  -  -  -  -  o 
    Speak-Restart          2XX      -  -  -  -  -  -  -  o  -  -  - 
    Speak-Length            R       -  o  -  -  -  -  -  o  -  -  - 
    Load-Lexicon            R       -  -  -  -  -  -  -  -  o  -  - 
    Lexicon-Search-Order    R       -  -  -  -  -  -  -  -  m  -  - 
  
    Legend:   (s) - SET-PARAMS, (g) - GET-PARAMS, (A) - SPEAK, (B) - 
    STOP, (C) - PAUSE, (D) - RESUME, (E) - BARGE-IN-OCCURRED, (F) - 
    CONTROL, (G) - LOAD-LEXICON, (H) - SPEECH-MARKER, (I) - SPEAK-
    COMPLETE, (o) - Optional (refer to text for further constraints), 
    (m) - Mandatory, (R) - Request, (r) - Response 
     
 Jump-Size 
     
    This header MAY be specified in a CONTROL method and controls the 
    jump size to move forward or backward in an active SPEAK request. 
    A "+" or "-" prefix indicates a value relative to the current 
    playing position. This header MAY also be specified in a SPEAK 
    request to indicate an offset into the speech markup at which the 
    SPEAK request should start speaking. The speech length units 
    supported depend on the synthesizer implementation. If the 
    implementation does not support a unit or the operation, the 
    resource SHOULD respond with a status code of 404 "Illegal or 
    Unsupported value for parameter". 
     
      jump-size           =    "Jump-Size" ":" speech-length-value CRLF 
      speech-length-value =    numeric-speech-length 
                          /    text-speech-length 
      text-speech-length  =    1*VCHAR SP "Tag" 
                           
      numeric-speech-length=   ("+" / "-") 1*DIGIT SP  
                               numeric-speech-unit 
      numeric-speech-unit =    "Second" 
                          /    "Word" 
                          /    "Sentence" 
                          /    "Paragraph" 
     
 Kill-On-Barge-In 
     
    This header MAY be sent as part of the SPEAK method to enable kill-
    on-barge-in support. If enabled, the SPEAK method is interrupted by 
    DTMF input detected by a signal detector resource or by the start 
    of speech sensed or recognized by the speech recognizer resource. 
     
      kill-on-barge-in    =    "Kill-On-Barge-In" ":" boolean-value CRLF 
      boolean-value       =    "true" / "false" 
     
    If the recognizer or signal detector resource is on the same server 
    as the synthesizer, the server SHOULD recognize their interactions 
    by their common MRCPv2 channel identifier (ignoring the portion 

  
    after "@" which is the resource type) and work with each other to 
    provide kill-on-barge-in support.  
     
    The client MUST send a BARGE-IN-OCCURRED method to the synthesizer 
    resource when it receives a barge-in-able event from any source. 
    This source could be a recognizer resource or a signal detector 
    resource and MAY be local or distributed. If this field is not 
    specified, the value defaults to "true". 
     
 Speaker Profile 
     
    This header MAY be part of the SET-PARAMS/GET-PARAMS or SPEAK 
    request from the client to the server and specifies the profile of 
    the speaker by a URI, which may reference a set of voice parameters 
    such as gender, accent, etc. 
     
      speaker-profile     =    "Speaker-Profile" ":" uri CRLF 
     
 Completion Cause 
     
    This header field MUST be specified in a SPEAK-COMPLETE event coming 
    from the synthesizer resource to the client. This indicates the 
    reason behind the SPEAK request completion. 
     
      completion-cause    =    "Completion-Cause" ":" 1*DIGIT SP 
                               1*VCHAR CRLF 
     
    Cause-Code  Cause-Name     Description 
      000       normal         SPEAK completed normally. 
      001       barge-in       SPEAK request was terminated because 
                               of barge-in. 
      002       parse-failure  SPEAK request terminated because of a 
                               failure to parse the speech markup text. 
      003       uri-failure    SPEAK request terminated because access 
                               to one of the URIs failed. 
      004       error          SPEAK request terminated prematurely due 
                               to synthesizer error. 
      005       language-unsupported 
                               Language not supported. 
      006       lexicon-load-failure 
                               Lexicon loading failed. 
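    For illustration only, the cause table above can be represented as 
    a lookup used when emitting the header. This is a non-normative 
    Python sketch; the names are hypothetical. 

```python
COMPLETION_CAUSES = {
    0: "normal",
    1: "barge-in",
    2: "parse-failure",
    3: "uri-failure",
    4: "error",
    5: "language-unsupported",
    6: "lexicon-load-failure",
}

def completion_cause_header(code):
    """Format a Completion-Cause header: three-digit code, space, name."""
    return f"Completion-Cause: {code:03d} {COMPLETION_CAUSES[code]}"
```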
     
  
 Completion Reason 
     
    This header field MAY be specified in a SPEAK-COMPLETE event coming 
    from the synthesizer resource to the client. It contains the reason 
    text behind the SPEAK request completion and can be used to 
    communicate text describing the reason for a failure, such as an 
    error in parsing the speech markup text. 
     
  
      completion-reason   =    "Completion-Reason" ":"  
                               quoted-string CRLF 
       
 Voice-Parameters 
     
    This set of headers defines the voice of the speaker.  
     
      voice-parameter     =    "Voice-" voice-param-name ":" 
                               voice-param-value CRLF 
     
    voice-param-name is any one of the attribute names under the voice 
    element specified in W3C's Speech Synthesis Markup Language 
    Specification[10]. The voice-param-value is any one of the value 
    choices of the corresponding voice element attribute specified in 
    the above section.   
     
    These header fields MAY be sent in SET-PARAMS/GET-PARAMS requests 
    to define or get default values for the entire session, or MAY be 
    sent in the SPEAK request to define default values for that SPEAK 
    request.  Furthermore, these attributes can be part of the speech 
    text marked up in SML. 
     
    These voice parameter header fields can also be sent in a CONTROL 
    method to affect a SPEAK request in progress and change its 
    behavior on the fly. If the synthesizer resource does not support 
    this operation, it should respond to the client with a status of 
    unsupported. 
     
 Prosody-Parameters 
     
    This set of headers defines the prosody of the speech.  
     
      prosody-parameter   =    "Prosody-" prosody-param-name ":" 
                               prosody-param-value CRLF 
     
    prosody-param-name is any one of the attribute names under the 
    prosody element specified in W3C's Speech Synthesis Markup Language 
    Specification[10]. The prosody-param-value is any one of the value 
    choices of the corresponding prosody element attribute specified in 
    the above section. 
     
    These header fields MAY be sent in SET-PARAMS/GET-PARAMS requests 
    to define or get default values for the entire session, or MAY be 
    sent in the SPEAK request to define default values for that SPEAK 
    request.  Furthermore, these attributes can be part of the speech 
    text marked up in SML. 
     
    The prosody parameter header fields in the SET-PARAMS or SPEAK 
    request only apply if the speech data is of type text/plain and does 
    not use a speech markup format.  
     
  
    These prosody parameter header fields MAY also be sent in a CONTROL 
    method to affect a SPEAK request in progress and change its 
    behavior on the fly. If the synthesizer resource does not support 
    this operation, it should respond to the client with a status of 
    unsupported. 
     
 Speech Marker 
     
    This header field contains a marker tag that may be embedded in the 
    speech data. Most speech markup formats provide mechanisms to embed 
    marker fields in the speech text. The synthesizer generates 
    SPEECH-MARKER events when it reaches these marker fields, and this 
    field SHOULD be part of the SPEECH-MARKER event, carrying the 
    marker tag value. The header may carry additional timestamp 
    information in a "timestamp" field separated by a semicolon. This 
    is the NTP timestamp and MUST be synchronized with the RTP 
    timestamp. This header field SHOULD also be returned in responses 
    to STOP and CONTROL methods and in the SPEAK-COMPLETE event. In 
    these messages the marker tag SHOULD be the last tag encountered, 
    or "" if none was encountered. The marker tag SHOULD carry 
    timestamp information reflecting the point in the current SPEAK 
    request at which the particular message was generated. 
       
      timestamp      =         "timestamp" "=" time-stamp-value CRLF 
     
      speech-marker  =         "Speech-Marker" ":" 1*VCHAR  
                               [";" timestamp ]CRLF 
     
 Speech Language 
     
    This header field specifies the default language of the speech data 
    if the language is not specified in the data itself. The value of 
    this header field SHOULD follow RFC 3066. This header MAY occur in 
    SPEAK, SET-PARAMS or GET-PARAMS requests. 
     
      speech-language          =    "Speech-Language" ":" 1*VCHAR CRLF 
     
 Fetch Hint 
     
    When the synthesizer needs to fetch documents or other resources 
    like speech markup or audio files, etc., this header field controls 
    URI access properties. This defines when the synthesizer should 
    retrieve content from the server. A value of "prefetch" indicates a 
    file may be downloaded when the request is received, whereas "safe" 
    indicates a file that should only be downloaded when actually 
    needed. The default value is "prefetch". This header field MAY occur 
    in SPEAK, SET-PARAMS or GET-PARAMS requests. 
     
      fetch-hint               =    "Fetch-Hint" ":" 1*ALPHA CRLF 
  

  
 Audio Fetch Hint 
     
    When the synthesizer needs to fetch documents or other resources 
    like speech audio files, etc., this header field controls URI access 
    properties. This defines whether or not the synthesizer can attempt 
    to optimize speech by pre-fetching audio. The value is either 
    "safe" to say that audio is only fetched when it is needed, never 
    before; "prefetch" to permit, but not require, the platform to 
    pre-fetch the audio; or "stream" to allow it to stream the audio 
    fetches. The 
    default value is "prefetch". This header field MAY occur in SPEAK, 
    SET-PARAMS or GET-PARAMS requests. 
     
      audio-fetch-hint         =    "Audio-Fetch-Hint" ":" 1*ALPHA CRLF 
     
 Fetch Timeout 
     
    When the synthesizer needs to fetch documents or other resources 
    like speech audio files, etc., this header field controls URI access 
    properties. This defines the synthesizer timeout for content the 
    server may need to fetch from the network. This is specified in 
    milliseconds. The default value is platform-dependent. This header 
    field MAY occur in SPEAK, SET-PARAMS or GET-PARAMS. 
     
      fetch-timeout            =    "Fetch-Timeout" ":" 1*DIGIT CRLF 
     
 Failed URI 
     
    When a synthesizer method needs a synthesizer to fetch or access a 
    URI and the access fails, the server SHOULD provide the failed URI 
    in this header field in the method response. 
     
      failed-uri               =    "Failed-URI" ":" uri CRLF 
     
 Failed URI Cause 
     
    When a synthesizer method needs a synthesizer to fetch or access a 
    URI and the access fails, the server SHOULD provide the URI-
    specific or protocol-specific response code through this header 
    field in the 
    method response. This field has been defined as alphanumeric to 
    accommodate all protocols, some of which might have a response 
    string instead of a numeric response code. 
     
      failed-uri-cause    =    "Failed-URI-Cause" ":" 1*ALPHANUM CRLF 
     
 Speak Restart 
     
    When a CONTROL request to jump backward is issued to a currently 
    speaking synthesizer resource and the target jump point is beyond 
    the start of the current SPEAK request, the current SPEAK request 
    SHALL restart from the beginning of its speech data and the 

  
    response to the CONTROL request SHOULD contain this header 
    indicating a restart. This header MAY occur in the CONTROL response. 
     
      speak-restart       =    "Speak-Restart" ":" boolean-value CRLF 
     
 Speak Length 
     
    This header MAY be specified in a CONTROL method to control the 
    length of speech to speak, relative to the current speaking point 
    in the currently active SPEAK request. A negative ("-") value is 
    illegal in this field. If a field with a Tag unit is specified, 
    the media must be spoken until the tag is reached or the SPEAK 
    request completes, whichever comes first. This header MAY also be 
    specified in a SPEAK request to indicate the length to speak in 
    the speech data, relative to the point at which the SPEAK request 
    starts. The speech length units supported depend on the 
    synthesizer implementation. If the implementation does not support 
    a unit or the operation, the resource SHOULD respond with a status 
    code of 404 "Illegal or Unsupported value for header". 
     
      speak-length        =    "Speak-Length" ":" speech-length-value 
                               CRLF 
      speech-length-value =    numeric-speech-length 
                          /    text-speech-length 
      text-speech-length  =    1*VCHAR SP "Tag" 
                           
      numeric-speech-length=   ("+" / "-") 1*DIGIT SP  
                               numeric-speech-unit 
      numeric-speech-unit =    "Second" 
                          /    "Word" 
                          /    "Sentence" 
                          /    "Paragraph" 
 Load-Lexicon 
     
    This header field is used to indicate whether a lexicon has to be 
    loaded or unloaded. The default value for this field is "true". 
      
      load-lexicon        =    "Load-Lexicon" ":" boolean-value CRLF 
     
 Lexicon-Search-Order 
     
    This header field is used to specify the list of active lexicon 
    URIs and the search order among the active lexicons. Note that the 
    lexicons specified within the SSML document still take precedence 
    over the lexicons specified here. 
     
      lexicon-search-order =   "Lexicon-Search-Order" ":" uri-list CRLF 
     
  
     
 8.5. Synthesizer Message Body  
     

  
    A synthesizer message may contain additional information associated 
    with the Method, Response or Event in its message body.  
     
 Synthesizer Speech Data 
     
    Marked-up text for the synthesizer to speak is specified as a MIME 
    entity in the message body. The message to be spoken by the 
    synthesizer can be specified inline, by embedding the data in the 
    message body, or by reference, by providing a URI to the data. In 
    either case, the data and the format used to mark up the speech 
    need to be supported by the server. 
     
    All MRCPv2 servers MUST support plain text speech data and W3C's 
    Speech Synthesis Markup Language[10] as a minimum and hence MUST 
    support the MIME types text/plain and application/synthesis+ssml at 
    a minimum. 
     
    If the speech data needs to be specified by URI reference, the 
    MIME type text/uri-list is used to specify the one or more URIs 
    that list what needs to be spoken. If a list of speech URIs is 
    specified, the speech data provided by each URI must be spoken in 
    the order in which the URIs are specified. 
     
    If the data to be spoken consists of a mix of URI and inline 
    speech data, the multipart/mixed MIME type is used, embedding the 
    MIME blocks for text/uri-list, application/synthesis+ssml or 
    text/plain. The character set and encoding used in the speech data 
    may be specified according to standard MIME-type definitions. The 
    multi-part MIME block can also contain actual audio data in .wav 
    or Sun audio format. This is used when the client has audio clips, 
    perhaps recorded earlier and stored in memory or on a local 
    device, that it needs to play as part of the SPEAK request. The 
    audio MIME parts can be sent by the client as part of the multi-
    part MIME block, and the audio is referenced from the speech 
    markup data carried in another part of the block, according to the 
    multipart/mixed MIME-type specification. 
     
    Example 1: 
         Content-Type: text/uri-list 
         Content-Length: 176 
          
         http://www.example.com/ASR-Introduction.sml 
         http://www.example.com/ASR-Document-Part1.sml 
         http://www.example.com/ASR-Document-Part2.sml 
         http://www.example.com/ASR-Conclusion.sml 
         
    Example 2:   
         Content-Type: application/synthesis+ssml 
         Content-Length: 104 
          
         <?xml version="1.0"?> 
  
         <speak> 
         <paragraph> 
                  <sentence>You have 4 new messages.</sentence> 
                  <sentence>The first is from <say-as  
                  type="name">Stephanie Williams</say-as> 
                  and arrived at <break/> 
                  <say-as type="time">3:45pm</say-as>.</sentence> 
          
                  <sentence>The subject is <prosody 
                  rate="-20%">ski trip</prosody></sentence> 
         </paragraph> 
         </speak> 
     
    Example 3: 
         Content-Type: multipart/mixed; boundary="break" 
          
         --break 
         Content-Type: text/uri-list 
         Content-Length: 176 
          
         http://www.example.com/ASR-Introduction.sml 
         http://www.example.com/ASR-Document-Part1.sml 
         http://www.example.com/ASR-Document-Part2.sml 
         http://www.example.com/ASR-Conclusion.sml 
              
         --break 
         Content-Type: application/synthesis+ssml 
         Content-Length: 104 
          
         <?xml version="1.0"?> 
         <speak> 
         <paragraph> 
                  <sentence>You have 4 new messages.</sentence> 
                  <sentence>The first is from <say-as  
                  type="name">Stephanie Williams</say-as> 
                  and arrived at <break/> 
                  <say-as type="time">3:45pm</say-as>.</sentence> 
          
                  <sentence>The subject is <prosody 
                  rate="-20%">ski trip</prosody></sentence> 
         </paragraph> 
         </speak> 
         --break-- 
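    A multipart body like Example 3 can be assembled mechanically. The 
    following non-normative Python sketch mixes URI-list and inline-
    markup parts; the function name is hypothetical, and LF line 
    endings are used for brevity. 

```python
def multipart_mixed(parts, boundary="break"):
    """Assemble a multipart/mixed body from (content_type, body) pairs,
    mirroring the structure of Example 3 above."""
    out = []
    for ctype, body in parts:
        out.append(f"--{boundary}")
        out.append(f"Content-Type: {ctype}")
        out.append(f"Content-Length: {len(body)}")
        out.append("")                      # blank line before the body
        out.append(body)
    out.append(f"--{boundary}--")           # closing boundary
    return "\n".join(out)
```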
     
    Lexicon Data 
     
    Synthesizer lexicon data from the client to the server can be 
    provided inline or by reference. Either way, it is carried as MIME 
    entities in the message body of the MRCPv2 request message. 
     

  
    When a lexicon is specified in-line in the message, the client MUST 
    provide a content-id for that lexicon as part of the content 
    headers. The server MUST store the lexicon associated with that 
    content-id for the duration of the session. A stored lexicon can be 
    overwritten by defining a new lexicon with the same content-id. 
    Lexicons that have been associated with a content-id can be 
    referenced through a special "session:" URI scheme. 
     
     
    If lexicon data needs to be specified by external URI reference, 
    the MIME type text/uri-list is used to list the one or more URIs 
    that specify the lexicon data. All media servers MUST support the 
    HTTP URI access mechanism. 
     
    If the data to be defined consists of a mix of URI and inline 
    lexicon data, the multipart/mixed MIME-type is used. The character 
    set and encoding used in the lexicon data may be specified according 
    to standard MIME-type definitions. 
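    As a sketch of what such a mixed body looks like, the snippet below 
    builds a multipart/mixed entity containing a text/uri-list part and 
    an inline lexicon part with a Content-ID, using Python's standard 
    email package as a stand-in for a real MRCP message builder. The 
    URIs and lexicon content are placeholders.

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

body = MIMEMultipart("mixed")

# External references: one URI per line, as text/uri-list.
uris = MIMEText(
    "http://www.example.com/lexicon-part1.xml\r\n"
    "http://www.example.com/lexicon-part2.xml\r\n",
    _subtype="uri-list")
body.attach(uris)

# Inline lexicon: carries a Content-ID so it can later be referenced
# via the "session:" URI scheme.
inline = MIMEText("<lexicon>...</lexicon>", _subtype="xml")
inline.add_header("Content-ID", "<inline-lex@example.com>")
body.attach(inline)

print(body.as_string())
```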
     
 8.6. SPEAK 
     
    The SPEAK method from the client to the server provides the 
    synthesizer resource with the speech text and initiates speech 
    synthesis and streaming. The SPEAK method can carry voice and 
    prosody header fields that define the behavior of the voice being 
    synthesized, as well as the actual marked-up text to be spoken. If 
    specific voice and prosody parameters are specified as part of the 
    speech markup text, they take precedence over the values specified 
    in the header fields and over those set using a previous SET-PARAMS 
    request.  
     
    When applying voice parameters, there are three levels of scope. 
    Highest precedence goes to parameters specified within the speech 
    markup text, followed by those specified in the header fields of the 
    SPEAK request (which apply to that SPEAK request only), followed by 
    the session default values, which can be set using the SET-PARAMS 
    request and apply to the whole session from that point forward. 
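    The three-level precedence rule can be sketched as a simple 
    dictionary merge, lowest precedence first. The field names below are 
    illustrative only.

```python
def effective_params(session_defaults, speak_headers, markup_values):
    """Resolve voice parameters: markup > SPEAK headers > session defaults."""
    resolved = dict(session_defaults)   # lowest precedence: SET-PARAMS values
    resolved.update(speak_headers)      # per-request SPEAK header fields
    resolved.update(markup_values)      # speech markup wins
    return resolved

params = effective_params(
    session_defaults={"voice-gender": "female", "prosody-volume": "soft"},
    speak_headers={"prosody-volume": "medium"},
    markup_values={"prosody-rate": "-20%"})
print(params)
```

    Here voice-gender comes from the session default, prosody-volume 
    from the SPEAK headers, and prosody-rate from the markup.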
      
    If the resource is idle when the SPEAK request arrives and the 
    request is being actively processed, the resource responds with a 
    success status code and a request-state of IN-PROGRESS.  
     
    If the resource is in the speaking or paused state, i.e., it is in 
    the middle of processing a previous SPEAK request, the status 
    returns success with a request-state of PENDING. This means that 
    this SPEAK request has been placed in the request queue and will be 
    processed, in the order received, after the currently active SPEAK 
    request and any previously queued SPEAK requests are completed.   
     
    For the synthesizer resource, this is the only request that can 
    return a request-state of IN-PROGRESS or PENDING.  
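    The queueing behavior described above can be sketched as follows. 
    This is a hypothetical model, not an MRCPv2 implementation: a SPEAK 
    arriving at an idle resource goes IN-PROGRESS, one arriving while 
    another is speaking or paused is queued and acknowledged as PENDING.

```python
from collections import deque

class SynthesizerResource:
    """Illustrative model of SPEAK request queueing."""

    def __init__(self):
        self.active = None       # request-id currently speaking/paused
        self.queue = deque()     # pending request-ids, in arrival order

    def speak(self, request_id):
        if self.active is None:
            self.active = request_id
            return "IN-PROGRESS"
        self.queue.append(request_id)
        return "PENDING"

    def complete_active(self):
        # On SPEAK-COMPLETE, the next queued request (if any) becomes active.
        self.active = self.queue.popleft() if self.queue else None

synth = SynthesizerResource()
print(synth.speak(543257))   # first request is processed immediately
print(synth.speak(543258))   # second request is queued
synth.complete_active()
print(synth.active)          # queued request is now active
```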
  
    When synthesis of the text is complete, the resource issues a 
    SPEAK-COMPLETE event with the request-id of the SPEAK message and 
    a request-state of COMPLETE. 
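    As a reading aid for the message traces in the examples that follow, 
    the hypothetical parser below classifies the three start-line shapes 
    shown in this draft: requests ("MRCP/2.0 length method request-id"), 
    responses ("MRCP/2.0 length request-id status request-state"), and 
    events ("MRCP/2.0 length event-name request-id request-state"). It 
    is not a complete MRCPv2 parser.

```python
def parse_start_line(line):
    # Skip the "MRCP/2.0" version and message-length tokens;
    # classify the line by what follows them.
    rest = line.split()[2:]
    if rest[0].isdigit():
        # Response: request-id, status-code, request-state.
        return {"kind": "response", "request-id": int(rest[0]),
                "status": int(rest[1]), "request-state": rest[2]}
    if len(rest) == 3:
        # Event: event-name, request-id, request-state.
        return {"kind": "event", "name": rest[0],
                "request-id": int(rest[1]), "request-state": rest[2]}
    # Request: method-name, request-id.
    return {"kind": "request", "method": rest[0],
            "request-id": int(rest[1])}

print(parse_start_line("MRCP/2.0 489 SPEAK 543257")["kind"])
print(parse_start_line("MRCP/2.0 28 543257 200 IN-PROGRESS")["kind"])
```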
     
    Example: 
      C->S:MRCP/2.0 489 SPEAK 543257      
           Channel-Identifier: 32AECB23433802@speechsynth 
           Voice-gender: neutral 
           Voice-category: teenager 
           Prosody-volume: medium 
           Content-Type: application/synthesis+ssml 
           Content-Length: 104 
     
           <?xml version="1.0"?> 
           <speak> 
           <paragraph> 
             <sentence>You have 4 new messages.</sentence> 
             <sentence>The first is from <say-as  
             type="name">Stephanie Williams</say-as> 
             and arrived at <break/> 
             <say-as type="time">3:45pm</say-as>.</sentence> 
     
             <sentence>The subject is <prosody 
             rate="-20%">ski trip</prosody></sentence> 
           </paragraph> 
           </speak> 
            
     
      S->C:MRCP/2.0 28 543257 200 IN-PROGRESS 
           Channel-Identifier: 32AECB23433802@speechsynth 
     
     
      S->C:MRCP/2.0 79 SPEAK-COMPLETE 543257 COMPLETE 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Completion-Cause: 000 normal 
     
     
 8.7. STOP 
     
    The STOP method from the client to the server tells the resource to 
    stop speaking if it is speaking something.  
     
    The STOP request can be sent with an active-request-id-list header 
    field to stop specific SPEAK requests that may be active or in the 
    queue; the server returns a response code of 200 (Success). If no 
    active-request-id-list header field is sent in the STOP request, it 
    terminates all outstanding SPEAK requests.  
     
    If a STOP request successfully terminated one or more PENDING or IN-
    PROGRESS SPEAK requests, the response contains an active-request-id-
    list header field listing the SPEAK request-ids that were 
    terminated. Otherwise, there will be no active-request-id-list 
    header field in the response. No SPEAK-COMPLETE events will be sent 
    for the terminated requests. 
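    The STOP semantics can be sketched as a small helper: with no 
    active-request-id-list, everything outstanding terminates; with one, 
    only the listed requests terminate, and the ids actually terminated 
    are echoed back for the response header. This helper is illustrative 
    only.

```python
def stop(outstanding, requested_ids=None):
    """outstanding: request-ids currently active or queued.
    Returns (terminated ids for Active-Request-Id-List, ids still outstanding)."""
    if requested_ids is None:
        # No active-request-id-list header: terminate everything.
        return list(outstanding), []
    terminated = [r for r in outstanding if r in requested_ids]
    remaining = [r for r in outstanding if r not in requested_ids]
    return terminated, remaining

term, rest = stop([543258, 543259, 543260], requested_ids={543259})
print(term, rest)
```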
     
    If the stopped SPEAK request was IN-PROGRESS and in the speaking 
    state, the next pending SPEAK request, if any, becomes IN-PROGRESS 
    and moves to the speaking state. 
     
    If the stopped SPEAK request was IN-PROGRESS and in the paused 
    state, the next pending SPEAK request, if any, becomes IN-PROGRESS 
    and moves to the paused state. 
     
    Example: 
      C->S:MRCP/2.0 423 SPEAK 543258      
           Channel-Identifier: 32AECB23433802@speechsynth 
           Content-Type: application/synthesis+ssml 
           Content-Length: 104 
     
           <?xml version="1.0"?> 
           <speak> 
           <paragraph> 
             <sentence>You have 4 new messages.</sentence> 
             <sentence>The first is from <say-as  
             type="name">Stephanie Williams</say-as> 
             and arrived at <break/> 
             <say-as type="time">3:45pm</say-as>.</sentence> 
     
             <sentence>The subject is <prosody 
             rate="-20%">ski trip</prosody></sentence> 
           </paragraph> 
           </speak> 
            
     
      S->C:MRCP/2.0 48 543258 200 IN-PROGRESS 
           Channel-Identifier: 32AECB23433802@speechsynth 
  
      C->S:MRCP/2.0 44 STOP 543259 
           Channel-Identifier: 32AECB23433802@speechsynth 
  
      S->C:MRCP/2.0 66 543259 200 COMPLETE 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Active-Request-Id-List: 543258 
     
     
 8.8. BARGE-IN-OCCURRED 
     
    The BARGE-IN-OCCURRED method is a mechanism for the client to 
    communicate a barge-in-able event it detects to the speech resource.  
     
    This event is useful in two scenarios: 
     
    1. The client has detected an event such as DTMF digits or another 
    barge-in-able event and wants to communicate it to the 
    synthesizer. 
    2. The recognizer resource and the synthesizer resource are on 
    different servers. In this case, the client MUST act as a proxy: it 
    receives the event from the recognition resource and then sends a 
    BARGE-IN-OCCURRED method to the synthesizer. In such cases, the 
    BARGE-IN-OCCURRED method also carries a proxy-sync-id header field 
    received from the resource generating the original event.  
      
    If a SPEAK request is active with kill-on-barge-in enabled and the 
    BARGE-IN-OCCURRED event is received, the synthesizer should stop 
    streaming out audio. It should also terminate any speech requests 
    queued behind the currently active one, irrespective of whether they 
    have barge-in enabled. If a barge-in-able prompt was playing and was 
    terminated, the response MUST contain the request-ids of all 
    terminated SPEAK requests in its active-request-id-list. No SPEAK-
    COMPLETE events will be generated for these requests.  
     
    If the synthesizer and the recognizer are part of the same session, 
    they could be optimized for a quicker kill-on-barge-in response by 
    having the recognizer and synthesizer interact directly. In these 
    cases, the client MUST still proxy the START-OF-SPEECH event through 
    a BARGE-IN-OCCURRED method, but the synthesizer resource may have 
    already stopped and sent a SPEAK-COMPLETE event with a barge-in 
    completion cause code.  If no SPEAK requests were terminated as a 
    result of the BARGE-IN-OCCURRED method, the response is still a 200 
    success but MUST NOT contain an active-request-id-list header field. 
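    The barge-in handling above reduces to a small decision: if the 
    active SPEAK has kill-on-barge-in enabled, it and everything queued 
    behind it terminate (regardless of the queued requests' own barge-in 
    settings); otherwise nothing terminates and no id list is returned. 
    A hypothetical sketch:

```python
def barge_in(active_id, kill_on_barge_in, queued_ids):
    """Return the request-ids for Active-Request-Id-List, or [] when the
    200 response should omit that header field."""
    if active_id is not None and kill_on_barge_in:
        # Active prompt and everything queued behind it are terminated.
        return [active_id] + list(queued_ids)
    return []

print(barge_in(543258, True, [543259, 543260]))
print(barge_in(543258, False, [543259]))
```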
      
      C->S:MRCP/2.0 433 SPEAK 543258 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Voice-gender: neutral 
           Voice-category: teenager 
           Prosody-volume: medium 
           Content-Type: application/synthesis+ssml 
           Content-Length: 104 
            
           <?xml version="1.0"?> 
           <speak> 
           <paragraph> 
             <sentence>You have 4 new messages.</sentence> 
             <sentence>The first is from <say-as  
             type="name">Stephanie Williams</say-as> 
             and arrived at <break/> 
             <say-as type="time">3:45pm</say-as>.</sentence> 
     
             <sentence>The subject is <prosody 
             rate="-20%">ski trip</prosody></sentence> 
           </paragraph> 
           </speak> 
     
      S->C:MRCP/2.0 47 543258 200 IN-PROGRESS 
           Channel-Identifier: 32AECB23433802@speechsynth 
     
      C->S:MRCP/2.0 69 BARGE-IN-OCCURRED 543259 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Proxy-Sync-Id: 987654321 
     
      S->C:MRCP/2.0 72 543259 200 COMPLETE 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Active-Request-Id-List: 543258 
  
     
 8.9. PAUSE 
     
    The PAUSE method from the client to the server tells the resource to 
    pause speech, if it is speaking something. If a PAUSE method is 
    issued on a session when a SPEAK is not active, the server SHOULD 
    respond with a status of 402 "Method not valid in this state". If 
    a PAUSE method is issued on a session when a SPEAK is active and 
    paused, the server SHOULD respond with a status of 200 "Success". 
    If a SPEAK request was active, the server MUST return an active-
    request-id-list header with the request-id of the SPEAK request that 
    was paused. 
     
      C->S:MRCP/2.0 434 SPEAK 543258 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Voice-gender: neutral 
           Voice-category: teenager 
           Prosody-volume: medium 
           Content-Type: application/synthesis+ssml 
           Content-Length: 104 
     
           <?xml version="1.0"?> 
           <speak> 
           <paragraph> 
             <sentence>You have 4 new messages.</sentence> 
             <sentence>The first is from <say-as  
             type="name">Stephanie Williams</say-as> 
             and arrived at <break/> 
             <say-as type="time">3:45pm</say-as>.</sentence> 
     
             <sentence>The subject is <prosody 
             rate="-20%">ski trip</prosody></sentence> 
           </paragraph> 
           </speak> 
     
      S->C:MRCP/2.0 48 543258 200 IN-PROGRESS 
           Channel-Identifier: 32AECB23433802@speechsynth 
     
  
      C->S:MRCP/2.0 43 PAUSE 543259 
           Channel-Identifier: 32AECB23433802@speechsynth 
     
      S->C:MRCP/2.0 68 543259 200 COMPLETE 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Active-Request-Id-List: 543258 
     
 8.10.     RESUME 
     
    The RESUME method from the client to the server tells a paused 
    synthesizer resource to continue speaking. If a RESUME method is 
    issued on a session with no active SPEAK request, the server SHOULD 
    respond with a status of 402 "Method not valid in this state". If 
    a RESUME method is issued on a session whose active SPEAK request 
    is speaking (i.e., not paused), the server SHOULD respond with a 
    status of 200 "Success". If a SPEAK request was active, the server 
    MUST return an active-request-id-list header with the request-id of 
    the SPEAK request that was resumed. 
     
    Example: 
      C->S:MRCP/2.0 434 SPEAK 543258 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Voice-gender: neutral 
           Voice-category: teenager 
           Prosody-volume: medium 
           Content-Type: application/synthesis+ssml 
           Content-Length: 104 
            
           <?xml version="1.0"?> 
           <speak> 
           <paragraph> 
               <sentence>You have 4 new messages.</sentence> 
               <sentence>The first is from <say-as  
               type="name">Stephanie Williams</say-as> 
               and arrived at <break/> 
               <say-as type="time">3:45pm</say-as>.</sentence> 
       
               <sentence>The subject is <prosody 
               rate="-20%">ski trip</prosody></sentence> 
           </paragraph> 
           </speak> 
     
      S->C:MRCP/2.0 48 543258 200 IN-PROGRESS 
           Channel-Identifier: 32AECB23433802@speechsynth 
     
      C->S:MRCP/2.0 44 PAUSE 543259 
           Channel-Identifier: 32AECB23433802@speechsynth 
     
      S->C:MRCP/2.0 47 543259 200 COMPLETE 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Active-Request-Id-List: 543258 
  
     
      C->S:MRCP/2.0 44 RESUME 543260 
           Channel-Identifier: 32AECB23433802@speechsynth 
     
      S->C:MRCP/2.0 66 543260 200 COMPLETE 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Active-Request-Id-List: 543258 
     
 8.11.     CONTROL 
     
    The CONTROL method from the client to the server tells a synthesizer 
    that is speaking to modify what it is speaking on the fly.  This 
    method is used to make the synthesizer jump forward or backward in 
    what it is speaking, change the speech rate, change other speech 
    parameters, etc. It affects the active or IN-PROGRESS SPEAK request. 
    Depending on the implementation and capability of the synthesizer 
    resource, it may allow this operation or one or more of its header 
    fields.   
     
    When a CONTROL to jump forward is issued and the operation goes 
    beyond the end of the active SPEAK method's text, the CONTROL 
    request succeeds. Also, the active SPEAK request completes and 
    returns a SPEAK-COMPLETE event following the response to the CONTROL 
    method. If there are more SPEAK requests in the queue, the 
    synthesizer resource will start at the beginning of the next SPEAK 
    request in the queue. 
     
    When a CONTROL to jump backwards is issued and the operation jumps 
    to the beginning or beyond the beginning of the speech data of the 
    active SPEAK request, the response to the CONTROL request contains 
    the speak-restart header, and the active SPEAK request starts from 
    the beginning of its speech data.  
     
    These two behaviors can be used to rewind or fast-forward across 
    multiple speech requests, if the client wants to break up a speech 
    markup text to multiple SPEAK requests. 
     
    If a SPEAK request was active when the CONTROL method was received, 
    the server MUST return an active-request-id-list header with the 
    request-id of the SPEAK request that was active. 
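    The two jump behaviors can be sketched as below: a forward jump past 
    the end completes the active SPEAK (a SPEAK-COMPLETE event follows 
    the CONTROL response), while a backward jump to or past the start 
    restarts it and the CONTROL response carries the speak-restart 
    header. The word-position model is purely illustrative.

```python
def apply_jump(position, jump_words, total_words):
    """Return (new position, outcome) for a Jump-Size CONTROL in words."""
    new_pos = position + jump_words
    if new_pos >= total_words:
        # Jumped beyond the end: active SPEAK completes; SPEAK-COMPLETE
        # event follows the CONTROL response.
        return total_words, "speak-complete"
    if new_pos <= 0:
        # Jumped to/past the beginning: restart from the start; the
        # CONTROL response contains the speak-restart header.
        return 0, "speak-restart"
    return new_pos, None

print(apply_jump(40, 20, 50))
print(apply_jump(10, -15, 50))
print(apply_jump(10, 5, 50))
```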
     
    Example: 
      C->S:MRCP/2.0 434 SPEAK 543258 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Voice-gender: neutral 
           Voice-category: teenager 
           Prosody-volume: medium 
           Content-Type: application/synthesis+ssml 
           Content-Length: 104 
            
           <?xml version="1.0"?> 
           <speak> 
           <paragraph> 
             <sentence>You have 4 new messages.</sentence> 
             <sentence>The first is from <say-as  
             type="name">Stephanie Williams</say-as> 
             and arrived at <break/> 
             <say-as type="time">3:45pm</say-as>.</sentence> 
     
             <sentence>The subject is <prosody 
             rate="-20%">ski trip</prosody></sentence> 
           </paragraph> 
           </speak> 
     
     
      S->C:MRCP/2.0 47 543258 200 IN-PROGRESS 
           Channel-Identifier: 32AECB23433802@speechsynth 
     
      C->S:MRCP/2.0 63 CONTROL 543259          
           Channel-Identifier: 32AECB23433802@speechsynth 
           Prosody-rate: fast 
     
      S->C:MRCP/2.0 67 543259 200 COMPLETE 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Active-Request-Id-List: 543258 
     
      C->S:MRCP/2.0 68 CONTROL 543260          
           Channel-Identifier: 32AECB23433802@speechsynth 
           Jump-Size: -15 Words 
     
      S->C:MRCP/2.0 69 543260 200 COMPLETE 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Active-Request-Id-List: 543258 
     
 8.12.     SPEAK-COMPLETE 
     
    This is an event message from the synthesizer resource to the client 
    indicating that the SPEAK request was completed. The request-id 
    header field matches the request-id of the SPEAK request that 
    initiated the speech that just completed. The request-state field 
    should be COMPLETE, indicating that this is the last event with that 
    request-id and that the request with that request-id is now 
    complete. The completion-cause header field specifies the cause code 
    pertaining to the status and reason of request completion, such as 
    whether the SPEAK completed normally or terminated because of an 
    error, kill-on-barge-in, etc.   
     
    Example: 
      C->S:MRCP/2.0 434 SPEAK 543260 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Voice-gender: neutral 
           Voice-category: teenager 
           Prosody-volume: medium 
           Content-Type: application/synthesis+ssml 
           Content-Length: 104 
            
           <?xml version="1.0"?> 
           <speak> 
           <paragraph> 
             <sentence>You have 4 new messages.</sentence> 
             <sentence>The first is from <say-as  
             type="name">Stephanie Williams</say-as> 
             and arrived at <break/> 
             <say-as type="time">3:45pm</say-as>.</sentence> 
     
             <sentence>The subject is <prosody 
             rate="-20%">ski trip</prosody></sentence> 
           </paragraph> 
           </speak> 
     
      S->C:MRCP/2.0 48 543260 200 IN-PROGRESS 
           Channel-Identifier: 32AECB23433802@speechsynth 
     
      S->C:MRCP/2.0 73 SPEAK-COMPLETE 543260 COMPLETE 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Completion-Cause: 000 normal 
     
 8.13.     SPEECH-MARKER 
     
    This is an event generated by the synthesizer resource to the client 
    when it hits a marker tag in the speech markup it is currently 
    processing. The request-id field in the header matches the 
    request-id of the SPEAK request that initiated the speech. The 
    request-state field should be IN-PROGRESS, as the speech is not yet 
    complete and there is more to be spoken. The speech marker tag that 
    was hit, describing where the synthesizer is in the speech markup, 
    is returned in the speech-marker header field, along with an NTP 
    timestamp. The SPEECH-MARKER event is also generated, with a marker 
    value of "" and the NTP timestamp, when a SPEAK request in the 
    PENDING state (i.e., in the queue) moves to IN-PROGRESS and starts 
    speaking. The NTP timestamp MUST be synchronized with the RTP 
    timestamp used to generate the speech stream. 
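    For illustration, a common way to form a 64-bit NTP timestamp from 
    Unix time is sketched below, assuming the standard 2208988800-second 
    offset between the NTP era (1900) and the Unix epoch (1970). How the 
    synthesizer actually correlates this with the RTP clock is 
    implementation-specific and not shown.

```python
NTP_UNIX_OFFSET = 2208988800  # seconds from 1900-01-01 to 1970-01-01

def unix_to_ntp(unix_seconds):
    """Pack Unix time into a 64-bit NTP timestamp:
    32-bit whole seconds since 1900, then a 32-bit fractional part."""
    whole = int(unix_seconds) + NTP_UNIX_OFFSET
    frac = int((unix_seconds % 1) * (1 << 32))
    return (whole << 32) | frac

ts = unix_to_ntp(0.5)  # Unix epoch plus half a second
print(hex(ts))
```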
     
    Example: 
      C->S:MRCP/2.0 434 SPEAK 543261 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Voice-gender: neutral 
           Voice-category: teenager 
           Prosody-volume: medium 
           Content-Type: application/synthesis+ssml 
           Content-Length: 104 
            
           <?xml version="1.0"?> 
           <speak> 
           <paragraph> 
             <sentence>You have 4 new messages.</sentence> 
             <sentence>The first is from <say-as  
             type="name">Stephanie Williams</say-as> 
             and arrived at <break/> 
             <say-as type="time">3:45pm</say-as>.</sentence> 
             <mark name="here"/> 
             <sentence>The subject is  
                <prosody rate="-20%">ski trip</prosody> 
             </sentence> 
             <mark name="ANSWER"/> 
           </paragraph> 
           </speak> 
     
     
      S->C:MRCP/2.0 48 543261 200 IN-PROGRESS 
           Channel-Identifier: 32AECB23433802@speechsynth 
     
      S->C:MRCP/2.0 73 SPEECH-MARKER 543261 IN-PROGRESS 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Speech-Marker: here 
     
      S->C:MRCP/2.0 74 SPEECH-MARKER 543261 IN-PROGRESS 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Speech-Marker: ANSWER 
            
      S->C:MRCP/2.0 73 SPEAK-COMPLETE 543261 COMPLETE 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Completion-Cause: 000 normal 
     
 8.14.     DEFINE-LEXICON 
     
    The DEFINE-LEXICON method, from the client to the server, provides a 
    lexicon and tells the server to load, unload, activate or deactivate 
    the lexicon.   
     
    If the server resource is in the speaking or paused state, the 
    server MUST respond to the DEFINE-LEXICON request with a failure 
    status.  
     
    If the resource is in the idle state and is able to successfully 
    load, unload, activate, or deactivate the lexicon, the response MUST 
    carry a success status code and the request-state MUST be COMPLETE. 
     
    If the synthesizer could not define the lexicon for some reason (for 
    example, the download failed or the lexicon was in an unsupported 
    format), the MRCPv2 response for the DEFINE-LEXICON method MUST 
    contain a failure status code of 407 and a completion-cause header 
    field describing the failure reason. 
     
     
     
  
     
 9.   Speech Recognizer Resource 
     
    The speech recognizer resource is capable of receiving an incoming 
    voice stream and providing the client with an interpretation of what 
    was spoken in textual form. 
     
    This section applies to the following resource types. 
           1. speechrecog 
           2. dtmfrecog 
            
    The difference between the above two resources is in their level of 
    support for recognition grammars. The "dtmfrecog" resource is 
    capable of recognizing DTMF digits only and hence accepts DTMF 
    grammars only. The "speechrecog" resource can recognize regular 
    speech as well as DTMF digits and hence SHOULD support grammars 
    describing speech or DTMF. A recognition resource may support 
    recognition in the normal mode, the hotword mode, or both. For 
    implementations where a single recognition resource does not support 
    both modes, the modes can be implemented as separate resources, 
    allocated to the same SIP session with different MRCP session 
    identifiers, and sharing the RTP audio feed. 
     
 Normal Mode Recognition 
    Normal mode recognition tries to match all of the speech or DTMF 
    received, from the time recognition starts, against the grammar, and 
    returns a no-match status if it fails to match or times out. 
     
 Hotword Mode Recognition 
    Hotword mode is where the recognizer looks for a specific speech 
    grammar or DTMF sequence and ignores speech or DTMF that does not 
    match. It neither times out nor generates a no-match, and completes 
    only on a successful match of the grammar.  
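    The contrast between the two modes can be sketched as below. Grammar 
    matching is reduced to string equality purely for illustration; real 
    recognizers score audio against a grammar.

```python
def normal_mode(tokens, grammar):
    """Normal mode must account for all input: the first non-matching
    input ends recognition with no-match; silence can time out."""
    for tok in tokens:
        if tok == grammar:
            return "match"
        return "no-match"
    return "no-input-timeout"

def hotword_mode(tokens, grammar):
    """Hotword mode skips non-matching input and completes only on a
    match; it never returns no-match and never times out."""
    for tok in tokens:
        if tok == grammar:
            return "match"
    return None  # still listening

print(normal_mode(["hello", "agent"], "agent"))
print(hotword_mode(["hello", "agent"], "agent"))
```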
     
 Voice Enrolled Grammars 
    A recognition resource may optionally support voice enrolled 
    grammars. With this functionality, enrollment is performed using a 
    person's voice.  For example, a list of contacts can be created and 
    maintained by recording the contact names in the caller's voice.  
    This technique is sometimes also called speaker-dependent 
    recognition.     
     
    Voice enrollment has the concept of an enrollment session.  A 
    session to add a new phrase to a personal grammar involves the 
    initial enrollment followed by the repetition of enough utterances 
    before committing the new phrase to the personal grammar.  Each time 
    an utterance is recorded, it is compared for similarity with the 
    other samples, and a clash test is performed against other entries 
    in the personal grammar to ensure there are no similar and 
    confusable entries. 
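    The two enrollment checks above can be sketched as follows. Here 
    difflib string similarity stands in for real acoustic scoring, and 
    the thresholds are made-up; only the control flow (consistency check 
    against earlier samples, then clash test against the personal 
    grammar) mirrors the text.

```python
import difflib

CONSISTENCY_THRESHOLD = 0.7  # illustrative value
CLASH_THRESHOLD = 0.8        # illustrative value

def similarity(a, b):
    # Stand-in for an acoustic similarity score in [0, 1].
    return difflib.SequenceMatcher(None, a, b).ratio()

def check_utterance(utterance, prior_samples, personal_grammar):
    # Consistency: the new utterance should resemble earlier repetitions.
    if prior_samples and all(
            similarity(utterance, s) < CONSISTENCY_THRESHOLD
            for s in prior_samples):
        return "inconsistent"
    # Clash test: it must not be confusable with existing entries.
    for phrase in personal_grammar:
        if similarity(utterance, phrase) >= CLASH_THRESHOLD:
            return "clash"
    return "ok"

print(check_utterance("bob smith", ["bob smith"], ["rob smith"]))
```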
     
  
    Enrollment is done using a recognizer resource.  Controlling which 
    utterances are to be considered for enrollment of a new phrase is 
    done by setting a header field in the RECOGNIZE request.  
     
     
 9.1. Recognizer State Machine 
     
    The recognizer resource is controlled by MRCPv2 requests from the 
    client. Similarly, the resource can respond to these requests or 
    generate asynchronous events to the client to indicate certain 
    conditions during the processing of the stream. Hence, the 
    recognizer maintains state to correlate MRCPv2 requests from the 
    client. The state transitions are described below. 
     
         Idle                   Recognizing               Recognized 
         State                  State                     State 
          |                       |                          | 
          |---------RECOGNIZE---->|---RECOGNITION-COMPLETE-->| 
          |<------STOP------------|<-----RECOGNIZE-----------| 
          |                       |                          | 
          |                       |              |-----------| 
          |              |--------|       GET-RESULT         | 
          |       START-OF-SPEECH |              |---------->| 
          |------------| |------->|                          | 
          |            |          |----------|               | 
          |      DEFINE-GRAMMAR   | START-INPUT-TIMERS       | 
          |<-----------|          |<---------|               | 
          |                       |                          | 
          |                       |------|                   | 
          |-------|               |   RECOGNIZE              | 
          |      STOP             |<-----|                   | 
          |<------|                                          | 
          |                                                  | 
          |<-------------------STOP--------------------------| 
          |<-------------------DEFINE-GRAMMAR----------------|       
     
    If a recognition resource supports voice enrolled grammars, starting 
    an enrollment session does not change the state of the recognizer 
    resource.  Once an enrollment session is started, utterances are 
    enrolled by calling the RECOGNIZE method repeatedly.  The state of 
    the recognizer resource goes from IDLE to RECOGNIZING each time 
    RECOGNIZE is called. 
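    The diagram above can be sketched as a table-driven state machine. 
    Only the transitions drawn in the diagram are modeled; the state 
    names are lowercased for code, and the table is illustrative rather 
    than normative.

```python
# (current state, method or event) -> next state, per the diagram above.
TRANSITIONS = {
    ("idle", "RECOGNIZE"): "recognizing",
    ("idle", "DEFINE-GRAMMAR"): "idle",
    ("idle", "STOP"): "idle",
    ("recognizing", "RECOGNITION-COMPLETE"): "recognized",
    ("recognizing", "STOP"): "idle",
    ("recognizing", "START-OF-SPEECH"): "recognizing",
    ("recognizing", "START-INPUT-TIMERS"): "recognizing",
    ("recognizing", "RECOGNIZE"): "recognizing",
    ("recognized", "RECOGNIZE"): "recognizing",
    ("recognized", "GET-RESULT"): "recognized",
    ("recognized", "STOP"): "idle",
    ("recognized", "DEFINE-GRAMMAR"): "idle",
}

def step(state, event):
    # Raises KeyError for transitions not in the diagram.
    return TRANSITIONS[(state, event)]

s = "idle"
for ev in ["RECOGNIZE", "RECOGNITION-COMPLETE", "GET-RESULT", "STOP"]:
    s = step(s, ev)
print(s)
```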
     
 9.2. Recognizer Methods 
     
    The recognizer supports the following methods. 
     
    recognizer-method     =    recog-only-method 
                          /    enrollment-method 
     
    recog-only-method     =    "DEFINE-GRAMMAR"   ; A 
                          /    "RECOGNIZE"        ; B  
                          /    "INTERPRET"        ; C 
                          /    "GET-RESULT"       ; D 
                          /    "START-INPUT-TIMERS" ; E 
                          /    "STOP"             ; F 
     
    It is OPTIONAL for a recognizer resource to support voice enrolled 
    grammars. If the recognizer resource does support voice enrolled 
    grammars it MUST support the following methods. 
       
      enrollment-method   =    "START-PHRASE-ENROLLMENT" ; G  
                          /    "ENROLLMENT-ROLLBACK"     ; H 
                          /    "END-PHRASE-ENROLLMENT"   ; I 
                          /    "MODIFY-PHRASE"           ; J 
                          /    "DELETE-PHRASE"           ; K 
     
 9.3. Recognizer Events 
     
    The recognizer may generate the following events. 
      recognizer-event    =    "START-OF-SPEECH"         ; L 
                          /    "RECOGNITION-COMPLETE"    ; M 
                          /    "INTERPRETATION-COMPLETE" ; N 
     
     
 9.4. Recognizer Header Fields 
     
    A recognizer message may contain header fields containing request 
    options and information to augment the Method, Response or Event 
    message it is associated with.  
     
      recognizer-header   =    recog-only-header 
                          /    enrollment-header 
     
      recog-only-header   =    confidence-threshold      
                          /    sensitivity-level         
                          /    speed-vs-accuracy         
                          /    n-best-list-length        
                          /    no-input-timeout          
                          /    recognition-timeout       
                          /    waveform-uri 
                          /    input-waveform-uri             
                          /    completion-cause 
                          /    completion-reason         
                          /    recognizer-context-block  
                          /    start-input-timers        
                          /    speech-complete-timeout   
                          /    speech-incomplete-timeout 
                          /    dtmf-interdigit-timeout   
                          /    dtmf-term-timeout         
                          /    dtmf-term-char            
                          /    fetch-timeout             
                          /    failed-uri                
                          /    failed-uri-cause          
                          /    save-waveform             
                          /    new-audio-channel         
                          /    speech-language                     
                          /    ver-buffer-utterance 
                          /    recognition-mode 
                          /    cancel-if-queue 
                          /    hotword-max-duration 
                          /    hotword-min-duration 
                          /    interpret-text 
     
    If a recognition resource supports voice enrolled grammars, the 
    following header fields apply towards using that functionality. 
     
      enrollment-header  =  num-min-consistent-pronunciations 
                          / consistency-threshold   
                          / clash-threshold         
                          / personal-grammar-uri    
                          / phrase-id               
                          / phrase-nl               
                          / weight                  
                          / save-best-waveform      
                          / new-phrase-id           
                          / confusable-phrases-uri  
                          / abort-phrase-enrollment 
     
    Header field          where    s g A B C D E F G H I J K L M N 
          __________________________________________________________ 
    Confidence-Threshold    R      o o - o - o - - - - - - - - - - 
    Sensitivity-Level       R      o o - o - - - - - - - - - - - - 
    Speed-Vs-Accuracy       R      o o - o - - - - - - - - - - - - 
    N-Best-List-Length      R      o o - o - o - - - - - - - - - - 
    No-Input-Timeout        R      o o - o - - - - - - - - - - - - 
    Recognition-Timeout     R      o o - o - - - - - - - - - - - - 
    Waveform-URI            R      - - - - - - - - - - - - - - o - 
    Waveform-URI           2XX     - - - - - - - - - - o - - - - - 
    Input-Waveform-URI      R      - - - o - - - - - - - - - - - - 
    Completion-Cause        R      - - - - - - - - - - - - - - m m 
    Completion-Cause       2XX     - - o o o - - - - - - - - - - - 
    Completion-Cause       4XX     - - m m m - - - - - - - - - - - 
    Completion-Reason       R      - - - - - - - - - - - - - - m m 
    Completion-Reason      2XX     - - o o o - - - - - - - - - - - 
    Completion-Reason      4XX     - - m m m - - - - - - - - - - - 
    Recognizer-Context-Bl.  R      o o - - - - - - - - - - - - - - 
    Start-Input-Timers      R      - - - o - - - - - - - - - - - - 
    Speech-Complete-Time.   R      o o - o - - - - - - - - - - - - 
    Speech-Incomplete-Time. R      o o - o - - - - - - - - - - - - 
    DTMF-Interdigit-Timeo.  R      o o - o - - - - - - - - - - - - 
    DTMF-Term-Timeout       R      o o - o - - - - - - - - - - - - 
    DTMF-Term-Char          R      o o - o - - - - - - - - - - - - 
  

    Fetch-Timeout           R      o o o o - - - - - - - - - - - - 
    Failed-URI              R      - - - - - - - - - - - - - - o o 
    Failed-URI             4XX     - - o o - - - - - - - - - - - - 
    Failed-URI-Cause        R      - - - - - - - - - - - - - - o o 
    Failed-URI-Cause       4XX     - - o o - - - - - - - - - - - - 
    Save-Waveform           R      o o - o - - - - - - - - - - - - 
    New-Audio-Channel       R      - - - o - - - - - - - - - - - - 
    Speech-Language         R      o o o o - - - - - - - - - - - - 
    Ver-Buffer-Utterance    R      o o - o - - - - - - - - - - - - 
    Recognition-Mode        R      - - - o - - - - - - - - - - - - 
    Cancel-If-Queue         R      - - - o - - - - - - - - - - - - 
    Hotword-Max-Duration    R      o o - o - - - - - - - - - - - - 
    Hotword-Min-Duration    R      o o - o - - - - - - - - - - - - 
    Interpret-Text          R      - - - - m - - - - - - - - - - - 
     
    Num-Min-Consistent-Pr   R      o o - - - - - - o - - - - - - - 
    Consistency-Threshold   R      o o - - - - - - o - - - - - - - 
    Clash-Threshold         R      o o - - - - - - o - - - - - - - 
    Personal-Grammar-URI    R      o o - - - - - - o - - o o - - - 
    Phrase-ID               R      - - - - - - - - m - - m m - - - 
    Phrase-NL               R      - - - - - - - - o - - o - - - - 
    Weight                  R      - - - - - - - - o - - o - - - - 
    Save-Best-Waveform      R      o o - - - - - - o - - - - - - - 
    New-Phrase-ID           R      - - - - - - - - - - - o - - - - 
    Confusable-Phrases-URI  R      - - - o - - - - - - - - - - - - 
    Abort-Phrase-Enrollment R      - - - - - - - - - - o - - - - - 
     
    Legend:   (s) - SET-PARAMS, (g) - GET-PARAMS, (A) - DEFINE-GRAMMAR, 
    (B) - RECOGNIZE, (C) - INTERPRET, (D) - GET-RESULT, (E) - START-
    INPUT-TIMERS, (F) - STOP, (G) - START-PHRASE-ENROLLMENT, (H) - 
    ENROLLMENT-ROLLBACK, (I) - END-PHRASE-ENROLLMENT, (J) - MODIFY-
    PHRASE, (K) - DELETE-PHRASE, (L) - START-OF-SPEECH, (M) - 
    RECOGNITION-COMPLETE, (N) - INTERPRETATION-COMPLETE, (o) - Optional 
    (refer to text for further constraints), (m) - Mandatory, (R) - 
    Request, (2XX/4XX) - Response status 
  
    For enrollment-specific header fields that can appear as part of 
    SET-PARAMS or GET-PARAMS methods, the following general rule 
    applies:  the START-PHRASE-ENROLLMENT method must be called before 
    these header fields can be set through the SET-PARAMS method or 
    retrieved through the GET-PARAMS method. 
     
    Note that the waveform-uri header field of the Recognizer resource 
    can also appear in the response to the END-PHRASE-ENROLLMENT method. 
     
     
 Confidence Threshold 
     
    When a recognition resource recognizes or matches a spoken phrase 
    with some portion of the grammar, it associates a confidence level 
    with that conclusion. The confidence-threshold header tells the 
    recognizer resource what confidence level should be considered a 
    successful match. This is a float value between 0.0 and 1.0 
    indicating the recognizer's confidence in the recognition. If the 
    recognizer determines that its confidence in all its recognition 
    results is less than the confidence threshold, then it MUST return 
    no-match as the recognition result. This header field MAY occur in 
    RECOGNIZE, SET-PARAMS or GET-PARAMS. The default value for this 
    field is platform specific. 
     
      confidence-threshold=    "Confidence-Threshold" ":" FLOAT CRLF 
     
 Sensitivity Level    
     
    To filter out background noise and not mistake it for speech, the 
    recognizer may support a variable level of sound sensitivity. The 
    sensitivity-level header is a float value between 0.0 and 1.0 and 
    allows the client to set the sensitivity level of the recognizer. This 
    header field MAY occur in RECOGNIZE, SET-PARAMS or GET-PARAMS. A 
    higher value for this field means higher sensitivity. The default 
    value for this field is platform specific. 
     
      sensitivity-level   =    "Sensitivity-Level" ":" FLOAT CRLF 
     
 Speed Vs Accuracy 
     
    Depending on the implementation and capability of the recognizer 
    resource, it may be tunable towards performance or accuracy. Higher 
    accuracy may mean more processing and higher CPU utilization, 
    meaning fewer calls per server, and vice versa. The speed-vs-
    accuracy header is a float value between 0.0 and 1.0 that allows 
    the client to tune this tradeoff. This header field MAY occur in 
    RECOGNIZE, SET-PARAMS or GET-PARAMS. A higher value for this field 
    means higher speed. The default value for this field is platform 
    specific. 
     
      speed-vs-accuracy   =     "Speed-Vs-Accuracy" ":" FLOAT CRLF 
     
 N Best List Length 
     
    When the recognizer matches an incoming stream with the grammar, it 
    may come up with more than one alternative match because of 
    confidence levels in certain words or conversation paths. If this 
    header field is not specified, the recognition resource by default 
    returns only the best match above the confidence threshold. By 
    setting this header, the client can ask the recognition resource to 
    send more than one alternative. All alternatives must still be 
    above the confidence-threshold. A value greater than one does not 
    guarantee that the recognizer will send the requested number of 
    alternatives. This header field MAY occur in RECOGNIZE, SET-PARAMS 
    or GET-PARAMS. The minimum and default value for this field is 1. 
     
      n-best-list-length  =    "N-Best-List-Length" ":" 1*DIGIT CRLF 
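    As an illustration of how these two header fields interact (this 
    sketch is not part of the specification; the function and value 
    names are assumptions), a recognizer could filter its raw 
    hypotheses as follows: 

```python
# Illustrative sketch only: apply Confidence-Threshold and
# N-Best-List-Length to a recognizer's raw hypotheses.

def select_alternatives(hypotheses, confidence_threshold, n_best_list_length=1):
    """hypotheses: list of (text, confidence) pairs in any order.

    Returns at most n_best_list_length alternatives, best first, all at
    or above the confidence threshold. An empty result means no-match.
    """
    passing = [h for h in hypotheses if h[1] >= confidence_threshold]
    passing.sort(key=lambda h: h[1], reverse=True)
    return passing[:n_best_list_length]
```

    Note that fewer alternatives than requested may survive the 
    threshold, which is consistent with the lack of a guarantee above. 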
  

  
 No Input Timeout 
     
    When recognition is started and there is no speech detected for a 
    certain period of time, the recognizer can send a RECOGNITION-
    COMPLETE event to the client and terminate the recognition 
    operation. The no-input-timeout header field can set this timeout 
    value. The value is in milliseconds. This header field MAY occur in 
    RECOGNIZE, SET-PARAMS or GET-PARAMS. The value for this field ranges 
    from 0 to MAXTIMEOUT, where MAXTIMEOUT is platform specific. The 
    default value for this field is platform specific. 
     
      no-input-timeout    =    "No-Input-Timeout" ":" 1*DIGIT CRLF 
     
 Recognition Timeout 
     
    When recognition is started and there is no match for a certain 
    period of time, the recognizer can send a RECOGNITION-COMPLETE event 
    to the client and terminate the recognition operation. This timer 
    is started when the START-OF-SPEECH event is generated by the 
    resource and specifies the maximum duration of the utterance. When 
    this timer expires, the recognition request completes with a 
    completion cause of "008 too-much-speech-timeout". The recognition-
    timeout header field sets this timeout value. The value is in 
    milliseconds and ranges from 0 to MAXTIMEOUT, where MAXTIMEOUT is 
    platform specific. The default value is 10 seconds. This header 
    field MAY occur in RECOGNIZE, SET-PARAMS or GET-PARAMS. 
     
     
      recognition-timeout =    "Recognition-Timeout" ":" 1*DIGIT CRLF 
  
 Waveform URI  
     
    If the save-waveform header field is set to true, the recognizer 
    MUST record the incoming audio stream of the recognition into a file 
    and provide a URI for the client to access it. This header MUST be 
    present in the RECOGNITION-COMPLETE event if the save-waveform 
    header field was set to true. The URI value of the header MUST be 
    NULL if there was some error condition preventing the server from 
    recording. Otherwise, the URI generated by the server SHOULD be 
    globally unique across the server and all its recognition sessions. 
    The URI SHOULD be available until the session is torn down. 
     
    Similarly, if the save-best-waveform header field is set to true, 
    the recognizer MUST save the audio stream for the best repetition of 
    the phrase that was used during the enrollment session.  The 
    recognizer MUST then record the recognized audio and make it 
    available to the client in the form of a URI returned in the 
    waveform-uri header field in the response to the END-PHRASE-
    ENROLLMENT method. The URI value of the header MUST be NULL if there 
    was some error condition preventing the server from recording. 
    Otherwise, the URI generated by the server SHOULD be globally unique 
    across the server and all its recognition sessions. The URI SHOULD 
    be available until the session is torn down. 
     
      waveform-uri        =    "Waveform-URI" ":" Uri CRLF 
     
 Input-Waveform-Uri 
     
    This optional header field specifies an audio file that has to be 
    processed according to the RECOGNIZE operation.  This enables the 
    client to recognize from a specified buffer or audio file. It MAY be 
    part of the RECOGNIZE method. 
     
      input-waveform-uri    = "Input-Waveform-URI" ":" Uri CRLF 
     
 Completion Cause 
     
    This header field MUST be part of a RECOGNITION-COMPLETE event 
    coming from the recognizer resource to the client. It indicates 
    the reason behind the RECOGNIZE method completion. This header field 
    MUST be sent in the DEFINE-GRAMMAR and RECOGNIZE responses if they 
    return with a failure status and a COMPLETE state. 
     
      completion-cause    =    "Completion-Cause" ":" 1*DIGIT SP 
                               1*VCHAR CRLF 
     
      Cause-Code     Cause-Name     Description 
     
        000           success       RECOGNIZE completed with a match or  
                                    DEFINE-GRAMMAR succeeded in 
                                    downloading and compiling the 
                                    grammar 
        001           no-match      RECOGNIZE completed, but no match 
                                    was found 
        002           no-input-timeout  
                                    RECOGNIZE completed without a match 
                                    due to a no-input-timeout 
        003           recognition-timeout  
                                    RECOGNIZE completed without a match 
                                    due to a recognition-timeout 
        004           gram-load-failure   
                                    RECOGNIZE failed due to a grammar 
                                    load failure. 
        005           gram-comp-failure  
                                    RECOGNIZE failed due to grammar  
                                    compilation failure. 
        006           error         RECOGNIZE request terminated 
                                    prematurely due to a recognizer 
                                    error. 
        007           speech-too-early  
                                    RECOGNIZE request terminated because 
                                    speech was too early. This happens  
                                    when the audio stream is already  
                                    "in-speech" when the RECOGNIZE  
                                    request was received. 
        008           too-much-speech-timeout  
                                    RECOGNIZE request terminated because 
                                    speech was too long. 
        009           uri-failure   Failure accessing a URI. 
        010           language-unsupported 
                                    Language not supported. 
        011           cancelled     A new RECOGNIZE cancelled this one. 
        012           semantics-failure   
                                    Recognition succeeded but semantic 
                                    interpretation of the recognized 
                                    input failed. The RECOGNITION- 
                                    COMPLETE event MUST contain the 
                                    Recognition result with only input 
                                    text and no interpretation. 
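    The cause code and cause name travel together in the header value, 
    e.g. "Completion-Cause: 001 no-match". A minimal client-side parse, 
    shown purely as an illustration (the function name is an 
    assumption), could be: 

```python
def parse_completion_cause(value):
    """Split a Completion-Cause value like '001 no-match' into
    (numeric cause code, cause name)."""
    code, name = value.split(None, 1)
    return int(code), name.strip()
```
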
  
 Completion Reason 
     
    This header field MAY be specified in a RECOGNITION-COMPLETE event 
    coming from the recognizer resource to the client. This contains the 
    reason text behind the RECOGNIZE request completion. This field can 
    be used to communicate text describing the reason for the failure, 
    such as an error in parsing the grammar markup text. 
     
      completion-reason   =    "Completion-Reason" ":"  
                               quoted-string CRLF 
     
 Recognizer Context Block 
     
    This header MAY be sent as part of the SET-PARAMS or GET-PARAMS 
    request. If the GET-PARAMS method contains this header field with 
    no value, then it is a request to the recognizer to return the 
    recognizer context block. The response to such a message MAY contain 
    a recognizer context block as a message entity.  If the server 
    returns a recognizer context block, the response MUST contain this 
    header field and its value MUST match the content-id of that entity. 
     
    If the SET-PARAMS method contains this header field, it MUST contain 
    a message entity containing the recognizer context data, and a 
    content-id matching this header field.  This content-id should match 
    the content-id that came with the context data during the GET-PARAMS 
    operation. 
     
    Each recognition vendor choosing to use this mechanism to handoff 
    recognizer context data between servers MUST distinguish its vendor 
    specific block of data by using an IANA-registered content type in 
    the IANA MIME vendor tree. 
  
     
      recognizer-context-block =    "Recognizer-Context-Block" ":" 
                                    1*VCHAR CRLF 
     
 Start Input Timers 
     
    This header MAY be sent as part of the RECOGNIZE request. A value of 
    false tells the recognizer to start recognition, but not to start 
    the no-input timer yet. The recognizer should not start the timers 
    until the client sends a START-INPUT-TIMERS request to the 
    recognizer. This is useful in the scenario where the recognizer and 
    synthesizer engines are not part of the same session. Here, when a 
    kill-on-barge-in prompt is being played, the client wants the 
    RECOGNIZE request to be simultaneously active so that it can detect 
    and implement kill-on-barge-in, but does not want the recognizer to 
    start the no-input timers until the prompt is finished. The default 
    value is "true".  
     
      start-input-timers  =    "Start-Input-Timers" ":" 
                                    boolean-value CRLF 
     
 Speech Complete Timeout 
     
    This header field specifies the length of silence required following 
    user speech before the speech recognizer finalizes a result (either 
    accepting it or throwing a nomatch event). The speech-complete-
    timeout value is used when the recognizer currently has a complete 
    match of an active grammar, and specifies how long it should wait 
    for more input before declaring a match.  By contrast, the 
    incomplete timeout is used when the speech is an incomplete match 
    to an active 
    grammar. The value is in milliseconds. 
     
      speech-complete-timeout= "Speech-Complete-Timeout" ":"  
                               1*DIGIT CRLF 
     
    A long speech-complete-timeout value delays the result completion 
    and therefore makes the computer's response slow. A short speech-
    complete-timeout may lead to an utterance being broken up 
    inappropriately. Reasonable complete timeout values are typically in 
    the range of 0.3 seconds to 1.0 seconds.  The value for this field 
    ranges from 0 to MAXTIMEOUT, where MAXTIMEOUT is platform specific. 
    The default value for this field is platform specific. This header 
    field MAY occur in RECOGNIZE, SET-PARAMS or GET-PARAMS. 
     
 Speech Incomplete Timeout 
     
    This header field specifies the required length of silence following 
    user speech after which a recognizer finalizes a result.  The 
    incomplete timeout applies when the speech prior to the silence is 
    an incomplete match of all active grammars.  In this case, once the 
    timeout is triggered, the partial result is rejected (with a nomatch 
    event). The value is in milliseconds. The value for this field 
    ranges from 0 to MAXTIMEOUT, where MAXTIMEOUT is platform specific. 
    The default value for this field is platform specific. 
     
      speech-incomplete-timeout= "Speech-Incomplete-Timeout" ":"  
                               1*DIGIT CRLF 
     
    The speech-incomplete-timeout also applies when the speech prior to 
    the silence is a complete match of an active grammar, but where it 
    is possible to speak further and still match the grammar.  By 
    contrast, the complete timeout is used when the speech is a complete 
    match to an active grammar and no further words can be spoken. 
     
    A long speech-incomplete-timeout value delays the result completion 
    and therefore makes the computer's response slow. A short speech-
    incomplete-timeout may lead to an utterance being broken up 
    inappropriately. 
     
    The speech-incomplete-timeout is usually longer than the speech-
    complete-timeout to allow users to pause mid-utterance (for example, 
    to breathe). This header field MAY occur in RECOGNIZE, SET-PARAMS or 
    GET-PARAMS. 
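    To summarize how the two silence timers relate, the following 
    illustrative decision function picks the timer that applies once 
    the user stops speaking; the state names and default values are 
    assumptions, not taken from this specification: 

```python
def silence_timeout_ms(match_state,
                       speech_complete_timeout=800,
                       speech_incomplete_timeout=1500):
    """Pick which silence timer applies after the user stops speaking.

    match_state is one of (illustrative names):
      "complete-final"      - complete match; no further words can match
      "complete-extensible" - complete match, but more speech could still match
      "incomplete"          - speech so far is only a partial match
    """
    if match_state == "complete-final":
        return speech_complete_timeout
    # Both extensible and incomplete matches use the (usually longer)
    # incomplete timeout, giving the user room to pause mid-utterance.
    return speech_incomplete_timeout
```
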
     
 DTMF Interdigit Timeout 
     
    This header field specifies the inter-digit timeout value to use 
    when recognizing DTMF input. The value is in milliseconds.  The 
    value for this field ranges from 0 to MAXTIMEOUT, where MAXTIMEOUT 
    is platform specific. The default value is 5 seconds. This header 
    field MAY occur in RECOGNIZE, SET-PARAMS or GET-PARAMS. 
     
      dtmf-interdigit-timeout= "DTMF-Interdigit-Timeout" ":"  
                               1*DIGIT CRLF 
     
 DTMF Term Timeout 
     
    This header field specifies the terminating timeout to use when 
    recognizing DTMF input. The DTMF-Term-Timeout applies only when no 
    additional input is allowed by the grammar; otherwise, the 
    DTMF-Interdigit-Timeout applies. The value is in milliseconds. The 
    value for this field ranges from 0 to MAXTIMEOUT, where MAXTIMEOUT 
    is platform specific. The default value is 10 seconds. This header 
    field MAY occur in RECOGNIZE, SET-PARAMS or GET-PARAMS. 
     
      dtmf-term-timeout   =    "DTMF-Term-Timeout" ":" 1*DIGIT CRLF 
     
 DTMF-Term-Char 
     
    This header field specifies the terminating DTMF character for DTMF 
    input recognition. The default value is NULL, which is specified as 
    an empty header field. This header field MAY occur in RECOGNIZE, 
    SET-PARAMS or GET-PARAMS. 
     
      dtmf-term-char      =    "DTMF-Term-Char" ":" VCHAR CRLF 
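    The interplay of the DTMF-Interdigit-Timeout and DTMF-Term-Char 
    header fields can be sketched with the following illustrative model 
    of digit collection (not server code; the event representation is 
    an assumption): 

```python
def collect_dtmf(events, interdigit_timeout=5000, term_char="#"):
    """events: list of (time_ms, digit) tuples in arrival order.

    Returns (digits collected, reason): "term-char" if the terminating
    character was seen, or "timeout" if the gap between two digits
    exceeded the inter-digit timeout (or input ran out).
    """
    collected, last_time = [], None
    for t, digit in events:
        if last_time is not None and t - last_time > interdigit_timeout:
            # Inter-digit timeout fired before this digit arrived.
            return "".join(collected), "timeout"
        if term_char and digit == term_char:
            return "".join(collected), "term-char"
        collected.append(digit)
        last_time = t
    return "".join(collected), "timeout"
```

    An empty term_char models the empty DTMF-Term-Char header field, in 
    which case only the timers terminate collection. 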
     
 Fetch Timeout 
     
    When the recognizer needs to fetch grammar documents, this header 
    field controls URI access properties. It defines the recognizer 
    timeout for content that the server may need to fetch from the 
    network. The value is in milliseconds.  The value for this field 
    ranges from 0 to MAXTIMEOUT, where MAXTIMEOUT is platform specific. 
    The default value for this field is platform specific. This header 
    field MAY occur in RECOGNIZE, SET-PARAMS or GET-PARAMS. 
     
      fetch-timeout       =    "Fetch-Timeout" ":" 1*DIGIT CRLF 
     
 Failed URI 
     
    When a recognizer method needs the recognizer to fetch or access a 
    URI and the access fails, the server SHOULD provide the failed URI 
    in this header field in the method response. 
     
      failed-uri               =    "Failed-URI" ":" Uri CRLF 
     
 Failed URI Cause 
     
    When a recognizer method needs the recognizer to fetch or access a 
    URI and the access fails, the server SHOULD provide the URI-specific 
    or protocol-specific response code through this header field in the 
    method response. This field has been defined as alphanumeric to 
    accommodate all protocols, some of which might have a response 
    string instead of a numeric response code. 
     
      failed-uri-cause         =    "Failed-URI-Cause" ":" 1*ALPHANUM 
                                    CRLF 
     
 Save Waveform 
     
    This header field allows the client to indicate to the recognizer 
    that it MUST save the audio stream that was recognized. The 
    recognizer MUST then record the recognized audio, without end-
    pointing and make it available to the client in the form of a URI 
    returned in the waveform-uri header field in the RECOGNITION-
    COMPLETE event. If there was an error in recording the stream or the 
    audio clip is otherwise not available, the recognizer MUST return an 
    empty waveform-uri header field. The default value for this field 
    is "false". 
     
      save-waveform       =    "Save-Waveform" ":" boolean-value CRLF 
     
  

 New Audio Channel 
     
    This header field MAY be specified in a RECOGNIZE message and allows 
    the client to tell the server that, from that point on, it will be 
    sending audio data from a new audio source, channel or speaker. If 
    the recognition resource had collected any line statistics or 
    information, it MUST discard them and start fresh for this 
    RECOGNIZE. Note that if there are multiple resources on the same 
    SIP session that may be collecting or using these line statistics, 
    the client MUST reset the line statistics for all of these 
    resources. This helps in the case where the client may want to 
    reuse an open recognition session with a media resource for 
    multiple telephone calls. 
     
      new-audio-channel   =    "New-Audio-Channel" ":" boolean-value  
                               CRLF 
     
 Speech-Language 
  
    This header field specifies the language of recognition grammar data 
    within a session or request, if it is not specified within the data. 
    The value of this header field should follow RFC 3066. This header 
    field MAY occur in a DEFINE-GRAMMAR, RECOGNIZE, SET-PARAMS or 
    GET-PARAMS request. 
     
      speech-language          =    "Speech-Language" ":" 1*VCHAR CRLF 
       
 Ver-Buffer-Utterance 
     
    This header field is the same as the one described for the 
    Verification resource. This tells the server to buffer the utterance 
    associated with this recognition request into the verification 
    buffer. Sending this header field is not valid if the verification 
    buffer is not instantiated for the session. This buffer is shared 
    across resources within a session; it gets instantiated when a 
    verification resource is added to the session and is released when 
    that resource is released from the session. 
  
 Recognition-Mode 
  
     This header field specifies what mode the RECOGNIZE command should 
     start up in. The value choices are "normal" or "hotword". If the 
     value is "normal", the RECOGNIZE starts matching all speech and 
     DTMF from that point against the grammars specified in the 
     RECOGNIZE command. If any portion of the speech does not match the 
     grammar, the RECOGNIZE command completes with a no-match status. 
     Also, timers may be active to detect speech in the audio, and the 
     RECOGNIZE command may finish because of a timeout waiting for 
     speech. If the value of this header field is "hotword", the 
     RECOGNIZE command starts up in hotword mode, where it only looks 
     for particular keywords or DTMF sequences specified in the grammar 
     and ignores silence or other speech in the audio stream. The 
     default value for this header field is "normal". 
  
      recognition-mode         =    "Recognition-Mode" ":" 1*ALPHA CRLF 
  
 Cancel-If-Queue 
  
     This header field specifies what should happen to this RECOGNIZE 
     method when the client queues more RECOGNIZE methods to the 
     resource. The value for this header field is a Boolean. A value of 
     "true" in a RECOGNIZE method means that, while active, this 
     RECOGNIZE method MUST terminate with a Completion-Cause of 
     "cancelled" when the client queues another RECOGNIZE command to 
     the resource. A value of "false" means that the RECOGNIZE method 
     continues until its operation is complete, and any further 
     RECOGNIZE methods the client sends to the resource are queued. 
     When the current RECOGNIZE method is stopped or completes with a 
     successful match, the first RECOGNIZE method in the queue becomes 
     active. If the current RECOGNIZE fails, all RECOGNIZE methods in 
     the pending queue are cancelled, and each generates a RECOGNITION-
     COMPLETE event with a Completion-Cause of "cancelled". This field 
     MUST exist in all RECOGNIZE methods. 
      
      cancel-if-queue     =    "Cancel-If-Queue" ":" boolean-value CRLF 
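     The queueing behavior described above can be modeled with the 
     following toy sketch; the class, method, and event names are 
     illustrative assumptions, not part of MRCPv2: 

```python
class RecognizeQueue:
    """Toy model of Cancel-If-Queue handling; names are illustrative."""

    def __init__(self):
        self.active = None      # (request_id, cancel_if_queue)
        self.pending = []       # queued (request_id, cancel_if_queue)
        self.events = []        # (request_id, completion_cause)

    def recognize(self, request_id, cancel_if_queue):
        if self.active is None:
            self.active = (request_id, cancel_if_queue)
        elif self.active[1]:
            # Active request was issued with Cancel-If-Queue: true, so
            # a newly queued RECOGNIZE cancels it and takes its place.
            self.events.append((self.active[0], "cancelled"))
            self.active = (request_id, cancel_if_queue)
        else:
            self.pending.append((request_id, cancel_if_queue))

    def complete(self, success):
        self.events.append((self.active[0], "success" if success else "error"))
        self.active = None
        if success:
            if self.pending:
                # First queued RECOGNIZE becomes active.
                self.active = self.pending.pop(0)
        else:
            # A failure cancels everything still waiting in the queue.
            for rid, _ in self.pending:
                self.events.append((rid, "cancelled"))
            self.pending.clear()
```
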
  
 Hotword-Max-Duration 
     
    This header MAY be sent in a hotword mode RECOGNIZE request.  It 
    specifies the maximum length of an utterance (in milliseconds) that 
    should be considered for hotword recognition.  This header, along 
    with Hotword-Min-Duration, can be used to tune performance by 
    preventing the recognizer from evaluating utterances that are too 
    short or too long to be the hotword.  The value is in milliseconds. 
    The default is platform dependent. 
     
      hotword-max-duration     =    "Hotword-Max-Duration" ":" 1*DIGIT 
                                    CRLF 
  
 Hotword-Min-Duration 
     
    This header MAY be sent in a hotword mode RECOGNIZE request.  It 
    specifies the minimum length of an utterance (in milliseconds) that 
    can be considered for hotword recognition.  This header, along with 
    Hotword-Max-Duration, can be used to tune performance by preventing 
    the recognizer from evaluating utterances that are too short or too 
    long to be the hotword.  The value is in milliseconds. The default 
    value is platform dependent. 
     
      hotword-min-duration     = "Hotword-Min-Duration" ":" 1*DIGIT CRLF 
  
  

 Interpret-Text  
              
    This header field is used to provide the text for which a natural 
    language interpretation is desired. The value of this field is a 
    content-id that refers to a MIME entity of type text/plain in the 
    body of the message. This header field MUST be used when invoking 
    the INTERPRET method.   
              
              interpret-text = "Interpret-Text" ":" 1*VCHAR CRLF 
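    To make the content-id indirection concrete, the following sketch 
    assembles the Interpret-Text header and a matching text/plain body 
    part. The framing shown (bare header lines plus body) is a 
    simplification for illustration, not the exact MRCPv2 message 
    format, and the function and content-id are assumptions: 

```python
def build_interpret_body(text, content_id="interp-text-0001@example.com"):
    """Return (header lines, body) for an INTERPRET request where the
    Interpret-Text header points at the Content-Id of the text/plain
    entity carrying the input text.  Names here are illustrative."""
    headers = [
        "Interpret-Text: " + content_id,
        "Content-Type: text/plain",
        "Content-Id: " + content_id,
        "Content-Length: " + str(len(text)),
    ]
    return headers, text
```

    A client would place the header lines in the INTERPRET request and 
    carry the text itself as the message body. 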
  
 Num-Min-Consistent-Pronunciations  
     
    This header MAY be specified in a START-PHRASE-ENROLLMENT, SET-
    PARAMS, or GET-PARAMS method and is used to specify the minimum 
    number of consistent pronunciations that must be obtained to voice 
    enroll a new phrase. The minimum value is 1. The default value is 
    platform specific and MAY be greater than 1. 
  
      num-min-consistent-pronunciations  =  
                   "Num-Min-Consistent-Pronunciations" ":" 1*DIGIT CRLF  
     
     
 Consistency-Threshold  
     
    This header MAY be sent as part of the START-PHRASE-ENROLLMENT, SET-
    PARAMS, or GET-PARAMS method.  Used during voice-enrollment, this 
    header specifies how similar an utterance needs to be to a 
    previously enrolled pronunciation of the same phrase to be 
    considered "consistent." The higher the threshold, the closer the 
    match between an utterance and previous pronunciations must be for 
    the pronunciation to be considered consistent. The threshold is a 
    float value between 0.0 and 1.0. The default value for this field 
    is platform specific. 
     
      consistency-threshold = "Consistency-Threshold" ":" FLOAT CRLF 
      
     
 Clash-Threshold 
     
    This header MAY be sent as part of the START-PHRASE-ENROLLMENT, SET-
    PARAMS, or GET-PARAMS method.  Used during voice-enrollment, this 
    header specifies how similar the pronunciations of two different 
    phrases can be before they are considered to be clashing. For 
    example, pronunciations of phrases such as "John Smith" and "Jon 
    Smits" may be so similar that they are difficult to distinguish 
    correctly. A smaller threshold reduces the number of clashes 
    detected. The threshold is a float value between 0.0 and 1.0. The 
    default value for this field is platform specific. 
     
      clash-threshold     =    "Clash-Threshold" ":" 1*DIGIT CRLF 
  
  
 S Shanmugham                  IETF-Draft                       Page 67 

                            MRCPv2 Protocol              October, 2004 

     
 Personal-Grammar-URI  
     
    This header specifies the speaker-trained grammar to be used or 
    referenced during enrollment operations.  For example, a contact 
    list for user "Jeff" could be stored at the Personal-Grammar-
    URI="http://myserver/myenrollmentdb/jeff-list". There is no default 
    value for this header field. 
     
      personal-grammar-uri = "Personal-Grammar-URI" ":" Uri CRLF 
  
     
 Phrase-Id 
     
    This header field identifies a phrase in a personal grammar and is 
    also returned in recognition results.  This header field MAY occur 
    in START-PHRASE-ENROLLMENT, MODIFY-PHRASE, or DELETE-PHRASE 
    requests.  There is no default value for this header field. 
     
      phrase-id           =    "Phrase-ID" ":" 1*VCHAR CRLF 
  
  
 Phrase-NL 
     
    This is a string specifying the natural language statement to 
    execute when the phrase is recognized.  This header field MAY occur 
    in START-PHRASE-ENROLLMENT and MODIFY-PHRASE requests. There is no 
    default value for this header field. 
     
      phrase-nl           =    "Phrase-NL" ":" 1*VCHAR CRLF 
     
     
 Weight  
     
    The value of this header field represents the occurrence likelihood 
    of this branch of the grammar.  The weights are normalized to sum to 
    one at compilation time, so use the value of '1' if you want all 
    branches to have the same weight. This header field MAY occur in 
    START-PHRASE-ENROLLMENT and MODIFY-PHRASE requests. The default 
    value for this field is platform specific. 
     
      weight         = "Weight" ":" WEIGHT CRLF 
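
    The normalization rule above can be sketched in a few lines of 
    Python (illustrative only; the function name and data are not part 
    of this specification): 

```python
def normalize_weights(weights):
    """Normalize branch weights so they sum to 1, as a grammar
    compiler does at compilation time."""
    total = sum(weights)
    if total <= 0:
        raise ValueError("weights must sum to a positive value")
    return [w / total for w in weights]

# Raw weights of 1 on every branch yield equal normalized likelihoods.
print(normalize_weights([1, 1, 1, 1]))  # [0.25, 0.25, 0.25, 0.25]
print(normalize_weights([3, 1]))        # [0.75, 0.25]
```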
  
     
 Save-Best-Waveform  
     
    This header field allows the client to request that the recognizer 
    save the audio stream for the best repetition of the phrase 
    captured during the enrollment session.  When this field is set to 
    true, the recognizer MUST record the recognized audio and make it 
    available to the client in the form of a URI returned in the 
    waveform-uri header 
    field in the response to the END-PHRASE-ENROLLMENT method.  If there 
    was an error in recording the stream or the audio clip is otherwise 
    not available, the recognizer MUST return an empty waveform-uri 
    header field. 
     
      save-best-waveform  = "Save-Best-Waveform" ":" Boolean-value CRLF 
     
     
 New-Phrase-Id  
     
    This header field replaces the id used to identify the phrase in a 
    personal grammar.  The recognizer returns the new id when using an 
    enrollment grammar.  This header field MAY occur in MODIFY-PHRASE 
    requests. 
     
      new-phrase-id       =    "New-Phrase-ID" ":" 1*VCHAR CRLF 
  
  
 Confusable-Phrases-URI  
     
    This optional header field specifies the grammar that defines 
    invalid phrases for enrollment.  For example, typical applications 
    do not allow an enrolled phrase that is also a command word.  This 
    header field MAY occur in RECOGNIZE requests. 
     
      confusable-phrases-uri   =    "Confusable-Phrases-URI" ":"  
                                    Uri CRLF 
     
     
 Abort-Phrase-Enrollment  
      
    This header field can optionally be specified in the END-PHRASE-
    ENROLLMENT method to abort the phrase enrollment, rather than 
    committing the phrase to the personal grammar.  
      
      abort-phrase-enrollment  =    "Abort-Phrase-Enrollment" ":"  
                                    Boolean-value CRLF 
  
     
 9.5. Recognizer Message Body  
     
    A recognizer message may carry additional data associated with the 
    method, response, or event. The client may send the grammar to be 
    recognized in DEFINE-GRAMMAR or RECOGNIZE requests. When the 
    grammar is sent in the DEFINE-GRAMMAR method, the server should be 
    able to download, compile, and optimize the grammar. The RECOGNIZE 
    request MUST contain a list of grammars that need to be active 
    during the recognition. The server resource may send the 
    recognition results in the RECOGNITION-COMPLETE event or the 
    GET-RESULT response. This data is carried in the message body of 
    the corresponding MRCPv2 message. 
  

     
 Recognizer Grammar Data 
     
    Recognizer grammar data from the client to the server can be 
    provided inline or by reference. Either way, it is carried as a 
    MIME entity in the message body of the MRCPv2 request message. The 
    grammar, whether specified inline or by reference, defines what 
    the recognizer matches against during recognition, and is 
    expressed in one of the standard grammar specification formats, 
    such as the W3C's XML or ABNF grammar formats or Sun's Java Speech 
    Grammar Format.  All MRCPv2 servers MUST support the W3C's XML-
    based grammar markup format [11] (MIME-type application/srgs+xml) 
    and SHOULD support the ABNF form (MIME-type application/srgs). 
      
    When a grammar is specified in-line in the message, the client MUST 
    provide a content-id for that grammar as part of the content 
    headers. The server MUST store the grammar associated with that 
    content-id for the duration of the session. A stored grammar can be 
    overwritten by defining a new grammar with the same content-id. 
    Grammars that have been associated with a content-id can be 
    referenced through a special "session:" URI scheme.  
     
    Example: 
      session:help@root-level.store  
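
    The store-and-overwrite behavior can be modeled as follows (an 
    illustrative Python sketch; the class and method names are 
    hypothetical, not part of this specification): 

```python
class SessionGrammarStore:
    """Models a server's per-session grammar store keyed by content-id."""

    def __init__(self):
        self._grammars = {}

    def define(self, content_id, grammar_body):
        # Defining a grammar with an existing content-id overwrites
        # the previously stored grammar for this session.
        self._grammars[content_id] = grammar_body

    def resolve(self, uri):
        # A "session:" URI refers back to a stored content-id.
        if not uri.startswith("session:"):
            raise ValueError("not a session URI")
        return self._grammars[uri[len("session:"):]]

store = SessionGrammarStore()
store.define("help@root-level.store", "<grammar>...v1...</grammar>")
store.define("help@root-level.store", "<grammar>...v2...</grammar>")
print(store.resolve("session:help@root-level.store"))  # the v2 grammar
```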
     
    If grammar data needs to be specified by external URI reference, 
    the MIME-type text/uri-list is used to list the one or more URIs 
    that specify the grammar data. All servers MUST support the HTTP 
    URI access mechanism. 
     
    If the data to be defined consists of a mix of URI references and 
    inline grammar data, the multipart/mixed MIME-type is used, with 
    embedded MIME blocks of text/uri-list, application/srgs, or 
    application/srgs+xml. The character set and encoding used in the 
    grammar data may be specified according to standard MIME-type 
    definitions. 
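
    Such a mixed body can be illustrated with Python's standard email 
    package (illustrative only; MRCPv2 messages are not email, and the 
    extra MIME-Version header the package emits is not part of an 
    MRCPv2 message): 

```python
from email.mime.multipart import MIMEMultipart
from email.mime.nonmultipart import MIMENonMultipart
from email.mime.text import MIMEText

# Build a multipart/mixed body mixing grammar URI references with an
# inline SRGS grammar, in the spirit of Example 3 below.
msg = MIMEMultipart("mixed", boundary="break")

uris = MIMEText(
    "http://www.example.com/Directory-Name-List.grxml\r\n"
    "http://www.example.com/Department-List.grxml\r\n",
    "uri-list")
msg.attach(uris)

srgs = MIMENonMultipart("application", "srgs+xml")
srgs.set_payload('<?xml version="1.0"?>\n'
                 '<grammar xml:lang="en-US" version="1.0"/>')
srgs["Content-Id"] = "<request1@form-level.store>"
msg.attach(srgs)

body = msg.as_string()
```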
     
    When more than one grammar URI or inline grammar block is specified 
    in a message body of the RECOGNIZE request, it is an active list of 
    grammar alternatives to listen.  The ordering of the list implies 
    the precedence of the grammars, with the first grammar in the list 
    having the highest precedence. 
     
    Example 1:   
         Content-Type: application/srgs+xml 
         Content-Id: <request1@form-level.store> 
         Content-Length: 104 
          
         <?xml version="1.0"?> 
          
         <!-- the default grammar language is US English --> 
  

         <grammar xml:lang="en-US" version="1.0"> 
          
         <!-- single language attachment to tokens --> 
         <rule id="yes"> 
                    <one-of> 
                        <item xml:lang="fr-CA">oui</item> 
                        <item xml:lang="en-US">yes</item> 
                    </one-of>  
            </rule>  
          
         <!-- single language attachment to a rule expansion --> 
            <rule id="request"> 
                    may I speak to 
                    <one-of xml:lang="fr-CA"> 
                        <item>Michel Tremblay</item> 
                        <item>Andre Roy</item> 
                    </one-of> 
            </rule> 
          
            <!-- multiple language attachment to a token --> 
            <rule id="people1"> 
                    <token lexicon="en-US,fr-CA"> Robert </token> 
            </rule> 
          
            <!-- the equivalent single-language attachment expansion --> 
            <rule id="people2"> 
                    <one-of> 
                        <item xml:lang="en-US">Robert</item> 
                        <item xml:lang="fr-CA">Robert</item> 
                    </one-of> 
            </rule> 
          
            </grammar> 
     
    Example 2: 
        Content-Type: text/uri-list 
        Content-Length: 176 
         
        session:help@root-level.store 
        http://www.example.com/Directory-Name-List.grxml 
        http://www.example.com/Department-List.grxml 
        http://www.example.com/TAC-Contact-List.grxml 
        session:menu1@menu-level.store 
           
    Example 3: 
        Content-Type: multipart/mixed; boundary="break" 
         
        --break 
        Content-Type: text/uri-list 
        Content-Length: 176 
        http://www.example.com/Directory-Name-List.grxml 
        http://www.example.com/Department-List.grxml 
        http://www.example.com/TAC-Contact-List.grxml 
         
        --break 
        Content-Type: application/srgs+xml 
        Content-Id: <request1@form-level.store> 
        Content-Length: 104 
         
        <?xml version="1.0"?> 
         
        <!-- the default grammar language is US English --> 
        <grammar xml:lang="en-US" version="1.0"> 
         
        <!-- single language attachment to tokens --> 
        <rule id="yes"> 
                    <one-of> 
                        <item xml:lang="fr-CA">oui</item> 
                        <item xml:lang="en-US">yes</item> 
                    </one-of>  
           </rule>  
         
        <!-- single language attachment to a rule expansion --> 
           <rule id="request"> 
                    may I speak to 
                    <one-of xml:lang="fr-CA"> 
                        <item>Michel Tremblay</item> 
                        <item>Andre Roy</item> 
                    </one-of> 
           </rule> 
         
           <!-- multiple language attachment to a token --> 
           <rule id="people1"> 
                    <token lexicon="en-US,fr-CA"> Robert </token> 
           </rule> 
         
           <!-- the equivalent single-language attachment expansion --> 
           <rule id="people2"> 
                    <one-of> 
                        <item xml:lang="en-US">Robert</item> 
                        <item xml:lang="fr-CA">Robert</item> 
                    </one-of> 
           </rule> 
         
           </grammar> 
        --break-- 
  
 Recognizer Result Data 
     
    Recognition result data from the server is carried in the MRCPv2 
    message body of the RECOGNITION-COMPLETE event or the GET-RESULT 
    response message as MIME entities. All servers MUST support 
    Natural Language Semantics Markup Language (NLSML), an XML markup 
    based on an early draft from the W3C.  This is the default format 
    for returning recognition results to the client; hence, all 
    servers MUST support the MIME-type application/x-nlsml. 
     
    MRCP-specific additions to this result format have been made and 
    are fully described in section 9.6, with a normative definition of 
    the DTD and schema in the Appendix. 
     
    Example 1:   
        Content-Type: application/x-nlsml 
        Content-Length: 104 
         
        <?xml version="1.0"?> 
        <result grammar="http://theYesNoGrammar"> 
            <interpretation> 
                <instance> 
                    <myApp:yes_no> 
                        <response>yes</response> 
                    </myApp:yes_no> 
                </instance> 
                <input>ok</input> 
            </interpretation> 
        </result> 
     
  
     
 Enrollment Result Data 
     
    Enrollment results are returned in the RECOGNITION-COMPLETE event 
    as part of the recognition result XML data. The XML Schema and DTD 
    for this XML data are described in section 9.7, with a normative 
    definition in the Appendix. 
     
  
  
 Recognizer Context Block 
     
    When the client has to change servers within a call, the client 
    MAY collect a block of data from the first server and provide it 
    to the second server. Such a switch may happen because the client 
    needs different language support or because the server issued a 
    redirect. The first recognizer resource may have collected 
    acoustic and other data during its recognition. When switching 
    servers, communicating this data may allow the recognition 
    resource on the new server to provide better recognition based on 
    the acoustic data collected by the previous recognizer. This block 
    of data is vendor-specific and MUST be carried as MIME-type 
    application/octets in the body of the message. 
     

  

    This block of data is communicated in the SET-PARAMS and 
    GET-PARAMS method/response messages. In the GET-PARAMS method, if 
    an empty recognizer-context-block header field is present, the 
    recognizer should return its vendor-specific context block in the 
    message body as a MIME entity with a specific content-id.  The 
    content-id value should also be specified in the recognizer-
    context-block header field of the GET-PARAMS response.  A 
    SET-PARAMS request wishing to provide this vendor-specific data 
    should send it in the message body as a MIME entity with the same 
    content-id that it received from the GET-PARAMS response.  The 
    content-id should also be sent in the recognizer-context-block 
    header field of the SET-PARAMS message. 
     
    Each automatic speech recognition (ASR) vendor choosing to use 
    this mechanism to hand off recognizer context data among its 
    servers should distinguish its vendor-specific block of data from 
    those of other vendors by choosing a content-id that its servers 
    will recognize as unique. 
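
    The handoff can be modeled abstractly as follows (a hypothetical 
    Python sketch; the function names and dict-based "servers" are 
    illustrative, not protocol elements): 

```python
# Illustrative model of the vendor-specific context-block handoff.
def get_context(server):
    """Models a GET-PARAMS exchange returning (content-id, opaque block)."""
    return server["context_id"], server["context_block"]

def set_context(server, content_id, block):
    """Models a SET-PARAMS exchange; a server only accepts a block
    whose content-id it recognizes as its own vendor's."""
    if content_id != server["vendor_context_id"]:
        return False
    server["context_block"] = block
    return True

old_server = {"context_id": "acme-asr-ctx@example.com",
              "context_block": b"\x00\x01opaque",
              "vendor_context_id": "acme-asr-ctx@example.com"}
new_server = {"vendor_context_id": "acme-asr-ctx@example.com",
              "context_block": None}

cid, block = get_context(old_server)
accepted = set_context(new_server, cid, block)
print(accepted)  # True
```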
  
  
  
 9.6. Natural Language Semantic Markup Language 
     
    The general purpose of the NL Semantics Markup is to represent 
    information automatically extracted from a user's utterances by a 
    semantic interpretation component, where utterance is to be taken in 
    the general sense of a meaningful user input in any modality 
    supported by the platform. A specific architecture can take 
    advantage of this representation by using it to convey content among 
    various system components that generate and make use of the markup. 
    In MRCPv2, it is used to convey these results between the 
    recognition resource on the MRCP server and the MRCP client. 
     
    Components that generate NLSML: 
         1. Automatic Speech Recognition (ASR) 
         2. Natural language understanding 
         3. Other input media interpreters (e.g. DTMF, pointing, 
           keyboard) 
         4. Reusable dialog components 
         5. Multimedia integration 
          
    Components that use NLSML: 
         1. Dialog manager 
         2. Multimedia integration 
       
    A platform may also choose to use this general format as the basis 
    of a general semantic result that is carried along and filled out 
    during each stage of processing. In addition, future systems may 
    also potentially make use of this markup to convey abstract semantic 
    content to be rendered into natural language by a natural language 
    generation component. 
     
  

 Markup Functions 
  
    A semantic interpretation system that supports the Natural Language 
    Semantics Markup Language is responsible for interpreting natural 
    language inputs and formatting the interpretation as defined in this 
    document. Semantic interpretation is typically either included as 
    part of the speech recognition process, or involves one or more 
    additional components, such as natural language interpretation 
    components and dialog interpretation components.  
     
    The elements of the markup fall into the following general 
    functional categories: 
     
    Interpretation: 
     
    Elements and attributes representing the semantics of the user's 
    utterance, including the <result>, <interpretation>, and <instance> 
    elements. The <result> element contains the full result of 
    processing one utterance. It may contain multiple <interpretation> 
    elements if the interpretation of the utterance results in multiple 
    alternative meanings due to uncertainty in speech recognition or 
    natural language understanding. There are at least two reasons for 
    providing multiple interpretations: 
     
       1. another component, such as a dialog manager, might have 
         additional information, for example, information from a 
         database, that would allow it to select a preferred 
         interpretation from among the possible interpretations returned 
         from the semantic interpreter. 
        
       2. a dialog manager that was unable to select between several 
         competing interpretations could use this information to go back 
         to the user and find out what was intended. For example, Did 
         you say "Boston" or "Austin"? 
  
    Side Information: 
     
    Elements and attributes representing additional information about 
    the interpretation, over and above the interpretation itself. Side 
    information includes 
     
       1. Whether an interpretation was achieved (the <nomatch> element) 
         and the system's confidence in an interpretation (the 
         "confidence" attribute of <interpretation>). 
        
       2. Alternative interpretations (<interpretation>) 
  
        
       3. Input formats and ASR information: The <input> element, 
         representing the input to the semantic interpreter. 
     
  

    Multi-modal integration: 
     
    When more than one modality is available for input, the 
    interpretation of the inputs needs to be coordinated. The "mode" 
    attribute of <input> supports this by indicating whether the 
    utterance was input by speech, dtmf, pointing, etc. 
    The "timestamp-start" and "timestamp-end" attributes of <input> 
    also provide for temporal coordination by indicating when inputs 
    occurred. 
  
  
 Overview of NLSML Elements and their Relationships 
     
    The elements in NLSML fall into two categories: 
     
       1. description of the input that was processed. 
        
       2. description of the meaning which was extracted from the input. 
        
    Some elements can contain multiple instances of other elements. 
    For example, a <result> can contain multiple <interpretation> 
    elements, each of which is taken to be an alternative. Similarly, 
    <input> can contain multiple child <input> elements, which are 
    taken to be cumulative. A URI reference to an XForms data model is 
    permitted but not required. 
 
    To illustrate the basic usage of these elements, consider, as a 
    simple example, the utterance "ok" (interpreted as "yes"). The 
    example below illustrates how that utterance and its 
    interpretation would be represented in the NL Semantics markup. 
     
    <result grammar="http://theYesNoGrammar"> 
      <interpretation> 
         <instance> 
          <yes_no> 
            <response>yes</response> 
          </yes_no> 
          </instance> 
        <input>ok</input> 
      </interpretation> 
    </result> 
     
    This example includes only the minimum required information. There 
    is an overall <result> element which includes one interpretation, 
    containing the application-specific elements "<yes_no>" and 
    "<response>".  
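
    A client can pull the interpretation out of such a result with any 
    standard XML parser; for example, a minimal Python sketch 
    (illustrative only): 

```python
import xml.etree.ElementTree as ET

# The minimal yes/no result, as well-formed XML.
nlsml = """<result grammar="http://theYesNoGrammar">
  <interpretation>
    <instance>
      <yes_no>
        <response>yes</response>
      </yes_no>
    </instance>
    <input>ok</input>
  </interpretation>
</result>"""

root = ET.fromstring(nlsml)
interp = root.find("interpretation")
print(interp.find("input").text)                     # ok
print(interp.find("instance/yes_no/response").text)  # yes
```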
     
 Elements and Attributes 
    
   RESULT Root Element 
    
    Attributes: grammar, x-model, xmlns 
  

     
    The root element of the markup is <result>. The <result> element 
    includes one or more <interpretation> elements. Multiple 
    interpretations can result from ambiguities in the input or in the 
    semantic interpretation. If the "grammar" and "x-model" attributes 
    don't apply to all of the interpretations in the result they can be 
    overridden for individual interpretations at the <interpretation> 
    level. 
     
    Attributes: 
     
       1. grammar: The grammar or recognition rule matched by this 
         result. The format of the grammar attribute will match the rule 
         reference semantics defined in the grammar specification. 
         Specifically, the rule reference will be in the external XML 
         form for grammar rule references. The dialog markup interpreter 
         needs to know the grammar rule that is matched by the utterance 
         because multiple rules may be simultaneously active. The value 
         is the grammar URI used by the dialog markup interpreter to 
         specify the grammar. The grammar can be overridden by a grammar 
         attribute in the <interpretation> element if the input was 
         ambiguous as to which grammar it matched. 
        
       2. x-model: The URI which defines the XForms data model used for 
         this result. The x-model can be overridden by an x-model 
         attribute in the <interpretation> element if the input was 
         ambiguous as to which x-model it matched.(optional) 
     
    <result grammar="http://grammar"> 
      <interpretation> 
       .... 
      </interpretation> 
    </result> 
     
   INTERPRETATION Element 
     
    Attributes: confidence, grammar, x-model 
     
    An <interpretation> element contains a single semantic 
    interpretation. 
     
    Attributes: 
     
       1. confidence: An integer from 0-100 indicating the semantic 
         analyzer's confidence in this interpretation. At this point 
         there is no formal, platform-independent definition of 
         confidence. (optional) 
        
       2. grammar: The grammar or recognition rule matched by this 
         interpretation. This attribute is only needed under 
         <interpretation> if it is necessary to override a grammar 
         that was defined at the <result> level. (optional) 
  
       3. x-model: The URI which defines the XForms data model used 
         for this interpretation. As in the case of "grammar", this 
         attribute only needs to be defined under <interpretation> if 
         it is necessary to override the x-model specification at the 
         <result> level. (optional) 
     
    Interpretations must be sorted best-first by some measure of 
    "goodness". The goodness measure is "confidence" if present, 
    otherwise, it is some platform-specific indication of quality. 
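
    The ordering rule can be sketched as follows (illustrative Python; 
    the dictionary representation of interpretations is hypothetical): 

```python
# Sort interpretations best-first by "confidence" when present,
# falling back to a platform-specific quality score (here, 0).
interpretations = [
    {"text": "Austin", "confidence": 40},
    {"text": "Boston", "confidence": 85},
    {"text": "Houston"},  # no confidence attribute
]

ranked = sorted(interpretations,
                key=lambda i: i.get("confidence", 0),
                reverse=True)
print([i["text"] for i in ranked])  # ['Boston', 'Austin', 'Houston']
```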
     
    The x-model and grammar are expected to be specified most frequently 
    at the <result> level, because most often one data model will be 
    sufficient for the entire result. However, it can be overridden at 
    the <interpretation> level because it is possible that different 
    interpretations may have different data models - perhaps because 
    they match different grammar rules. 
     
    The <interpretation> element includes an optional <input> element 
    which contains the input being analyzed, and an <instance> element 
    containing the interpretation of the utterance. 
     
       <interpretation confidence="75" grammar="http://grammar"  
        x-model="http://dataModel"> 
        ... 
       </interpretation> 
     
   INSTANCE Element 
    
    The <instance> element contains the interpretation of the 
    utterance. If a reference to a data model is present (that is, if 
    there is an "x-model" attribute on the <result> or 
    <interpretation> elements), the markup describing the instance 
    should conform to that data model. When there is semantic markup 
    in the grammar that does not create semantic objects, but instead 
    only performs a semantic translation of a portion of the input, 
    such as translating "coke" to "coca-cola", the instance contains 
    the whole input with the translation applied, as shown in Example 
    2 below. If no semantic objects are created and no semantic 
    translation is applied, the instance value is the same as the 
    input value. 
     
    Attributes: 
     
       1. confidence: Each element of the instance may have a confidence 
         attribute, defined in the NL semantics namespace. The 
         confidence attribute contains an integer value in the range 
         from 0-100 reflecting the system's confidence in the analysis 
         of that slot. The meaning of confidence scores has not been 
         defined in a platform-independent way. The default value of 
         "confidence" is 100. (optional) 
  
    Example 1: 
     
    <instance name="nameAddress"> 
      <nameAddress> 
          <street confidence="75">123 Maple Street</street> 
          <city>Mill Valley</city> 
          <state>CA</state> 
          <zip>90952</zip> 
      </nameAddress> 
    </instance>  
    <input> 
      My address is 123 Maple Street, 
      Mill Valley, California, 90952 
    </input> 
     
    Example 2: 
     
    <instance> 
        I would like to buy a coca-cola 
    </instance>  
    <input> 
      I would like to buy a coke 
    </input> 
     
     
   INPUT Element 
     
    The <input> element is the text representation of a user's input. It 
    includes an optional "confidence" attribute which indicates the 
    recognizer's confidence in the recognition result (as opposed to the 
    confidence in the interpretation, which is indicated by the 
    "confidence" attribute of <interpretation>). Optional "timestamp-
    start" and "timestamp-end" attributes indicate the start and end 
    times of a spoken utterance, in ISO 8601 format. 
     
    Attributes: 
     
       1. timestamp-start: The time at which the input began. (optional) 
        
       2. timestamp-end: The time at which the input ended. (optional) 
        
       3. mode: The modality of the input, for example, speech, dtmf, 
         etc. (optional) 
        
       4. confidence: The confidence of the recognizer in the 
         correctness of the input, in the range 0.0 to 1.0 (optional) 
     

  

    Note that it may not make sense for temporally overlapping inputs to 
    have the same mode; however, this constraint is not expected to be 
    enforced by platforms. 
     
    When there is no time zone designator, ISO 8601 time representations 
    default to local time. 
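
    Such timestamps parse directly as ISO 8601 date-times; for example 
    (illustrative Python, with hypothetical values): 

```python
from datetime import datetime

# Timestamps without a zone designator parse as naive (local) times.
start = datetime.fromisoformat("2000-04-03T00:00:00")
end = datetime.fromisoformat("2000-04-03T00:00:02")

# Duration of the utterance, in seconds.
duration = (end - start).total_seconds()
print(duration)  # 2.0
```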
     
    There are three possible formats for the <input> element. 
     
    a) The <input> element can contain simple text: 
            
           <input>onions</input> 
       
      A future possibility is for <input> to contain not only text but 
      additional markup that represents prosodic information that was 
      contained in the original utterance and extracted by the speech 
      recognizer. This depends on the availability of ASRs that are 
      capable of producing prosodic information. 
       
    b) An <input> tag can also contain additional <input> tags. Having 
      additional input elements allows the representation to support 
      future multi-modal inputs as well as finer-grained speech 
      information, such as timestamps for individual words and word-
      level confidences. 
       
      <input>  
         <input mode="speech" confidence="0.5" 
           timestamp-start="2000-04-03T00:00:00"  
           timestamp-end="2000-04-03T00:00:00.2">fried</input> 
         <input mode="speech" confidence="1.0" 
           timestamp-start="2000-04-03T00:00:00.25"  
           timestamp-end="2000-04-03T00:00:00.6">onions</input> 
      </input> 
       
    c) Finally, the <interpretation> element can contain <nomatch> and 
      <noinput> elements, which describe situations in which the speech 
      recognizer (or other media interpreter) received input that it was 
      unable to process, or did not receive any input at all, 
      respectively. 
     
   NOMATCH Element 
     
    The <nomatch> element under <input> is used to indicate that the 
    semantic interpreter was unable to successfully match any input with 
    confidence above the threshold. It can optionally contain the text 
    of the best of the (rejected) matches. 
     
    <interpretation> 
       <instance/> 
       <input confidence="0.1">  
          <nomatch/> 
       </input> 
    </interpretation> 
    <interpretation>   
       <instance/>        
       <input mode="speech" confidence="0.1">            
         <nomatch>I want to go to New York</nomatch>        
       </input> 
    </interpretation> 
     
   NOINPUT Element 
     
    <noinput> indicates that there was no input; a timeout occurred in 
    the speech recognizer due to silence. 
     
    <interpretation> 
       <instance/> 
       <input> 
          <noinput/> 
       </input> 
    </interpretation> 
     
    If there are multiple levels of inputs, the most natural place for 
    <nomatch> and <noinput> elements is under the highest level of 
    <input> for <noinput>, and under the appropriate mode-specific 
    level of <input> for <nomatch>. So <noinput> means "no input at 
    all" and <nomatch> means "no match in speech modality" or "no 
    match in dtmf modality". For example, to represent garbled speech 
    combined with dtmf "1 2 3 4", we would have the following: 
     
    <input>  
       <input mode="speech"><nomatch/></input> 
       <input mode="dtmf">1 2 3 4</input> 
    </input> 
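
    A consumer of this structure can walk the child <input> elements 
    and record a per-mode outcome (illustrative Python only): 

```python
import xml.etree.ElementTree as ET

doc = """<input>
  <input mode="speech"><nomatch/></input>
  <input mode="dtmf">1 2 3 4</input>
</input>"""

root = ET.fromstring(doc)
results = {}
for child in root.findall("input"):
    mode = child.get("mode")
    if child.find("nomatch") is not None:
        results[mode] = None  # no match in this modality
    else:
        results[mode] = child.text
print(results)  # {'speech': None, 'dtmf': '1 2 3 4'}
```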
     
    While <noinput> could be represented as an attribute of input, 
    <nomatch> cannot, since it could potentially include PCDATA content 
    with the best match. For parallelism, <noinput> is also an element. 
     
 9.7. Enrollment Results 
    
    The enrollment result XML data contains the following elements to 
    provide information associated with the voice enrollment: 
     
      1. Num-Clashes                  
      2. Num-Good-Repetitions         
      3. Num-Repetitions-Still-Needed 
      4. Consistency-Status           
      5. Clash-Phrase-Ids 
      6. Transcriptions              
      7. Confusable-Phrases          
  
  
  

    1. Num-Clashes 
     
    This is not a header field, but part of the recognition results. It 
    is returned in a RECOGNITION-COMPLETE event.  Its value represents 
    the number of clashes that this pronunciation has with other 
    pronunciations in an active enrollment session.  The header field 
    Clash-Threshold determines the sensitivity of the clash measurement.  
    Clash testing can be turned off completely by setting Clash-
    Threshold to 0. 
     
      num-clashes    = "<num-clashes>" 1*DIGIT "</num-clashes>" CRLF 
  
      
    2. Num-Good-Repetitions 
     
    This is not a header field, but part of the recognition results. It 
    is returned in a RECOGNITION-COMPLETE event.  Its value represents 
    the number of consistent pronunciations obtained so far in an active 
    enrollment session. 
     
      num-good-repetitions = "<num-good-repetitions>" 1*DIGIT 
                             "</num-good-repetitions>"  CRLF 
     
     
    3. Num-Repetitions-Still-Needed 
     
    This is not a header field, but part of the recognition results. It 
    is returned in a RECOGNITION-COMPLETE event.  Its value represents 
    the number of consistent pronunciations that must still be obtained 
    before the new phrase can be added to the enrollment grammar.  The 
    number of consistent pronunciations required is determined by the 
    header Num-Min-Consistent-Pronunciations, whose default value is 
    two.  The returned value must be 0 before the system will allow you 
    to end an enrollment session for a new phrase. 
     
      num-repetitions-still-needed =  
                     "<num-repetitions-still-needed>" 1*DIGIT 
                     "</num-repetitions-still-needed>" CRLF 
     
     
    4. Consistency-Status 
     
    This is not a header field, but part of the recognition results. It 
    is returned in a RECOGNITION-COMPLETE event. This is used to 
    indicate how consistent the repetitions are when learning a new 
    phrase. It can have the values of CONSISTENT, INCONSISTENT and 
    UNDECIDED. 
     
      consistency-status       = "<consistency-status>" 1*ALPHA 
                                 "</consistency-status>" CRLF 
     
  
     
    5. Clash-Phrase-Ids 
     
    This is not a header field, but part of the recognition results. 
    It is returned in a RECOGNITION-COMPLETE event and is filled with 
    the phrase ids of the clashing pronunciation(s). This element is 
    absent if there are no clashes. 
     
      phrase-id           = "<item>" 1*ALPHA "</item>" CRLF 
      clash-phrase-ids    = "<clash-phrase-ids>" 1*phrase-id 
      "</clash-phrase-ids>" CRLF 
     
     
    6. Transcriptions 
     
    This is not a header field, but part of the recognition results. 
    It is optionally returned in a RECOGNITION-COMPLETE event and is 
    filled with the transcriptions obtained in the last repetition of 
    the phrase being enrolled. 
  
      transcription       = "<item>" 1*OCTET "</item>" CRLF 
      transcriptions      = "<transcriptions>" 1*transcription 
                            "</transcriptions>" CRLF 
     
     
    7. Confusable-Phrases 
     
    This is not a header field, but part of the recognition results. 
    It is optionally returned in a RECOGNITION-COMPLETE event and is 
    filled with the list of phrases from a command grammar that are 
    confusable with the phrase being added to the personal grammar. 
  
      confusable-phrase   = "<item>" 1*OCTET "</item>" CRLF 
      confusable-phrases  = "<confusable-phrases>" 1*confusable-phrase 
                            "</confusable-phrases>" CRLF 
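As an informal illustration (not part of this specification), the 
elements above can be pulled out of a RECOGNITION-COMPLETE message body 
with a small parser. The XML fragment and field names below follow the 
enrollment-result example later in this document; the parsing helper 
itself is hypothetical:

```python
# Illustrative sketch (not part of this specification): extract the
# enrollment-result elements from a RECOGNITION-COMPLETE message body.
import xml.etree.ElementTree as ET

BODY = """<?xml version="1.0"?>
<result grammar="Personal-Grammar-URI"
        xmlns:mrcp="http://www.ietf.org/mrcp2">
  <mrcp:enrollment-result>
    <num-clashes> 2 </num-clashes>
    <num-good-repetitions> 1 </num-good-repetitions>
    <num-repetitions-still-needed> 1 </num-repetitions-still-needed>
    <consistency-status> consistent </consistency-status>
    <clash-phrase-ids>
      <item> Jeff </item> <item> Andre </item>
    </clash-phrase-ids>
  </mrcp:enrollment-result>
</result>"""

def parse_enrollment(body):
    ns = {"mrcp": "http://www.ietf.org/mrcp2"}
    enr = ET.fromstring(body).find("mrcp:enrollment-result", ns)
    get = lambda tag: enr.findtext(tag, default="").strip()
    return {
        "num_clashes": int(get("num-clashes")),
        "num_good_repetitions": int(get("num-good-repetitions")),
        "num_repetitions_still_needed":
            int(get("num-repetitions-still-needed")),
        "consistency_status": get("consistency-status"),
        "clash_phrase_ids":
            [i.text.strip() for i in enr.findall("clash-phrase-ids/item")],
    }

result = parse_enrollment(BODY)
```

A client would typically keep re-issuing RECOGNIZE while 
num_repetitions_still_needed remains greater than zero.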
     
     
     
 9.8. DEFINE-GRAMMAR 
     
    The DEFINE-GRAMMAR method, from the client to the server, provides 
    a grammar and tells the server to define the grammar: download it 
    if needed and compile it. 
     
    If the server resource is in the recognition state, the server 
    MUST respond to the DEFINE-GRAMMAR request with a failure status. 
     

  
    If the resource is in the idle state and is able to successfully 
    load and compile the grammar, the server MUST return a success 
    status code and a request-state of COMPLETE. 
     
    If the recognizer could not define the grammar for some reason, 
    e.g., the download failed, the grammar failed to compile, or the 
    grammar was in an unsupported form, the MRCPv2 response for the 
    DEFINE-GRAMMAR method MUST contain a failure status code of 407 
    and a Completion-Cause header field describing the failure reason. 
     
    Example: 
      C->S:MRCP/2.0 589 DEFINE-GRAMMAR 543257 
           Channel-Identifier: 32AECB23433801@speechrecog 
           Content-Type: application/srgs+xml 
           Content-Id: <request1@form-level.store> 
           Content-Length: 104 
            
           <?xml version="1.0"?> 
            
           <!-- the default grammar language is US English --> 
           <grammar xml:lang="en-US" version="1.0"> 
            
           <!-- single language attachment to tokens --> 
           <rule id="yes"> 
               <one-of> 
                   <item xml:lang="fr-CA">oui</item> 
                   <item xml:lang="en-US">yes</item> 
               </one-of>  
           </rule>  
     
           <!-- single language attachment to a rule expansion --> 
           <rule id="request"> 
               may I speak to 
               <one-of xml:lang="fr-CA"> 
                   <item>Michel Tremblay</item> 
                   <item>Andre Roy</item> 
               </one-of> 
           </rule> 
     
           </grammar> 
     
      S->C:MRCP/2.0 73 543257 200 COMPLETE 
           Channel-Identifier: 32AECB23433801@speechrecog 
           Completion-Cause: 000 success 
     
     
      C->S:MRCP/2.0 334 DEFINE-GRAMMAR 543258 
           Channel-Identifier: 32AECB23433801@speechrecog 
           Content-Type: application/srgs+xml 
           Content-Id: <helpgrammar@root-level.store> 
           Content-Length: 104 
  
            
           <?xml version="1.0"?> 
            
           <!-- the default grammar language is US English --> 
           <grammar xml:lang="en-US" version="1.0"> 
     
           <rule id="request"> 
               I need help 
           </rule> 
     
           </grammar> 
     
      S->C:MRCP/2.0 73 543258 200 COMPLETE 
           Channel-Identifier: 32AECB23433801@speechrecog 
           Completion-Cause: 000 success 
     
      C->S:MRCP/2.0 723 DEFINE-GRAMMAR 543259 
           Channel-Identifier: 32AECB23433801@speechrecog 
           Content-Type: application/srgs+xml 
           Content-Id: <request2@field-level.store> 
           Content-Length: 104 
            
           <?xml version="1.0" encoding="UTF-8"?> 
            
           <!DOCTYPE grammar PUBLIC "-//W3C//DTD GRAMMAR 1.0//EN" 
                             "http://www.w3.org/TR/speech-
           grammar/grammar.dtd"> 
            
           <grammar xmlns="http://www.w3.org/2001/06/grammar" 
           xml:lang="en" 
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"  
           xsi:schemaLocation="http://www.w3.org/2001/06/grammar  
                      http://www.w3.org/TR/speech-grammar/grammar.xsd" 
                      version="1.0" mode="voice" root="basicCmd"> 
            
           <meta name="author" content="Stephanie Williams"/> 
            
           <rule id="basicCmd" scope="public"> 
             <example> please move the window </example> 
             <example> open a file </example> 
            
             <ruleref     
                uri="http://grammar.example.com/politeness.grxml#startPo
           lite"/> 
            
             <ruleref uri="#command"/> 
             <ruleref  
                uri="http://grammar.example.com/politeness.grxml#endPoli
           te"/> 
            
           </rule> 
            
           <rule id="command"> 
  
             <ruleref uri="#action"/> <ruleref uri="#object"/> 
           </rule> 
            
           <rule id="action"> 
              <one-of> 
                 <item weight="10"> open   <tag>TAG-CONTENT-1</tag>  
                     </item> 
                 <item weight="2">  close  <tag>TAG-CONTENT-2</tag>  
                     </item> 
                 <item weight="1">  delete <tag>TAG-CONTENT-3</tag>  
                     </item> 
                 <item weight="1">  move   <tag>TAG-CONTENT-4</tag>  
                     </item> 
               </one-of> 
           </rule> 
            
           <rule id="object"> 
             <item repeat="0-1"> 
               <one-of> 
                 <item> the </item> 
                 <item> a </item> 
               </one-of> 
             </item> 
            
             <one-of> 
                 <item> window </item> 
                 <item> file </item> 
                 <item> menu </item> 
             </one-of> 
           </rule> 
            
           </grammar> 
     
     
      S->C:MRCP/2.0 69 543259 200 COMPLETE 
           Channel-Identifier: 32AECB23433801@speechrecog 
           Completion-Cause: 000 success 
     
      C->S:MRCP/2.0 155 RECOGNIZE 543260 
           Channel-Identifier: 32AECB23433801@speechrecog 
           N-Best-List-Length: 2 
           Content-Type: text/uri-list 
           Content-Length: 176 
            
           session:request1@form-level.store 
           session:request2@field-level.store 
           session:helpgrammar@root-level.store 
     
      S->C:MRCP/2.0 48 543260 200 IN-PROGRESS 
           Channel-Identifier: 32AECB23433801@speechrecog 
     
  
      S->C:MRCP/2.0 48 START-OF-SPEECH 543260 IN-PROGRESS 
           Channel-Identifier: 32AECB23433801@speechrecog 
            
      S->C:MRCP/2.0 486 RECOGNITION-COMPLETE 543260 COMPLETE 
           Channel-Identifier: 32AECB23433801@speechrecog 
           Completion-Cause: 000 success 
           Waveform-URI: http://web.media.com/session123/audio.wav 
           Content-Type: application/x-nlsml 
           Content-Length: 276 
            
           <?xml version="1.0"?> 
           <result x-model="http://IdentityModel" 
             xmlns:xf="http://www.w3.org/2000/xforms" 
             grammar="session:request1@form-level.store"> 
                <interpretation> 
                     <xf:instance name="Person"> 
                       <Person> 
                           <Name> Andre Roy </Name> 
                       </Person> 
                     </xf:instance> 
                     <input>   may I speak to Andre Roy </input> 
                </interpretation> 
           </result> 
     
 9.9. RECOGNIZE 
     
    The RECOGNIZE method from the client to the server tells the 
    recognizer to start recognition and provides it with a grammar to 
    match the input against. The RECOGNIZE method can carry header 
    fields to control the sensitivity, confidence level, and level of 
    detail in the results provided by the recognizer. These header 
    fields override the current defaults set by a previous SET-PARAMS 
    method. 
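As a rough sketch (not normative, and using a hypothetical 
frame_request helper), a client might assemble a RECOGNIZE request 
whose header fields override the SET-PARAMS defaults for that one 
request. Since the start-line carries the overall message length, the 
sketch computes it iteratively until it is stable:

```python
# Illustrative sketch (not normative): frame a RECOGNIZE request whose
# per-request header fields override the SET-PARAMS defaults for this
# request only.  CRLF line endings and the start-line message-length
# field follow the message format used in the examples in this draft.

CRLF = "\r\n"

def frame_request(method, request_id, headers, body=""):
    if body:
        headers = dict(headers, **{"Content-Length": str(len(body))})
    header_block = CRLF.join(f"{k}: {v}" for k, v in headers.items())
    def assemble(length):
        return (f"MRCP/2.0 {length} {method} {request_id}{CRLF}"
                + header_block + CRLF + CRLF + body)
    # The message length counts the whole message, including the
    # start-line that carries it, so iterate until it is stable.
    length = 0
    while True:
        msg = assemble(length)
        if len(msg) == length:
            return msg
        length = len(msg)

req = frame_request(
    "RECOGNIZE", 543257,
    {"Channel-Identifier": "32AECB23433801@speechrecog",
     "Confidence-Threshold": "0.9",   # per-request override
     "Content-Type": "application/srgs+xml"},
    body='<?xml version="1.0"?><grammar/>')
```

The loop converges because adding digits to the length field can only 
increase the total length monotonically.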
     
    The RECOGNIZE method can operate in normal or hotword mode, as 
    specified by the Recognition-Mode header field. The default value 
    is "normal". 
     
    Note that the recognizer may also enroll the collected utterance in 
    a personal grammar if the Enroll-Utterance header field is set to 
    true and an Enrollment is active (via an earlier execution of the 
    START-PHRASE-ENROLLMENT method). If so, and if the RECOGNIZE request 
    contains a Content-Id header field then the resulting grammar (which 
    includes the personal grammar as a sub-grammar) can be referenced 
    from elsewhere by using "session:foo", where "foo" is the value of 
    the Content-Id header field. 
     
    If the resource is in the recognizing state, the server MUST 
    respond to the RECOGNIZE request with a failure status. If the 
    resource is in the idle state and was able to successfully start 
    the recognition, the server MUST return a success code and a 
    request-state of IN-PROGRESS. This 

  
    means that the recognizer is active and that the client should 
    expect further events with this request-id.  
     
    If the resource could not start a recognition, it MUST return a 
    failure status code of 407 and a Completion-Cause header field 
    describing the cause of failure. 
     
    For the recognizer resource, this is the only request that can 
    return request-state of IN-PROGRESS, meaning that recognition is in 
    progress. When the recognition completes by matching one of the 
    grammar alternatives or by a time-out without a match or for some 
    other reason, the recognizer resource MUST send the client a 
    RECOGNITION-COMPLETE event with the result of the recognition and a 
    request-state of COMPLETE.  
     
    For large grammars that can take a long time to compile and for 
    grammars which are used repeatedly, the client could issue a DEFINE-
    GRAMMAR request with the grammar ahead of time. In such a case the 
    client can issue the RECOGNIZE request and reference the grammar 
    through the "session:" special URI. This also applies in general if 
    the client wants to restart recognition with a previous inline 
    grammar.   
     
    Note that since the audio and the messages are carried over 
    separate communication paths, there may be a race condition 
    between the start of the flow of audio and the receipt of the 
    RECOGNIZE method. For example, if audio flow is started by the 
    client at the same time as the RECOGNIZE method is sent, either 
    the audio or the RECOGNIZE will arrive at the recognizer first. As 
    another example, the client may choose to continuously send audio 
    to the server and signal the server to recognize using the 
    RECOGNIZE method. A number of mechanisms exist to resolve this 
    condition, and the mechanism chosen is left to the implementers of 
    the recognition resource. The recognizer should expect the media 
    to start flowing when it receives the RECOGNIZE request, and 
    should not buffer anything it receives beforehand. 
     
     
    Example: 
      C->S:MRCP/2.0 479 RECOGNIZE 543257 
           Channel-Identifier: 32AECB23433801@speechrecog 
           Confidence-Threshold: 0.9 
           Content-Type: application/srgs+xml 
           Content-Id: <request1@form-level.store> 
           Content-Length: 104 
            
           <?xml version="1.0"?> 
            
           <!-- the default grammar language is US English --> 
           <grammar xml:lang="en-US" version="1.0"> 
            
           <!-- single language attachment to tokens --> 
  
           <rule id="yes"> 
               <one-of> 
                   <item xml:lang="fr-CA">oui</item> 
                   <item xml:lang="en-US">yes</item> 
               </one-of> 
           </rule> 
            
           <!-- single language attachment to a rule expansion --> 
           <rule id="request"> 
               may I speak to 
               <one-of xml:lang="fr-CA"> 
                   <item>Michel Tremblay</item> 
                   <item>Andre Roy</item> 
               </one-of> 
           </rule> 
            
           </grammar> 
     
      S->C:MRCP/2.0 48 543257 200 IN-PROGRESS 
           Channel-Identifier: 32AECB23433801@speechrecog 
     
      S->C:MRCP/2.0 49 START-OF-SPEECH 543257 IN-PROGRESS 
           Channel-Identifier: 32AECB23433801@speechrecog 
            
      S->C:MRCP/2.0 467 RECOGNITION-COMPLETE 543257 COMPLETE 
           Channel-Identifier: 32AECB23433801@speechrecog 
           Completion-Cause: 000 success 
           Waveform-URI: http://web.media.com/session123/audio.wav 
           Content-Type: application/x-nlsml 
           Content-Length: 276 
            
           <?xml version="1.0"?> 
           <result x-model="http://IdentityModel" 
             xmlns:xf="http://www.w3.org/2000/xforms" 
             grammar="session:request1@form-level.store"> 
               <interpretation> 
                   <xf:instance name="Person"> 
                       <Person> 
                           <Name> Andre Roy </Name> 
                       </Person> 
                   </xf:instance> 
                    <input> may I speak to Andre Roy </input> 
               </interpretation> 
           </result> 
     
 9.10.     STOP 
     
    The STOP method from the client to the server tells the resource to 
    stop recognition if one is active. If a RECOGNIZE request is active 
    and the STOP request successfully terminates it, then the response 
    header contains an Active-Request-Id-List header field containing 
  
    the request-id of the RECOGNIZE request that was terminated. In this 
    case, no RECOGNITION-COMPLETE event will be sent for the terminated 
    request. If there was no recognition active, then the response 
    MUST NOT contain an Active-Request-Id-List header field. Either 
    way, the response MUST contain a status of 200 (Success). 
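This branching can be sketched in a few lines of client-side 
bookkeeping (illustrative only; handle_stop_response and the callback 
table are hypothetical, not part of this specification):

```python
# Illustrative sketch (not normative): client-side handling of a STOP
# response.  If the Active-Request-Id-List header field is present, no
# RECOGNITION-COMPLETE event will arrive for the listed request-ids,
# so the client stops waiting for them.

def handle_stop_response(headers, pending_recognitions):
    """Return the pending-recognition table with any request-ids the
    STOP terminated removed."""
    id_list = headers.get("Active-Request-Id-List")
    if id_list is None:
        return pending_recognitions          # nothing was active
    terminated = {int(x) for x in id_list.split(",")}
    return {rid: cb for rid, cb in pending_recognitions.items()
            if rid not in terminated}

pending = {543257: "on_complete_callback"}
remaining = handle_stop_response(
    {"Active-Request-Id-List": "543257"}, pending)
```

Either way the STOP response itself carries a 200 status; only the 
presence of the header field tells the client whether a recognition 
was actually cancelled.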
     
    Example: 
      C->S:MRCP/2.0 573 RECOGNIZE 543257 
           Channel-Identifier: 32AECB23433801@speechrecog 
           Confidence-Threshold: 0.9 
           Content-Type: application/srgs+xml 
           Content-Id: <request1@form-level.store> 
           Content-Length: 104 
            
           <?xml version="1.0"?> 
            
           <!-- the default grammar language is US English --> 
           <grammar xml:lang="en-US" version="1.0"> 
            
           <!-- single language attachment to tokens --> 
           <rule id="yes"> 
               <one-of> 
                   <item xml:lang="fr-CA">oui</item> 
                   <item xml:lang="en-US">yes</item> 
               </one-of> 
           </rule> 
            
           <!-- single language attachment to a rule expansion --> 
           <rule id="request"> 
               may I speak to 
               <one-of xml:lang="fr-CA"> 
                   <item>Michel Tremblay</item> 
                   <item>Andre Roy</item> 
               </one-of> 
           </rule> 
            
           </grammar> 
     
      S->C:MRCP/2.0 47 543257 200 IN-PROGRESS 
           Channel-Identifier: 32AECB23433801@speechrecog 
     
      C->S:MRCP/2.0 28 STOP 543258 
           Channel-Identifier: 32AECB23433801@speechrecog 
     
      S->C:MRCP/2.0 67 543258 200 COMPLETE 
           Channel-Identifier: 32AECB23433801@speechrecog 
           Active-Request-Id-List: 543257 
     
 9.11.     GET-RESULT 
     

  
    The GET-RESULT method from the client to the server can be issued 
    when the recognizer is in the recognized state. This request allows 
    the client to retrieve results for a completed recognition.  This is 
    useful if the client decides it wants more alternatives or more 
    information. When the server receives this request it should re-
    compute and return the results according to the recognition 
    constraints provided in the GET-RESULT request.  
     
    The GET-RESULT request could specify constraints such as a 
    different Confidence-Threshold or N-Best-List-Length. This feature 
    is optional, and the automatic speech recognition (ASR) engine may 
    return a status of unsupported feature. 
     
    Example: 
      C->S:MRCP/2.0 73 GET-RESULT 543257 
           Channel-Identifier: 32AECB23433801@speechrecog 
           Confidence-Threshold: 0.9 
            
     
      S->C:MRCP/2.0 487 543257 200 COMPLETE 
           Channel-Identifier: 32AECB23433801@speechrecog 
           Content-Type: application/x-nlsml 
           Content-Length: 276 
     
           <?xml version="1.0"?> 
           <result x-model="http://IdentityModel" 
             xmlns:xf="http://www.w3.org/2000/xforms" 
             grammar="session:request1@form-level.store"> 
               <interpretation> 
                   <xf:instance name="Person"> 
                       <Person> 
                           <Name> Andre Roy </Name> 
                       </Person> 
                   </xf:instance> 
                    <input> may I speak to Andre Roy </input> 
               </interpretation> 
           </result> 
     
 9.12.     START-OF-SPEECH 
     
    This is an event from the recognizer to the client indicating that 
    it has detected speech or a DTMF digit. This event is useful for 
    implementing kill-on-barge-in scenarios when the synthesizer 
    resource is in a different session than the recognizer resource 
    and hence is not aware of the incoming audio source. In these 
    cases, it is up to the client to act as a proxy and issue the 
    BARGE-IN-OCCURRED method to the synthesizer resource. The 
    recognizer resource also sends a unique proxy-sync-id in the 
    header for this event; the client passes this value to the 
    synthesizer in the BARGE-IN-OCCURRED method. 
     
  
    This event should be generated irrespective of whether or not the 
    synthesizer and recognizer are on the same server. 
     
 9.13.     START-INPUT-TIMERS 
     
    This request is sent from the client to the recognition resource 
    when it knows that a kill-on-barge-in prompt has finished playing. 
    This is useful in the scenario where the recognition and 
    synthesizer engines are not in the same session. While a kill-on-
    barge-in prompt is being played, the client wants the RECOGNIZE 
    request to be simultaneously active so that it can detect and 
    implement kill-on-barge-in. But at the same time the client does 
    not want the recognizer to start the no-input timers until the 
    prompt is finished. The Start-Input-Timers header field in the 
    RECOGNIZE request allows the client to specify whether the timers 
    should be started. The recognizer should not start the timers 
    until the client sends a START-INPUT-TIMERS method to the 
    recognizer. 
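The sequencing described above can be sketched as follows 
(illustrative only; send_request is a hypothetical transport function, 
and only the ordering and header usage are the point):

```python
# Illustrative sketch (not normative) of the kill-on-barge-in client
# sequence: start recognition with the no-input timers held, then
# release the timers once the prompt has finished playing.

def run_barge_in_dialog(send_request, grammar_body):
    # 1. Start recognition immediately, but hold the no-input timers
    #    while the kill-on-barge-in prompt is still playing.
    send_request("RECOGNIZE",
                 {"Start-Input-Timers": "false",
                  "Content-Type": "application/srgs+xml"},
                 body=grammar_body)
    # 2. ... prompt plays; barge-in can already be detected ...
    # 3. Once the prompt has finished, let the timers run.
    send_request("START-INPUT-TIMERS", {})

log = []
run_barge_in_dialog(lambda m, h, body="": log.append(m), "<grammar/>")
```

The RECOGNIZE must be issued before the prompt completes, otherwise 
barge-in during the prompt cannot be detected at all.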
     
 9.14.     RECOGNITION-COMPLETE 
     
    This is an event from the recognizer resource to the client 
    indicating that the recognition completed. The recognition result 
    is sent in the MRCPv2 body of the message. The request-state field 
    MUST be COMPLETE, indicating that this is the last event with that 
    request-id and that the request with that request-id is now 
    complete. The recognizer context still holds the results and the 
    audio waveform input of that recognition until the next RECOGNIZE 
    request is issued. A URI to the audio waveform MAY be returned to 
    the client in a Waveform-URI header field in the RECOGNITION-
    COMPLETE event. The client can use this URI to retrieve or play 
    back the audio. 
     
    Note that if an enrollment session is active with the recognizer, 
    the event can contain recognition or enrollment results, depending 
    on what was spoken. 
     
     
    Example 1:  
      C->S:MRCP/2.0 487 RECOGNIZE 543257 
           Channel-Identifier: 32AECB23433801@speechrecog 
           Confidence-Threshold: 0.9 
           Content-Type: application/srgs+xml 
           Content-Id: <request1@form-level.store> 
           Content-Length: 104 
            
           <?xml version="1.0"?> 
            
           <!-- the default grammar language is US English --> 
           <grammar xml:lang="en-US" version="1.0"> 
            
           <!-- single language attachment to tokens --> 
  
           <rule id="yes"> 
               <one-of> 
                   <item xml:lang="fr-CA">oui</item> 
                   <item xml:lang="en-US">yes</item> 
               </one-of> 
           </rule> 
            
           <!-- single language attachment to a rule expansion --> 
           <rule id="request"> 
               may I speak to 
               <one-of xml:lang="fr-CA"> 
                   <item>Michel Tremblay</item> 
                   <item>Andre Roy</item> 
               </one-of> 
           </rule> 
            
           </grammar> 
     
      S->C:MRCP/2.0 48 543257 200 IN-PROGRESS 
           Channel-Identifier: 32AECB23433801@speechrecog 
     
      S->C:MRCP/2.0 49 START-OF-SPEECH 543257 IN-PROGRESS 
           Channel-Identifier: 32AECB23433801@speechrecog 
            
      S->C:MRCP/2.0 465 RECOGNITION-COMPLETE 543257 COMPLETE 
           Channel-Identifier: 32AECB23433801@speechrecog 
           Completion-Cause: 000 success 
           Waveform-URI: http://web.media.com/session123/audio.wav 
           Content-Type: application/x-nlsml 
           Content-Length: 276 
            
           <?xml version="1.0"?> 
           <result x-model="http://IdentityModel" 
             xmlns:xf="http://www.w3.org/2000/xforms" 
             grammar="session:request1@form-level.store"> 
               <interpretation> 
                   <xf:instance name="Person"> 
                       <Person> 
                           <Name> Andre Roy </Name> 
                       </Person> 
                   </xf:instance> 
                    <input> may I speak to Andre Roy </input> 
               </interpretation> 
           </result> 
  
     
    Example 2: 
  
      S->C:MRCP/2.0 465 RECOGNITION-COMPLETE 543257 COMPLETE 
           Channel-Identifier: 32AECB23433801@speechrecog 
           Completion-Cause: 000 success 
  
           Content-Type: application/x-nlsml 
           Content-Length: 123 
            
           <?xml version= "1.0"?> 
           <result grammar="Personal-Grammar-URI" 
                   xmlns:mrcp="http://www.ietf.org/mrcp2"> 
              <mrcp:result-type type="ENROLLMENT" /> 
              <mrcp:enrollment-result> 
                <num-clashes> 2 </num-clashes> 
                <num-good-repetitions> 1 </num-good-repetitions> 
                <num-repetitions-still-needed>  
                   1  
                </num-repetitions-still-needed> 
                <consistency-status> consistent </consistency-status> 
                <clash-phrase-ids>  
                     <item> Jeff </item> <item> Andre </item>  
                </clash-phrase-ids> 
                <transcriptions> 
                     <item> m ay b r ow k er </item>  
                     <item> m ax r aa k ah </item> 
                </transcriptions> 
                <confusable-phrases> 
                     <item> 
                          <phrase> call </phrase> 
                          <confusion-level> 10 </confusion-level> 
                     </item> 
                </confusable-phrases> 
              </mrcp:enrollment-result> 
           </result> 
  
 9.15.     START-PHRASE-ENROLLMENT 
     
    The START-PHRASE-ENROLLMENT method sent from the client to the 
    server starts a new phrase enrollment session during which the 
    client may call RECOGNIZE to enroll a new utterance.  This consists 
    of a set of calls to RECOGNIZE in which the caller speaks a phrase 
    several times so the system can "learn" it. The phrase is then added 
    to a personal grammar (speaker-trained grammar), and the system can 
    recognize it later. 
     
    Only one phrase enrollment session may be active at a time. The 
    Personal-Grammar-URI identifies the grammar that is used during 
    enrollment to store the personal list of phrases.  Once RECOGNIZE is 
    called, the result is returned in a RECOGNITION-COMPLETE event and 
    may contain either an enrollment result OR a recognition result for 
    a regular recognition.  
     
    Calling END-PHRASE-ENROLLMENT ends the ongoing phrase enrollment 
    session, which is typically done after a sequence of successful 
    calls to RECOGNIZE.  This method can be called to commit the new 

  
    phrase to the personal grammar or to abort the phrase enrollment 
    session.  
     
    The Personal-Grammar-URI, which specifies the grammar to contain the 
    new enrolled phrase, will be created if it does not exist. Also, the 
    personal grammar may ONLY contain phrases added via a phrase  
    enrollment session.  
  
    The Phrase-ID passed to this method is used to identify this 
    phrase in the grammar and is returned as the speech input when a 
    RECOGNIZE request matches against the grammar. The Phrase-NL is 
    similarly returned in a RECOGNITION-COMPLETE event in the same 
    manner as other NL in a grammar. The tag-format of this NL is 
    vendor-specific. 
  
    If the client has specified Save-Best-Waveform as true, then the 
    response after ending the phrase enrollment session should contain 
    the location/URI of a recording of the best repetition of the 
    learned phrase. 
  
    Example: 
    C->S:  MRCP/2.0 123 START-PHRASE-ENROLLMENT 543258 
           Channel-Identifier: 32AECB23433801@speechrecog 
           Num-Min-Consistent-Pronunciations: 2 
           Consistency-Threshold: 30 
           Clash-Threshold: 12 
           Personal-Grammar-URI: <personal grammar uri> 
           Phrase-Id: <phrase id> 
           Phrase-NL: <NL phrase> 
           Weight: 1 
           Save-Best-Waveform: true 
     
    S->C:  MRCP/2.0 49 543258 200 COMPLETE 
           Channel-Identifier: 32AECB23433801@speechrecog 
  
 9.16.     ENROLLMENT-ROLLBACK 
  
    The ENROLLMENT-ROLLBACK method discards the last live utterance 
    from the RECOGNIZE operation. This method should be invoked when 
    the caller provides undesirable input such as non-speech noise, 
    side-speech, commands, or utterances from the RECOGNIZE grammar. 
    Note that this method does not provide a stack of rollback states; 
    executing ENROLLMENT-ROLLBACK twice in succession without an 
    intervening recognition operation has no effect the second time. 
     
    Example: 
    C->S:  MRCP/2.0 49 ENROLLMENT-ROLLBACK 543261 
           Channel-Identifier: 32AECB23433801@speechrecog 
     
    S->C:  MRCP/2.0 49 543261 200 COMPLETE 
           Channel-Identifier: 32AECB23433801@speechrecog 
  
 S Shanmugham                  IETF-Draft                       Page 95 


  
 9.17.     END-PHRASE-ENROLLMENT  
      
    The END-PHRASE-ENROLLMENT method can only be called during an active 
    phrase enrollment session, started by calling the START-PHRASE-
    ENROLLMENT method.  It may NOT be called during an ongoing RECOGNIZE 
    operation.  It should be called to commit the new phrase to the 
    grammar when successive calls to RECOGNIZE have succeeded and Num-
    Repetitions-Still-Needed has been returned as 0 in the RECOGNITION-
    COMPLETE event.  Alternatively, it can be called with the Abort-
    Phrase-Enrollment header field to abort the phrase enrollment 
    session. 
     
    If the client has specified Save-Best-Waveform as true in the START-
    PHRASE-ENROLLMENT request, then the response should contain the 
    location/URI of a recording of the best repetition of the learned 
    phrase. 
  
    Example: 
    C->S:  MRCP/2.0 49 END-PHRASE-ENROLLMENT 543262 
           Channel-Identifier: 32AECB23433801@speechrecog 
       
     
    S->C:  MRCP/2.0 123 543262 200 COMPLETE 
           Channel-Identifier: 32AECB23433801@speechrecog 
           Waveform-URI: <waveform uri> 
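The overall enrollment flow (start a session, collect repetitions via RECOGNIZE until Num-Repetitions-Still-Needed reaches 0, then commit or abort) can be sketched as client-side logic. This is a non-normative sketch: the `client` object and its method names are hypothetical stand-ins for an MRCPv2 client library, not part of this specification.

```python
def enroll_phrase(client, phrase_id, grammar_uri, max_attempts=10):
    """Drive one phrase-enrollment session to completion.

    `client` is a hypothetical MRCPv2 client whose methods send the
    like-named requests; it is not defined by the specification.
    Returns True if the phrase was committed, False if aborted.
    """
    client.start_phrase_enrollment(phrase_id=phrase_id,
                                   personal_grammar_uri=grammar_uri)
    for _ in range(max_attempts):
        # Each RECOGNIZE captures one repetition; the RECOGNITION-COMPLETE
        # event reports how many consistent repetitions are still needed.
        event = client.recognize()
        if event.get("Num-Repetitions-Still-Needed") == 0:
            # Enough consistent repetitions: commit the phrase.
            return client.end_phrase_enrollment()
    # Caller could not produce enough consistent repetitions: abort.
    return client.end_phrase_enrollment(abort_phrase_enrollment=True)
```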
  
  
 9.18.     MODIFY-PHRASE 
     
    The MODIFY-PHRASE method sent from the client to the server is used 
    to change the phrase ID, NL phrase and/or weight for a given phrase 
    in a personal grammar. 
     
    If no fields are supplied, calling this method has no effect; the 
    request is silently ignored. 
     
    Example: 
    C->S:  MRCP/2.0 123 MODIFY-PHRASE 543265   
           Channel-Identifier: 32AECB23433801@speechrecog 
           Personal-Grammar-URI: <personal grammar uri> 
           Phrase-Id: <phrase id> 
           New-Phrase-Id: <new phrase id> 
           Phrase-NL: <NL phrase> 
           Weight: 1 
  
    S->C:  MRCP/2.0 49 543265 200 COMPLETE  
           Channel-Identifier: 32AECB23433801@speechrecog 
  
  

  

 9.19.     DELETE-PHRASE 
     
    The DELETE-PHRASE method sent from the client to the server is used 
    to delete a phrase in a personal grammar added through voice 
    enrollment or text enrollment. If the specified phrase does not 
    exist, this method has no effect and is silently ignored. 
     
    Example: 
    C->S:  MRCP/2.0 123 DELETE-PHRASE 543266 
           Channel-Identifier: 32AECB23433801@speechrecog 
           Personal-Grammar-URI: <personal grammar uri> 
           Phrase-Id: <phrase id> 
     
    S->C:  MRCP/2.0 49 543266 200 COMPLETE 
           Channel-Identifier: 32AECB23433801@speechrecog 
     
 9.20.     INTERPRET  
              
    The INTERPRET method from the client to the server takes as input an 
    Interpret-Text header field, containing the text for which the 
    semantic interpretation is desired, and returns, via the 
    INTERPRETATION-COMPLETE event, an interpretation result that is 
    very similar to the one returned from a RECOGNIZE method 
    invocation.  Only those portions of the result that pertain to 
    acoustic matching are excluded.  The Interpret-Text header field 
    MUST be included in the INTERPRET request. 
              
    Recognizer grammar data is treated in the same way as it is when 
    issuing a RECOGNIZE method call.  
     
    If a RECOGNIZE, RECORD or another INTERPRET operation is already in 
    progress, invoking this method will cause the response to have a 
    status code of 402, "Method not valid in this state", and a COMPLETE 
    request state.  
              
    Example: 
              
    C->S:  MRCP/2.0 123 INTERPRET 543266 
           Channel-Identifier: 32AECB23433801@speechrecog  
           Interpret-Text: may I speak to Andre Roy  
           Content-Type: application/srgs+xml   
           Content-Id: <request1@form-level.store>   
           Content-Length: 104   
                          
           <?xml version="1.0"?>   
           <!-- the default grammar language is US English -->   
           <grammar xml:lang="en-US" version="1.0">   
              <!-- single language attachment to tokens -->   
                <rule id="yes">   
                   <one-of>   
                     <item xml:lang="fr-CA">oui</item>   
  

                     <item xml:lang="en-US">yes</item>   
                   </one-of>    
                </rule>    
                          
              <!-- single language attachment to a rule expansion -->   
                <rule id="request">   
                     may I speak to   
                     <one-of xml:lang="fr-CA">   
                          <item>Michel Tremblay</item>   
                          <item>Andre Roy</item>   
                     </one-of>   
                </rule>   
           </grammar>   
                   
    S->C:  MRCP/2.0 49 543266 200 IN-PROGRESS 
           Channel-Identifier: 32AECB23433801@speechrecog 
                        
    S->C:  MRCP/2.0 123 INTERPRETATION-COMPLETE 543266 COMPLETE 
           Channel-Identifier: 32AECB23433801@speechrecog 
           Completion-Cause: 000 success   
           Content-Type: application/x-nlsml   
           Content-Length: 276   
                          
           <?xml version="1.0"?>   
           <result   x-model="http://IdentityModel"   
                     xmlns:xf="http://www.w3.org/2000/xforms"   
                     grammar="session:request1@form-level.store">   
                <interpretation>   
                     <xf:instance name="Person">   
                          <Person>   
                               <Name> Andre Roy </Name>   
                          </Person>   
                     </xf:instance>   
                     <input>   may I speak to Andre Roy </input>   
                </interpretation>   
           </result>  
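The start lines in the exchange above follow a fixed token layout: requests carry a method name and request-id, responses carry a request-id, status code, and request state, and events carry an event name, request-id, and request state. As a non-normative sketch, a client might split them as follows (the function name and dict keys are illustrative):

```python
def parse_mrcp_start_line(line):
    """Split an MRCPv2 start line, as used in the examples above, into
    its fields.  Non-normative sketch; dict keys are illustrative."""
    parts = line.split()
    version, length = parts[0], int(parts[1])
    if len(parts) == 4:                        # request: method, request-id
        return {"version": version, "length": length,
                "method": parts[2], "request-id": parts[3]}
    if len(parts) == 5 and parts[2].isdigit():  # response: id, status, state
        return {"version": version, "length": length,
                "request-id": parts[2], "status": int(parts[3]),
                "request-state": parts[4]}
    if len(parts) == 5:                        # event: name, id, state
        return {"version": version, "length": length,
                "event": parts[2], "request-id": parts[3],
                "request-state": parts[4]}
    raise ValueError("unrecognized start line: " + line)
```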
     
 9.21.     INTERPRETATION-COMPLETE  
              
    This event from the recognition resource to the client indicates 
    that the INTERPRET operation is complete.  The interpretation result 
    is sent in the body of the MRCP message.  The request state MUST be 
    set to COMPLETE. 
     
    The Completion-Cause header field MUST be included in this event 
    and MUST be set to an appropriate value from the list of cause 
    codes. 
              
    Example: 
              
    C->S:  MRCP/2.0 123 INTERPRET 543266 
           Channel-Identifier: 32AECB23433801@speechrecog  
  

           Interpret-Text: may I speak to Andre Roy  
           Content-Type: application/srgs+xml   
           Content-Id: <request1@form-level.store>   
           Content-Length: 104   
                          
           <?xml version="1.0"?>   
           <!-- the default grammar language is US English -->   
           <grammar xml:lang="en-US" version="1.0">   
              <!-- single language attachment to tokens -->   
                <rule id="yes">   
                   <one-of>   
                     <item xml:lang="fr-CA">oui</item>   
                     <item xml:lang="en-US">yes</item>   
                   </one-of>    
                </rule>    
                          
              <!-- single language attachment to a rule expansion -->   
                <rule id="request">   
                     may I speak to   
                     <one-of xml:lang="fr-CA">   
                          <item>Michel Tremblay</item>   
                          <item>Andre Roy</item>   
                     </one-of>   
                </rule>       
           </grammar>   
                   
    S->C:  MRCP/2.0 49 543266 200 IN-PROGRESS 
           Channel-Identifier: 32AECB23433801@speechrecog 
                        
    S->C:  MRCP/2.0 123 INTERPRETATION-COMPLETE 543266 COMPLETE 
           Channel-Identifier: 32AECB23433801@speechrecog 
           Completion-Cause: 000 success   
           Content-Type: application/x-nlsml   
           Content-Length: 276   
                          
           <?xml version="1.0"?>   
           <result   x-model="http://IdentityModel"   
                     xmlns:xf="http://www.w3.org/2000/xforms"   
                     grammar="session:request1@form-level.store">   
                <interpretation>   
                     <xf:instance name="Person">   
                          <Person>   
                               <Name> Andre Roy </Name>   
                          </Person>   
                     </xf:instance>   
                     <input>   may I speak to Andre Roy </input>   
                </interpretation>   
           </result>           
              


  

 9.22.     DTMF Detection 
  
    Digits received as DTMF tones are delivered to the automatic speech 
    recognition (ASR) engine in the RTP stream according to RFC 2833. 
    The recognizer MUST support RFC 2833 to recognize digits, and it 
    MAY also support recognizing DTMF tones carried in the audio 
    itself. 
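An RFC 2833 telephone-event payload is four bytes: an event code, a byte holding the end (E) bit, a reserved bit, and a 6-bit volume, then a 16-bit duration in network byte order. A minimal decoding sketch (the function name and returned field names are illustrative, not from either specification):

```python
import struct

# Event codes 0-9 are the digits, 10 is '*', 11 is '#'.
DTMF_EVENTS = {i: str(i) for i in range(10)}
DTMF_EVENTS.update({10: "*", 11: "#"})

def parse_telephone_event(payload: bytes):
    """Decode a 4-byte RFC 2833 telephone-event payload."""
    event, er_vol, duration = struct.unpack("!BBH", payload)
    return {
        "digit": DTMF_EVENTS.get(event, "?"),
        "end": bool(er_vol & 0x80),    # E bit: final packet of the event
        "volume": er_vol & 0x3F,       # power level, 0-63 dBm0 (as positive)
        "duration": duration,          # in RTP timestamp units
    }
```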
     
 10.  Recorder Resource 
    This resource captures received audio and video and stores it as a 
    file. Its main applications are capturing speech audio that may be 
    submitted for recognition at a later time, and recording voice or 
    video mails. Both applications require functionality above and 
    beyond that specified by protocols such as RTSP, for example audio 
    end-pointing (i.e., detecting speech or silence). Detection of 
    speech or silence may be required to start or stop a recording. 
    Support for video is optional and is mainly intended for capturing 
    video mails, which may require the speech or audio processing 
    mentioned above. 
     
 10.1.     Recorder State Machine 
     
                Idle                   Recording 
                State                  State 
                 |                       | 
                 |---------RECORD------->| 
                 |                       | 
                 |<------STOP------------| 
                 |                       | 
                 |<--RECORD-COMPLETE-----| 
                 |                       | 
                 |              |--------| 
                 |       START-OF-SPEECH | 
                 |              |------->| 
                 |                       | 
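The transitions in the diagram above can be captured in a small table-driven sketch; an event that is not legal in the current state loosely corresponds to the 402 "Method not valid in this state" response described later. The class and state names are illustrative:

```python
class RecorderStateMachine:
    """Sketch of the recorder state diagram above (names illustrative)."""
    TRANSITIONS = {
        ("idle", "RECORD"): "recording",
        ("recording", "STOP"): "idle",
        ("recording", "RECORD-COMPLETE"): "idle",
        ("recording", "START-OF-SPEECH"): "recording",  # self-transition
    }

    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            # Loosely corresponds to 402 "Method not valid in this state".
            raise ValueError(f"{event} not valid in state {self.state}")
        self.state = self.TRANSITIONS[key]
        return self.state
```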
          
     
     
 10.2.     Recorder Methods 
    The recorder supports the following methods. 
     
      recorder-Method     =    "RECORD"              ; A 
                          /    "STOP"                ; B 
                          /    "START-INPUT-TIMERS"  ; C 
     
     
 10.3.     Recorder Events 
     
    The recorder may generate the following events. 
     
      recorder-Event      =    "START-OF-SPEECH"    ; D 
                          /    "RECORD-COMPLETE"    ; E 
  

     
 10.4.     Recorder Header Fields 
     
    Recorder messages may contain header fields carrying request 
    options and information that augment the method, response, or 
    event message they are associated with. 
     
      recorder-header     =    sensitivity-level         
                          /    no-input-timeout          
                          /    completion-cause          
                          /    completion-reason 
                          /    failed-uri                
                          /    failed-uri-cause          
                          /    record-uri 
                          /    media-type                
                          /    max-time                  
                          /    final-silence             
                          /    capture-on-speech 
                          /    ver-buffer-utterance 
                          /    start-input-timers 
                          /    new-audio-channel 
  
    Header field          where     s g A B C D E 
    _______________________________________________ 
    Sensitivity-Level       R       o o o - - - - 
    No-Input-Timeout        R       o o o - - - - 
    Completion-Cause        R       - - - - - - m 
    Completion-Cause       2XX      - - - o - - - 
    Completion-Cause       4XX      - - - m - - - 
    Completion-Reason       R       - - - - - - m 
    Completion-Reason      2XX      - - - o - - - 
    Completion-Reason      4XX      - - - m - - - 
    Start-Input-Timers      R       - - - o - - - 
    Fetch-Timeout           R       o o o - - - - 
    Failed-URI              R       - - - - - - o 
    Failed-URI             4XX      - - o - - - - 
    Failed-URI-Cause        R       - - - - - - o 
    Failed-URI-Cause       4XX      - - o - - - - 
    New-Audio-Channel       R       - - o - - - - 
    Ver-Buffer-Utterance    R       o o o - - - - 
    Capture-On-Speech       R       o o o - - - - 
    Media-Type              R       - - m - - - - 
    Max-Time                R       o o o - - - - 
    Final-Silence           R       o o o - - - - 
    Record-URI              R       - - m - - - - 
     
     
    Legend:   (s) - SET-PARAMS, (g) - GET-PARAMS, (A) - RECORD, (B) - 
    STOP, (C) - START-INPUT-TIMERS, (D) - START-OF-SPEECH, (E) - RECORD-
    COMPLETE, (o) - Optional (refer to text for further constraints), 
    (m) - Mandatory, (R) - Request, (r) - Response 
  

  
     
 Sensitivity Level    
     
    To filter out background noise and not mistake it for speech, the 
    recorder may support a variable level of sound sensitivity. The 
    sensitivity-level header allows the client to set this value on the 
    recorder. This header field MAY occur in RECORD, SET-PARAMS or GET-
    PARAMS. A higher value for this field means higher sensitivity. The 
    default value for this field is platform specific. 
     
      sensitivity-level   =    "Sensitivity-Level" ":" 1*DIGIT CRLF 
  
 No Input Timeout 
     
    When the recorder is started and no speech is detected for a 
    certain period of time, the recorder can send a RECORD-COMPLETE 
    event to the client and terminate the record operation. The no-
    input-timeout header field sets this timeout value. The value is 
    in milliseconds. This header field MAY occur in RECORD, SET-PARAMS 
    or GET-PARAMS. The value for this field ranges from 0 to 
    MAXTIMEOUT, where MAXTIMEOUT is platform specific. The default 
    value for this field is platform specific. 
     
      no-input-timeout    =    "No-Input-Timeout" ":" 1*DIGIT CRLF 
  
 Completion Cause 
     
    This header field MUST be part of a RECORD-COMPLETE event coming 
    from the recorder resource to the client. It indicates the reason 
    for the RECORD method completion. This header field MUST also be 
    sent in RECORD responses that return with a failure status and a 
    COMPLETE state. 
     
      completion-cause    =    "Completion-Cause" ":" 1*DIGIT SP 
                               1*VCHAR CRLF 
     
      Cause-Code Cause-Name         Description 
     
        000     success-silence     RECORD completed with silence at 
                                    the end. 
        001     success-maxtime     RECORD completed after reaching the 
                                    maximum recording time specified in 
                                    the RECORD method. 
        002     noinput-timeout     RECORD failed due to no input. 
        003     uri-failure         Failure accessing the record URI. 
        004     error               RECORD request terminated 
                                    prematurely due to a recorder error. 
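The cause table above can be checked mechanically when a client parses a Completion-Cause header value such as "000 success-silence". A non-normative sketch (the function name is illustrative):

```python
# Recorder completion causes from the table above, keyed by cause code.
RECORDER_COMPLETION_CAUSES = {
    0: "success-silence",
    1: "success-maxtime",
    2: "noinput-timeout",
    3: "uri-failure",
    4: "error",
}

def parse_completion_cause(value):
    """Parse a Completion-Cause value like '000 success-silence' and
    validate it against the recorder cause table."""
    code_str, _, name = value.partition(" ")
    code = int(code_str)
    expected = RECORDER_COMPLETION_CAUSES.get(code)
    if expected is None or name != expected:
        raise ValueError("unknown or mismatched cause: " + value)
    return code, name
```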
  
 Completion Reason 
     
  

    This header field MAY be specified in a RECORD-COMPLETE event coming 
    from the recorder resource to the client. It contains text 
    describing the reason for the RECORD request completion and can be 
    used to communicate the reason for a failure. 
     
      completion-reason   =    "Completion-Reason" ":"  
                               quoted-string CRLF 
     
 Failed URI 
     
    When a RECORD method needs to post the audio to a URI and access to 
    the URI fails, the server SHOULD provide the failed URI in this 
    header field in the method response. 
     
      failed-uri               =    "Failed-URI" ":" Uri CRLF 
     
 Failed URI Cause 
     
    When a RECORD method needs to post the audio to a URI and access to 
    the URI fails, the server SHOULD provide the URI-specific or 
    protocol-specific response code through this header field in the 
    method response. This field has been defined as alphanumeric to 
    accommodate all protocols, some of which might have a response 
    string instead of a numeric response code. 
     
      failed-uri-cause         =    "Failed-URI-Cause" ":" 1*ALPHANUM  
                                    CRLF 
  
 Record URI 
     
    When the RECORD method contains this header field, the server MUST 
    capture the audio and store it. If the header field is present but 
    empty, the server MUST store the audio locally and generate a URI 
    that points to it; this URI is then returned in the STOP response 
    or the RECORD-COMPLETE event. If the header field in the RECORD 
    method specifies a URI, the server MUST capture and store the 
    audio at that location. If this header field is not present in the 
    RECORD request, the server MUST capture the audio and send it in 
    the STOP response or the RECORD-COMPLETE event as a message body. 
    In this case, the message carrying the audio content has this 
    header field with a cid value pointing to the Content-ID of the 
    message body part. 
     
      record-uri               =    "Record-URI" ":" Uri CRLF 
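The three Record-URI cases (URI supplied, header present but empty, header absent) reduce to a simple decision. The sketch below is illustrative only: `generate_local_uri` stands in for server-side storage, and the cid: value is a placeholder, not a normative form.

```python
def record_uri_disposition(record_uri, generate_local_uri):
    """Decide where captured audio is delivered, per the Record-URI
    rules above.  `generate_local_uri` is a hypothetical callback that
    stores the audio locally and returns a URI for it."""
    if record_uri is None:
        # Header absent: audio travels in the STOP response or
        # RECORD-COMPLETE event body; "cid:audio@server" is a placeholder.
        return ("message-body", "cid:audio@server")
    if record_uri == "":
        # Header present but empty: server stores locally, makes a URI.
        return ("server-generated", generate_local_uri())
    # Header carries a URI: store the audio at that location.
    return ("client-supplied", record_uri)
```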
     
 Media Type 
     
    A RECORD method MUST contain this header field, which specifies to 
    the server the file format in which to store the captured audio or 
    video. 
     
      Media-type               =    "Media-Type" ":" media-type CRLF 
  

  
 Max Time 
     
    When the recorder is started, this header field specifies the 
    maximum length of the recording, calculated from the time the 
    actual capture and store begins, which is not necessarily the time 
    the RECORD method is received. After this time, the recording 
    stops and the server MUST return a RECORD-COMPLETE event to the 
    client with a request-state of "COMPLETE". This header field MAY 
    occur in RECORD, SET-PARAMS or GET-PARAMS. The value for this 
    field ranges from 0 to MAXTIMEOUT, where MAXTIMEOUT is platform 
    specific. A value of zero means infinity, and hence the recording 
    will continue until one of the other stop conditions is met. The 
    default value for this field is 0. 
     
      max-time  =    "Max-Time" ":" 1*DIGIT CRLF 
  
 Final Silence 
     
    When the recorder is started and the actual capture begins, this 
    header field specifies the length of silence in the audio that is 
    to be interpreted as the end of the recording. This header field 
    MAY occur in RECORD, SET-PARAMS or GET-PARAMS. The value for this 
    field ranges from 0 to MAXTIMEOUT, where MAXTIMEOUT is platform 
    specific. A value of zero means infinity, and hence the recording 
    will continue until one of the other stop conditions is met. The 
    default value for this field is platform specific. 
     
      final-silence  =    "Final-Silence" ":" 1*DIGIT CRLF 
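Taken together, no-input-timeout, max-time, and final-silence define when a recording ends and with which completion cause. A non-normative sketch of that decision follows; it treats a value of 0 for max-time or final-silence as "no limit" per the text above, and the same 0-disables behavior for no-input-timeout is an assumption of this sketch, not a statement of the spec.

```python
def record_stop_cause(speech_started, elapsed_ms, silence_ms,
                      no_input_timeout, max_time, final_silence):
    """Return the completion cause that ends the recording, or None.
    All times are in milliseconds; 0 disables the corresponding limit."""
    if (not speech_started and no_input_timeout
            and elapsed_ms >= no_input_timeout):
        return "002 noinput-timeout"       # no speech before the timeout
    if max_time and elapsed_ms >= max_time:
        return "001 success-maxtime"       # hit the maximum recording time
    if speech_started and final_silence and silence_ms >= final_silence:
        return "000 success-silence"       # trailing silence ends capture
    return None                            # keep recording
```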
  
 Capture On Speech 
     
    When the recorder is started, this header field specifies whether 
    the recorder should start capturing immediately (false) or wait 
    for the end-pointing functionality to detect speech (true) before 
    it starts capturing. This header field MAY occur in RECORD, SET-
    PARAMS or GET-PARAMS. The value for this field is a Boolean. The 
    default value for this field is false. 
     
    capture-on-speech     =    "Capture-On-Speech" ":" boolean-value 
                               CRLF 
     
 Ver-Buffer-Utterance 
     
    This header field is the same as the one described for the 
    verification resource. It tells the server to buffer the utterance 
    associated with this recording request into the verification 
    buffer. Sending this header field is not valid if the verification 
    buffer has not been instantiated for the session. This buffer is 
    shared across resources within a session; it is instantiated when 
    a verification resource is added to the session and is released 
    when that resource is released from the session. 
  

  
 Start Input Timers 
     
    This header field MAY be sent as part of the RECORD request. A 
    value of false tells the recorder resource to start the operation 
    but not to start the no-input timer yet. The recorder resource 
    should not start the timer until the client sends a START-INPUT-
    TIMERS request to the recorder resource. This is useful in the 
    scenario where the recorder and synthesizer resources are not part 
    of the same session. Here, when a kill-on-barge-in prompt is being 
    played, the client may want the RECORD request to be 
    simultaneously active so that it can detect and implement kill-on-
    barge-in, but may not want the recorder resource to start the no-
    input timer until the prompt is finished. The default value is 
    "true". 
     
      start-input-timers  =    "Start-Input-Timers" ":" 
                                    boolean-value CRLF 
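The deferred-start behavior can be sketched as a small timer object, where receiving START-INPUT-TIMERS corresponds to calling start_input_timers(). The class and its API are illustrative, and the injectable clock exists only to make the sketch testable.

```python
import time

class NoInputTimer:
    """Sketch of the deferred no-input timer described above.
    Times are in milliseconds; names are illustrative, not from the spec."""

    def __init__(self, timeout_ms, start_timers=True, clock=time.monotonic):
        self.timeout_ms = timeout_ms
        self.clock = clock
        # With Start-Input-Timers: false, the timer stays unstarted.
        self.started_at = clock() if start_timers else None

    def start_input_timers(self):
        """Corresponds to receiving a START-INPUT-TIMERS request."""
        if self.started_at is None:
            self.started_at = self.clock()

    def expired(self, speech_detected):
        """True once the timeout has elapsed with no speech detected."""
        if self.started_at is None or speech_detected:
            return False
        return (self.clock() - self.started_at) * 1000.0 >= self.timeout_ms
```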
  
 New Audio Channel 
     
    This header field is the same as the one described for the 
    Recognizer resource. 
     
  
     
 10.5.     Recorder Message Body 
    The STOP response or the RECORD-COMPLETE event MAY contain a 
    message body carrying the captured audio. This happens if the 
    RECORD method did not have a Record-URI header field. In this 
    case, the message carrying the audio content has a Record-URI 
    header field with a cid value pointing to the message part that 
    contains the recorded audio. 
     
 10.6.     RECORD 
    The RECORD method moves the recorder resource to the recording 
    state. Depending on the header fields specified in the RECORD 
    method, the resource may start recording the audio immediately or 
    wait for the end-pointing functionality to detect speech in the 
    audio. It then saves the audio to the URI supplied in the Record-
    URI header field. If the Record-URI is not specified, the server 
    MUST capture the media onto a local disk and return a URI pointing 
    to the recorded audio in the RECORD-COMPLETE event. The server 
    MUST support the HTTP and file URI schemes. 
     
    If a RECORD operation is already in progress, invoking this method 
    will cause the response to have a status code of 402, "Method not 
    valid in this state", and a COMPLETE request state. 
     
    If the Record-URI is not valid, a status code of 404, "Illegal 
    Value for Header", will be returned in the response. If it is 
    impossible for the server to create the requested file, a status 
    code of 407, "Method or Operation Failed", will be returned. 
     
    When the recording operation is initiated, the response will 
    indicate an IN-PROGRESS request state.  The server MAY generate a 
    subsequent START-OF-SPEECH event when speech is detected.  Upon 
    completion of the recording operation, the server will generate a 
    RECORD-COMPLETE event. 
     
    Example:  
     
           C->S:MRCP/2.0 386 RECORD 543257 
                Channel-Identifier: 32AECB23433802@recorder           
                Record-URI: file://mediaserver/recordings/myfile.wav   
                Capture-On-Speech: true 
                Final-Silence: 300 
                Max-Time: 6000 
                
           S->C:MRCP/2.0 48 543257 200 IN-PROGRESS 
                Channel-Identifier: 32AECB23433802@recorder 
     
           S->C:MRCP/2.0 49 START-OF-SPEECH 543257 IN-PROGRESS 
                Channel-Identifier: 32AECB23433802@recorder 
                 
           S->C:MRCP/2.0 54 RECORD-COMPLETE 543257 COMPLETE 
                Channel-Identifier: 32AECB23433802@recorder 
                Completion-Cause: 000 success-silence 
                Record-URI: file://mediaserver/recordings/myfile.wav 
     
 10.7.     STOP 
    The STOP method moves the recorder from the recording state back to 
    the idle state. If the recording was a success the STOP response 
    contains a Record-URI header pointing to the recorded audio file on 
    the server or to a MIME part in the body of the message containing 
    the recorded audio file. The STOP method may have a Trim-Length 
    header field, in which case the specified length of audio is trimmed 
    from the end of the recording after the stop.  
     
     
    Example:  
     
           C->S:MRCP/2.0 386 RECORD 543257 
                Channel-Identifier: 32AECB23433802@recorder           
                Record-URI: file://mediaserver/recordings/myfile.wav   
                Capture-On-Speech: true 
                Final-Silence: 300 
                Max-Time: 6000 
                
           S->C:MRCP/2.0 48 543257 200 IN-PROGRESS 
                Channel-Identifier: 32AECB23433802@recorder 
     
  

           S->C:MRCP/2.0 49 START-OF-SPEECH 543257 IN-PROGRESS 
                Channel-Identifier: 32AECB23433802@recorder 
                 
           C->S:MRCP/2.0 386 STOP 543258 
                Channel-Identifier: 32AECB23433802@recorder 
                Trim-Length: 200 
                
           S->C:MRCP/2.0 48 543258 200 COMPLETE 
                Channel-Identifier: 32AECB23433802@recorder 
                Completion-Cause: 000 success 
                Record-URI: file://mediaserver/recordings/myfile.wav 
     
     
 10.8.     RECORD-COMPLETE 
    If the recording completes due to no-input, silence after speech, 
    or max-time, the server MUST generate the RECORD-COMPLETE event to 
    the client with a request-state of "COMPLETE". If the recording 
    was a success, the RECORD-COMPLETE event contains a Record-URI 
    header field pointing to the recorded audio file on the server or 
    to a MIME part in the body of the message containing the recorded 
    audio. 
     
    Example:  
     
           C->S:MRCP/2.0 386 RECORD 543257 
                Channel-Identifier: 32AECB23433802@recorder           
                Record-URI: file://mediaserver/recordings/myfile.wav   
                Capture-On-Speech: true 
                Final-Silence: 300 
                Max-Time: 6000 
                
           S->C:MRCP/2.0 48 543257 200 IN-PROGRESS 
                Channel-Identifier: 32AECB23433802@recorder 
     
           S->C:MRCP/2.0 49 START-OF-SPEECH 543257 IN-PROGRESS 
                Channel-Identifier: 32AECB23433802@recorder 
                 
           S->C:MRCP/2.0 48 RECORD-COMPLETE 543257 COMPLETE 
                Channel-Identifier: 32AECB23433802@recorder 
                Completion-Cause: 000 success-silence 
                Record-URI: file://mediaserver/recordings/myfile.wav 
                 
                 
 10.9.     START-INPUT-TIMERS 
     
    This request is sent from the client to the recorder resource when 
    it knows that a kill-on-barge-in prompt has finished playing. This 
    is useful in the scenario where the recorder and synthesizer 
    resources are not in the same session. Here, when a kill-on-barge-
    in prompt is being played, the client wants the RECORD request to 
    be simultaneously active so that it can detect and implement kill-
    on-barge-in, but does not want the recorder resource 
  

    to start the no-input timer until the prompt is finished. The 
    Start-Input-Timers header field in the RECORD request allows the 
    client to indicate whether the timers should be started. In the 
    above case, the recorder resource should not start the timers 
    until the client sends a START-INPUT-TIMERS method to the 
    recorder. 
     


 11.  Speaker Verification and Identification 
     
    This section describes the methods, responses and events needed for 
    doing Speaker Verification / Identification. 
  
    Speaker verification is a voice authentication feature that can be 
    used to identify the speaker in order to grant the user access to 
    sensitive information and transactions.  To do this, a recorded 
    utterance is compared to a voiceprint previously stored for that 
    user.  Verification consists of two phases: a designation phase to 
    establish the claimed identity of the caller and an execution phase 
    in which a voiceprint is either created (training) or used to 
    authenticate the claimed identity (verification). The resource name 
    is 'speakverify'. 
     
    Speaker identification identifies the speaker from a set of valid 
    users, such as family members.  It is also sometimes referred to 
    as Multi-Verification.  Identification can be performed on a small 
    set of users or for a large population.  This feature is useful 
    for applications where multiple users share the same account 
    number but the individual speaker must be uniquely identified 
    from the group.  Speaker identification is also done in two 
    phases, a designation phase and an execution phase. 
     
    A speaker verification resource may share the same session as an 
    existing recognizer resource, or a speaker verification session 
    can be set up to operate in standalone mode, without a recognizer 
    resource sharing the same session.  In order to share the same 
    session, the SDP/SIP INVITE message for the verification resource 
    MUST also include the recognizer resource request.  Otherwise, an 
    independent verification resource, running on the same physical 
    server or a separate one, will be set up. 
     
    Some of the speaker verification methods described below apply only 
    to a specific mode of operation. 
     
    The verification resource supports buffering, allowing the client to 
    buffer the verification data from an utterance and then process that 
    utterance later.  This is different from collecting waveforms and 
    processing them using the VERIFY method, which operates directly on 
    the incoming audio stream, because this buffering mechanism does not 
    simply accumulate raw utterance audio in a buffer.  The buffer is 
    owned by the verification resource but shares write access with 
    other input resources such as the recognizer and recorder resources.  
    When both the recognition and verification resources share the same 
    session, additional information gathered by the recognition resource 
    may be saved with these buffers to improve verification performance.  
    The buffer can be cleared by a CLEAR-BUFFER request from the client 
    and is freed when the 'speakverify' resource is freed.  
     

  

 11.1.     Speaker Verification State Machine  
     
    Speaker verification has a concept of training and verification 
    sessions.  Starting one of these sessions does not change the state 
    of the verification resource, i.e., it remains IDLE.  Once a 
    verification or training session is started, utterances are trained 
    or verified by calling the VERIFY or VERIFY-FROM-BUFFER method.  The 
    state of the verification resource goes from the IDLE to the 
    VERIFYING state each time VERIFY or VERIFY-FROM-BUFFER is called. 
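    The state transitions described above can be sketched as follows.  
    This is an illustrative, non-normative sketch; the class and method 
    names are hypothetical and not part of the protocol. 

    ```python
    # Non-normative sketch of the verification resource state machine:
    # starting a session leaves the resource IDLE; VERIFY and
    # VERIFY-FROM-BUFFER move it to VERIFYING; completion returns it
    # to IDLE.
    class VerificationResource:
        def __init__(self):
            self.state = "IDLE"
            self.session_mode = None   # "train" or "verify" once set

        def start_session(self, mode):
            # Starting a session does not change the resource state.
            self.session_mode = mode
            assert self.state == "IDLE"

        def verify(self):
            # VERIFY / VERIFY-FROM-BUFFER move the resource to VERIFYING.
            if self.session_mode is None:
                raise RuntimeError("out-of-sequence: no session started")
            self.state = "VERIFYING"

        def complete(self):
            # On VERIFICATION-COMPLETE the resource returns to IDLE.
            self.state = "IDLE"

    r = VerificationResource()
    r.start_session("verify")
    print(r.state)   # IDLE: starting the session changes no state
    r.verify()
    print(r.state)   # VERIFYING
    r.complete()
    print(r.state)   # IDLE
    ```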
     
    As mentioned above, the verification resource has a verification 
    buffer associated with it. This allows the buffering of speech 
    utterances for the purposes of verification, identification or 
    training from the buffered speech. This buffer is owned by the 
    verification resource but other input resources such as the 
    recognition resource or recorder resource share write access to it. 
    This allows the speech received as part of a recognition or 
    recording scenario to be later used for verification, identification 
    or training. 
     
    Note that access to the buffer is limited to one operation at a 
    time.  Hence, while one resource is performing a read, write, or 
    delete operation, such as a RECOGNIZE with Ver-Buffer-Utterance 
    turned on, another operation involving the buffer, such as a 
    CLEAR-BUFFER, would fail with a status of 402.  
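    The one-operation-at-a-time rule above can be sketched as follows 
    (non-normative; the class is hypothetical, and only the 200/402 
    status values come from this document): 

    ```python
    # Non-normative sketch: a second buffer operation fails with status
    # 402 while another buffer operation is still in progress.
    class VerificationBuffer:
        def __init__(self):
            self.busy = False

        def begin(self, operation):
            if self.busy:
                return 402          # buffer already in use
            self.busy = True
            return 200

        def end(self):
            self.busy = False

    buf = VerificationBuffer()
    print(buf.begin("RECOGNIZE with Ver-Buffer-Utterance"))  # 200
    print(buf.begin("CLEAR-BUFFER"))                         # 402
    buf.end()
    print(buf.begin("CLEAR-BUFFER"))                         # 200
    ```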
     
 11.2.     Speaker Verification Methods 
     
    Speaker Verification supports the following methods. 
      verification-method  = "START-SESSION"      ; A 
                          / "END-SESSION"         ; B 
                          / "QUERY-VOICEPRINT"    ; C 
                          / "DELETE-VOICEPRINT"   ; D 
                          / "VERIFY"              ; E 
                          / "VERIFY-FROM-BUFFER"  ; F  
                          / "VERIFY-ROLLBACK"     ; G 
                          / "STOP"                ; H 
                          / "CLEAR-BUFFER"        ; I 
                          / "START-INPUT-TIMERS"  ; J 
                          / "GET-INTERMEDIATE-RESULT" ; K 
  
    These methods allow the client to control the mode and target of 
    verification or identification operations within the context of a 
    session. All the verification input cycles that occur within a 
    session may be used to create, update, or validate against the 
    voiceprint specified during the session. At the beginning of each 
    session the verification resource is reset to a known state. 
     
    Verification/identification operations can be executed against live 
    or buffered audio.  The verification resource provides methods for 
    collecting and evaluating live audio data, and methods for 
    controlling the verification resource and adjusting its configured 
    behavior. 
     
    There are no specific methods for collecting buffered audio data.  
    This is accomplished by calling VERIFY, RECOGNIZE or RECORD, as 
    appropriate for the resource, with the Ver-Buffer-Utterance header 
    field.  Then, when the following method is called, verification is 
    performed using the set of buffered audio. 
     
           1. VERIFY-FROM-BUFFER 
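    The buffered-audio flow above can be sketched as follows.  This is a 
    non-normative illustration: the message framing is greatly 
    simplified (real MRCPv2 requests also carry a message length and 
    channel identifier), and the helper function is hypothetical. 

    ```python
    # Non-normative sketch: the client first buffers audio while
    # recognizing, then verifies against the buffered utterances.
    def mrcp_request(method, request_id, headers):
        # Build a simplified MRCP-style request (framing abbreviated).
        lines = ["MRCP/2.0 %s %d" % (method, request_id)]
        lines += ["%s: %s" % (k, v) for k, v in headers.items()]
        return "\r\n".join(lines) + "\r\n\r\n"

    # 1. Collect audio while recognizing, buffering it for later
    #    verification via the Ver-Buffer-Utterance header field.
    req1 = mrcp_request("RECOGNIZE", 1, {"Ver-Buffer-Utterance": "true"})

    # 2. Later, verify against the buffered utterances.
    req2 = mrcp_request("VERIFY-FROM-BUFFER", 2, {})

    print(req1.splitlines()[1])   # Ver-Buffer-Utterance: true
    print(req2.splitlines()[0])   # MRCP/2.0 VERIFY-FROM-BUFFER 2
    ```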
     
    The following methods provide controls for verification of live 
    audio utterances: 
     
           1. VERIFY 
           2. START-INPUT-TIMERS 
     
    The following methods provide controls for configuring the 
    verification resource and for establishing resource states: 
     
           1. START-SESSION 
           2. END-SESSION 
           3. QUERY-VOICEPRINT 
           4. DELETE-VOICEPRINT 
           5. VERIFY-ROLLBACK 
           6. STOP 
           7. CLEAR-BUFFER 
     
    The following method allows polling a verification operation in 
    progress for intermediate results. 
     
           1. GET-INTERMEDIATE-RESULTS 
       
 11.3.     Verification Events 
     
    Speaker Verification may generate the following events. 
     
      verification-event   =  "VERIFICATION-COMPLETE" ; L 
                          /   "START-OF-SPEECH"       ; M 
     
 11.4.     Verification Header Fields 
     
    A speaker verification message may contain header fields carrying 
    request options and information to augment the Request, Response or 
    Event message it is associated with.  
     
    verification-header  =     repository-uri            
                          /    voiceprint-identifier     
                          /    verification-mode    
                          /    adapt-model               
  

                          /    abort-model               
                          /    security-level          
                          /    num-min-verification-phrases 
                          /    num-max-verification-phrases 
                          /    no-input-timeout            
                          /    save-waveform               
                          /    waveform-uri                           
                          /    voiceprint-exists           
                          /    ver-buffer-utterance          
                          /    input-waveform-uri            
                          /    completion-cause  
                          /    completion-reason 
                          /    speech-complete-timeout           
                          /    new-audio-channel 
                          /    abort-verification 
                          /    start-input-timers 
                                 
                           
                           
    Header field          where    s g A B C D E F G H I J K L M 
    _____________________________________________________________ 
    Repository-URI          R      - - m - m m - - - - - - - - - 
    Voiceprint-Identifier   R      - - m - m m - - - - - - - - - 
    Verification-Mode       R      o o o - - - - - - - - - - - - 
    Adapt-Model             R      o o o - - - - - - - - - - - - 
    Abort-Model             R      - - - o - - - - - - - - - - - 
    Security-Level          R      o o o - - - - - - - - - - - - 
    Num-Min-Verification-P. R      o o o - - - - - - - - - - - - 
    Num-Max-Verification-P. R      o o o - - - - - - - - - - - - 
    No-Input-Timeout        R      o o - - - - o - - - - - - - - 
    Save-Waveform           R      o o - - - - o - - - - - - - - 
    Waveform-URI            R      - - - - - - - - - - - - - o - 
    Input-Waveform-URI      R      - - - - - - o - - - - - - - - 
    Ver-Buffer-Utterance    R      o o - - - - o - - - - - - - - 
    Completion-Cause        R      - - - - - - - - - - - - - m - 
    Completion-Cause       2XX     - - - - m m - o - - - - - - - 
    Completion-Cause       4XX     - - - - m m m m - - - - - - - 
    Completion-Reason       R      - - - - - - - - - - - - - m - 
    Completion-Reason      2XX     - - - - m m - o - - - - - - - 
    Completion-Reason      4XX     - - - - m m m m - - - - - - - 
    Start-Input-Timers      R      - - - - - - o - - - - - - - - 
    Fetch-Timeout           R      o o o o - - - - - - - - - - - 
    Failed-URI              R      - - - - - - - - - - - - - o - 
    Failed-URI             4XX     - - o o - - - - - - - - - - - 
    Failed-URI-Cause        R      - - - - - - - - - - - - - o - 
    Failed-URI-Cause       4XX     - - o o - - - - - - - - - - - 
    New-Audio-Channel       R      - - - o - - o - - - o - - - - 
    Abort-Verification      R      - - - - - - - - - m - - - - - 
    Speech-Complete-Timeout R      o o - - - - o - - - - - - - - 
    Voice-Print-Exists     2XX     - - - - m m - - - - - - - - - 
     
  

    Legend:   (s) - SET-PARAMS, (g) - GET-PARAMS, (A) - START-SESSION, 
    (B) - END-SESSION, (C) - QUERY-VOICEPRINT, (D) - DELETE-VOICEPRINT, 
    (E) - VERIFY, (F) - VERIFY-FROM-BUFFER, (G) - VERIFY-ROLLBACK, 
    (H) - STOP, (I) - CLEAR-BUFFER, (J) - START-INPUT-TIMERS, (K) - 
    GET-INTERMEDIATE-RESULTS, (L) - VERIFICATION-COMPLETE, (M) - 
    START-OF-SPEECH, (m) - mandatory, (o) - optional (refer to text for 
    further constraints), (R) - Request, (r) - Response  
     
  
 Repository-URI  
     
    This header field specifies the voiceprint repository to be used or 
    referenced during speaker verification or identification operations.  
    This header field is required in the START-SESSION, QUERY-VOICEPRINT 
    and DELETE-VOICEPRINT methods.  
     
      repository-uri = "Repository-URI" ":" Uri CRLF 
     
 Voiceprint-Identifier 
  
    This header field specifies the claimed identity for voice 
    verification applications.  The claimed identity may be used to 
    specify an existing voiceprint or to establish a new voiceprint.  
    This header field is required in the QUERY-VOICEPRINT and 
    DELETE-VOICEPRINT methods.  The Voiceprint-Identifier is also 
    required in the START-SESSION method for verification operations.  
    For identification or Multi-Verification operations, this header 
    field may contain a list of voiceprint identifiers separated by 
    semicolons.  For identification operations, a voiceprint group 
    identifier may be specified instead of a list of voiceprint 
    identifiers.  All voiceprint group identifiers have an extension of 
    ".vpg".  The creation of such group identifier objects is left to 
    mechanisms outside this protocol.  
     
      voiceprint-identifier =  "Voiceprint-Identifier" ":"  
                               1*VCHAR "." 3VCHAR 
                               *(";" 1*VCHAR "." 3VCHAR) CRLF 
     
 Verification-Mode 
     
    This header field specifies the mode of the verification resource 
    and is set in the START-SESSION method.  Acceptable values indicate 
    whether the verification session should train a voiceprint ("train") 
    or verify/identify using an existing voiceprint ("verify").  
     
    Training and verification sessions both require the voiceprint 
    Repository-URI to be specified in the START-SESSION.  In many usage 
    scenarios, however, the system cannot know the speaker's claimed 
    identity until the speaker says, for example, their account number.  
    In order to allow the first few utterances of a dialog to be both 
    recognized and verified, the verification resource on the MRCP 
    server retains an audio buffer, in which the MRCP server will 
    accumulate recognized utterances in memory.  The 
    application can later execute a verification method and apply the 
    buffered utterances to the current verification session. The 
    buffering methods are used for this purpose. When buffering is used, 
    subsequent input utterances are added to the audio buffer for later 
    analysis. 
     
    Some voice user interfaces may require additional user input that 
    should not be analyzed for verification. For example, the user's 
    input may have been recognized with low confidence and thus require 
    a confirmation cycle. In such cases, the client should not execute 
    the VERIFY or VERIFY-FROM-BUFFER methods to collect and analyze the 
    caller's input.  A separate recognizer resource can analyze the 
    caller's response without any participation by the verification 
    resource.  
     
    Once the following conditions have been met:  
    1. the voiceprint identity has been successfully established through 
       the voiceprint identifier header fields of the QUERY-VOICEPRINT 
       method, and 
    2. the verification mode has been set to one of "train" or "verify", 
    the verification resource may begin providing verification 
    information during verification operations. The verification 
    resource MUST reach one of the two major states ("train" or 
    "verify") if the above two conditions hold, or it MUST report an 
    error condition in the MRCP status code to indicate why the 
    verification resource is not ready for action. 
     
    The value of verification-mode is persistent within a verification 
    session. Changing the mode to a different value than the previous 
    setting causes the verification resource to report an error if the 
    previous setting was either "train" or "verify". If the mode is 
    changed back to its previous value, the operation may continue.  
      verification-mode = "Verification-Mode" ":"  
                           verification-mode-string CRLF 
      verification-mode-string = "train" 
                               / "verify" 
  
     
 Adapt-Model 
     
    This header field indicates the desired behavior of the verification 
    resource after a successful verification execution.  If the value of 
    this header field is "true", the audio collected during the 
    verification session may be used to update the voiceprint to account 
    for ongoing changes in a speaker's incoming speech characteristics.  
    If the value is "false" (the default), the voiceprint is not updated 
    with the latest audio.  This header field MAY only occur in the 
    START-SESSION method.  
  
      adapt-model = "Adapt-Model" ":" Boolean-value CRLF 
  

     
     
 Abort-Model 
     
    The Abort-Model header field indicates the desired behavior of the 
    verification resource upon session termination. If the value of this 
    header is "true", the pending changes to a voiceprint due to 
    verification training or verification adaptation are discarded. If 
    the value is "false" (the default), the pending changes for a 
    training session or a successful verification session are committed 
    to the voiceprint repository. A value of "true" for Abort-Model 
    overrides a value of "true" for the Adapt-Model header field. This 
    header field MAY only occur in the END-SESSION method.  
  
      abort-model = "Abort-Model" ":" Boolean-value CRLF 
     
     
     
 Security-Level 
  
    The Security-Level header field determines the range of verification 
    scores in which a decision of 'accepted' may be declared. This 
    header field MAY occur in SET-PARAMS, GET-PARAMS and START-SESSION 
    methods. It can be "high" (highest security level), "medium-high", 
    "medium" (normal security level), "medium-low", or "low" (low 
    security level). The default value is platform specific. 
     
      security-level = "Security-Level" ":" security-level-string CRLF 
      security-level-string = "high" / 
            "medium-high" / 
            "medium" /  
            "medium-low" / 
            "low" 
  
  
 Num-Min-Verification-Phrases 
  
    The Num-Min-Verification-Phrases header field specifies the minimum 
    number of valid utterances required before a positive decision is 
    given for verification.  The value of this header field is an 
    integer, with a minimum and default value of 1.  The verification 
    resource should not announce a decision of 'accepted' unless at 
    least Num-Min-Verification-Phrases utterances are available. 
     
      num-min-verification-phrases = "Num-Min-Verification-Phrases" ":"  
                                      1*DIGIT CRLF 
     
     



  

 Num-Max-Verification-Phrases 
  
    The Num-Max-Verification-Phrases header field specifies the number 
    of valid utterances after which a decision is forced for 
    verification.  The verification resource MUST NOT return a decision 
    of 'undecided' once Num-Max-Verification-Phrases utterances have 
    been collected and used to determine a verification score.  The 
    value of this header field is an integer with a minimum value of 1.  
     
      num-max-verification-phrases = "Num-Max-Verification-Phrases" ":"  
                                      1*DIGIT CRLF 
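    The way the two phrase-count bounds constrain the decision can be 
    sketched as follows.  This is non-normative: the scoring model and 
    threshold are hypothetical, and only the decision values and the 
    min/max semantics come from this document. 

    ```python
    # Non-normative sketch: Num-Min-Verification-Phrases gates when
    # "accepted" may first be announced; Num-Max-Verification-Phrases
    # forces a decision (no more "undecided") once reached.
    def decide(scores, threshold, num_min=1, num_max=3):
        n = len(scores)
        cumulative = sum(scores) / n       # hypothetical cumulative score
        if n < num_min:
            return "undecided"             # too few utterances to accept
        if cumulative >= threshold:
            return "accepted"
        # Once num_max utterances have been scored, a decision is forced.
        return "rejected" if n >= num_max else "undecided"

    print(decide([0.95], 0.85, num_min=2))        # undecided
    print(decide([0.95, 0.93], 0.85, num_min=2))  # accepted
    print(decide([0.40, 0.50, 0.45], 0.85))       # rejected
    ```

    With num_min=2, a single high-scoring utterance is still 
    'undecided'; once num_max utterances have been scored, the sketch 
    must commit to 'accepted' or 'rejected'. 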
  
     
 No-Input-Timeout 
  
    The No-Input-Timeout header field sets the length of time from the 
    start of the verification timers (see START-INPUT-TIMERS) until the 
    declaration of a no-input event in the VERIFICATION-COMPLETE server 
    event message. The value is in milliseconds. This header field MAY 
    occur in VERIFY, SET-PARAMS or GET-PARAMS. The value for this field 
    ranges from 0 to MAXTIMEOUT, where MAXTIMEOUT is platform specific. 
    The default value for this field is platform specific.  
          
      no-input-timeout = "No-Input-Timeout" ":" 1*DIGIT CRLF 
  
  
 Save-Waveform 
  
    This header field allows the client to request that the verification 
    resource save the audio stream used for verification/identification.  
    When the value is "true", the verification resource MUST record the 
    audio and make it available to the client in the form of a URI 
    returned in the Waveform-URI header field of the 
    VERIFICATION-COMPLETE event.  If there was an error in recording the 
    stream, or the audio clip is otherwise not available, the 
    verification resource MUST return an empty Waveform-URI header 
    field.  The default value for this field is "false".  This header 
    field MAY appear in the VERIFY method but NOT in the 
    VERIFY-FROM-BUFFER method, since it controls whether to save the 
    waveform for live verification/identification operations only. 
      
         save-waveform       =    "Save-Waveform" ":" boolean-value CRLF  
  
  
 Waveform-URI 
  
    If the save-waveform header field is set to true, the verification 
    resource MUST record the incoming audio stream of the verification 
    into a file and provide a URI for the client to access it. This 
    header MUST be present in the VERIFICATION-COMPLETE event if the 
    save-waveform header field is set to true.  The URI value of the 
    header MUST be NULL if there was some error condition preventing the 
    server from recording.  Otherwise, the URI generated by the server 
    SHOULD be globally unique across the server and all its verification 
    sessions.  The URI SHOULD be available until the session is torn 
    down.  Since the save-waveform header field applies only to live 
    verification/identification operations, the Waveform-URI header is 
    returned in the VERIFICATION-COMPLETE event only for live 
    verification/identification operations. 
          
       waveform-uri = "Waveform-URI" ":" Uri CRLF 
  
  
     
 Voiceprint-Exists 
     
    This header field is returned in the QUERY-VOICEPRINT and 
    DELETE-VOICEPRINT responses.  It indicates the status of the 
    voiceprint specified in the QUERY-VOICEPRINT method.  For the 
    DELETE-VOICEPRINT method, this field indicates the status of the 
    voiceprint when the method execution started. 
     
      voiceprint-exists    = "Voiceprint-Exists" ":" Boolean-value CRLF 
     
     
 Ver-Buffer-Utterance  
      
    This header field is used to indicate that this utterance may later 
    be considered for speaker verification.  This way, an application 
    can buffer utterances while doing regular recognition or 
    verification activities, and speaker verification can later be 
    requested on the buffered utterances.  This header field is OPTIONAL 
    in the RECOGNIZE, VERIFY and RECORD methods.  The default value for 
    this field is "false".  
      
      ver-buffer-utterance = "Ver-Buffer-Utterance" ":" Boolean-value 
                             CRLF  
     
     
 Input-Waveform-URI 
     
    This optional header field specifies an audio file to be processed 
    according to the current verification mode, either to train the 
    voiceprint or to verify the user.  This enables the client to 
    implement the buffering use case even when the recognizer and 
    verification resources reside in two separate sessions.  It MAY be 
    part of the VERIFY method. 
     
      input-waveform-uri    = "Input-Waveform-URI" ":" Uri CRLF 
  
     
 Completion-Cause 
     
  

    This header field MUST be part of a VERIFICATION-COMPLETE event sent 
    from the verification resource to the client.  It indicates the 
    reason behind the VERIFY or VERIFY-FROM-BUFFER method completion.  
    This header field MUST also be sent in VERIFY, VERIFY-FROM-BUFFER 
    and QUERY-VOICEPRINT responses if they return with a failure status 
    and a COMPLETE state. 
          
      completion-cause = "Completion-Cause" ":" 1*DIGIT SP  
                         1*VCHAR CRLF  
          
      Cause-Code  Cause-Name         Description  
        000       success            VERIFY or VERIFY-FROM-BUFFER 
                                     request completed successfully.  
                                     The verify decision can be 
                                     "accepted", "rejected", or 
                                     "undecided". 
        001       error              VERIFY or VERIFY-FROM-BUFFER 
                                     request terminated prematurely due 
                                     to a verification resource or 
                                     system error.  
        002       no-input-timeout   VERIFY request completed with no 
                                     result due to a no-input timeout. 
        003       too-much-speech-timeout 
                                     VERIFY request completed with no 
                                     result due to too much speech. 
        004       speech-too-early   VERIFY request completed with no 
                                     result because speech was detected 
                                     too early. 
        005       buffer-empty       VERIFY-FROM-BUFFER request 
                                     completed with no result due to an 
                                     empty buffer. 
        006       out-of-sequence    Verification operation failed due 
                                     to out-of-sequence method 
                                     invocations, for example calling 
                                     VERIFY before QUERY-VOICEPRINT. 
        007       repository-uri-failure 
                                     Failure accessing the repository 
                                     URI. 
        008       repository-uri-missing 
                                     Repository-URI is not specified. 
        009       voiceprint-id-missing 
                                     Voiceprint-Identifier is not 
                                     specified. 
        010       voiceprint-id-not-exist 
                                     Voiceprint-Identifier does not 
                                     exist in the voiceprint repository. 
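    For convenience while debugging, the cause codes above can be 
    captured in a small lookup table (non-normative; keys are shown as 
    integers, while the wire format is the three-digit value from the 
    table): 

    ```python
    # Lookup table for the Completion-Cause codes listed above.
    COMPLETION_CAUSES = {
        0: "success",
        1: "error",
        2: "no-input-timeout",
        3: "too-much-speech-timeout",
        4: "speech-too-early",
        5: "buffer-empty",
        6: "out-of-sequence",
        7: "repository-uri-failure",
        8: "repository-uri-missing",
        9: "voiceprint-id-missing",
        10: "voiceprint-id-not-exist",
    }

    print(COMPLETION_CAUSES[5])    # buffer-empty
    print(COMPLETION_CAUSES[0])    # success
    ```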
  
 Completion-Reason 
     
    This header field MAY be specified in a VERIFICATION-COMPLETE event 
    sent from the verification resource to the client.  It contains the 
    reason text behind the VERIFY request completion and can be used to 
    communicate text describing the reason for a failure. 
     
  

      completion-reason   =    "Completion-Reason" ":"  
                               quoted-string CRLF 
  
 Speech-Complete-Timeout 
     
    This header field is the same as the one described for the 
    Recognizer resource.  
     
 New-Audio-Channel 
     
    This header field is the same as the one described for the 
    Recognizer resource. 
     
 Abort-Verification  
      
    This header field MUST be sent in a STOP method to indicate whether 
    the VERIFY method in progress should be aborted, or whether it 
    should stop verifying and return the verification results computed 
    up to that point.  A value of "true" aborts the request and discards 
    the results.  A value of "false" stops verification and returns the 
    verification result in the STOP response.  
      
      abort-verification = "Abort-Verification" ":" Boolean-value CRLF  
     
 Start-Input-Timers 
     
    This header field MAY be sent as part of a VERIFY request.  A value 
    of "false" tells the verification resource to start the VERIFY 
    operation but not to start the no-input timer yet.  The verification 
    resource should not start the timers until the client sends a 
    START-INPUT-TIMERS request to the resource.  This is useful when the 
    verification and synthesizer resources are not part of the same 
    session.  In that scenario, while a kill-on-barge-in prompt is being 
    played, the client may want the VERIFY request to be simultaneously 
    active so that it can detect and implement kill-on-barge-in, but 
    does not want the verification resource to start the no-input timers 
    until the prompt is finished.  The default value is "true".  
     
      start-input-timers =     "Start-Input-Timers" ":" 
                                    boolean-value CRLF 
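    The timer interaction above can be sketched as follows 
    (non-normative; the class models only the no-input timer state): 

    ```python
    # Non-normative sketch: VERIFY with Start-Input-Timers "false"
    # leaves the no-input timer stopped; a later START-INPUT-TIMERS
    # request arms it once the prompt has finished playing.
    class NoInputTimer:
        def __init__(self):
            self.running = False

        def verify(self, start_input_timers=True):
            # Per the default value "true", the timer starts with VERIFY
            # unless the client says otherwise.
            self.running = start_input_timers

        def start_input_timers(self):
            self.running = True

    t = NoInputTimer()
    t.verify(start_input_timers=False)   # prompt still playing
    print(t.running)                     # False
    t.start_input_timers()               # prompt finished
    print(t.running)                     # True
    ```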
  
     
     
 11.5.     Verification Result Elements 
     
     
    The verification results are returned as XML data in a 
    VERIFICATION-COMPLETE event containing an NLSML document, with the 
    MIME type application/x-nlsml.  The XML Schema and DTD for this 
    portion of the XML data are provided in normative form in the 
    Appendix.  MRCP-specific tag additions to this XML result format 
    described in this section MUST be in the MRCPv2 namespace.  In the 
    result structure, they must either be prefixed by a namespace prefix 
    declared within the result or be children of an element identified 
    as belonging to that namespace.  For details on how to use XML 
    namespaces, see [21].  Section 2 of [21] provides details on how to 
    declare namespaces and namespace prefixes. 
     
    Example 1: 
           <?xml version="1.0"?> 
           <result grammar="What-Grammar-URI" 
             xmlns:mrcp="http://www.ietf.org/mrcp2"> 
             <mrcp:result-type type="VERIFICATION" /> 
             <mrcp:verification-result> 
               <voiceprint id="johnsmith"> 
                 <adapted> true </adapted> 
                 <incremental> 
                   <num-frames> 50 </num-frames> 
                   <device> cellular-phone </device> 
                   <gender> female </gender> 
                   <decision> accepted </decision> 
                   <verification-score> 0.98514 </verification-score> 
                 </incremental> 
                 <cumulative> 
                   <num-frames> 1000 </num-frames> 
                   <device> cellular-phone </device> 
                   <gender> female </gender> 
                   <decision> accepted </decision> 
                   <verification-score> 0.91725</verification-score> 
                 </cumulative> 
               </voiceprint> 
               <voiceprint id="marysmith"> 
                 <cumulative> 
                   <verification-score> 0.93410 </verification-score> 
                 </cumulative> 
               </voiceprint> 
               <voiceprint uri="juniorsmith"> 
                 <cumulative> 
                   <verification-score> 0.74209 </verification-score> 
                 </cumulative> 
               </voiceprint> 
             </mrcp:verification-result> 
           </result> 
     
    Example 2: 
           <?xml version="1.0"?> 
           <result grammar="What-Grammar-URI" 
             xmlns:mrcp="http://www.ietf.org/mrcp2" 
             xmlns:xmpl="http://www.example.org/2003/12/mrcp2"> 
             <mrcp:result-type type="VERIFICATION" /> 
             <mrcp:verification-result> 
               <voiceprint id="johnsmith"> 
  
                 <incremental> 
                   <num-frames> 50 </num-frames> 
                   <device> cellular-phone </device> 
                   <gender> female </gender> 
                   <needmoredata> true </needmoredata> 
                   <verification-score> 0.88514 </verification-score> 
                    <xmpl:raspiness> high </xmpl:raspiness> 
                    <xmpl:emotion> sadness </xmpl:emotion> 
                 </incremental> 
                 <cumulative> 
                   <num-frames> 1000 </num-frames> 
                   <device> cellular-phone </device> 
                   <gender> female </gender> 
                   <needmoredata> false </needmoredata> 
                   <verification-score> 0.9345 </verification-score> 
                 </cumulative> 
               </voiceprint> 
             </mrcp:verification-result> 
           </result> 
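    A client might extract the per-voiceprint scores from such a result 
    with a standard XML parser.  The sketch below is non-normative and 
    parses a trimmed-down version of Example 1; the namespace handling 
    follows the declarations shown in the examples. 

    ```python
    # Non-normative sketch: pull cumulative verification scores out of
    # a VERIFICATION-COMPLETE result document like Example 1 above.
    import xml.etree.ElementTree as ET

    doc = """<?xml version="1.0"?>
    <result grammar="What-Grammar-URI"
      xmlns:mrcp="http://www.ietf.org/mrcp2">
      <mrcp:verification-result>
        <voiceprint id="johnsmith">
          <cumulative>
            <decision> accepted </decision>
            <verification-score> 0.91725 </verification-score>
          </cumulative>
        </voiceprint>
      </mrcp:verification-result>
    </result>"""

    root = ET.fromstring(doc)
    ns = {"mrcp": "http://www.ietf.org/mrcp2"}
    # The mrcp-prefixed wrapper is in the MRCPv2 namespace; the
    # voiceprint children are unprefixed, as in the examples.
    vr = root.find("mrcp:verification-result", ns)
    for vp in vr.findall("voiceprint"):
        score = float(vp.find("cumulative/verification-score").text)
        print(vp.get("id"), score)    # johnsmith 0.91725
    ```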
     
    Verification results XML markup can contain the following 
    elements/tags: 
     
       1. Voiceprint 
       2. Incremental 
       3. Cumulative 
       4. Decision 
       5. Utterance-Length 
       6. Device 
       7. Gender 
       8. Adapted 
       9. Verification-Score 
       10. Vendor-Specific-Results 
     
     
    1. Voiceprint 
     This element in the verification results provides information on 
     how the speech data matched a single voiceprint.  The result data 
     returned may contain more than one such element in the case of 
     Identification or Multi-Verification.  Each voiceprint element and 
     the XML data within it describe how well the speech data matched 
     that particular voiceprint.  The voiceprint elements are ordered 
     according to their cumulative verification match scores, with the 
     highest first.  
      
    2. Cumulative 
    Within each voice-print element there MUST be a "cumulative" element 
    containing the cumulative scores of how well multiple utterances 
    matched the voice-print. 
      
  
 S Shanmugham                  IETF-Draft                      Page 121 

    3. Incremental 
    The first voice-print element MAY contain an "incremental" element 
    with the incremental scores of how well the last utterance matched 
    the voice-print. 
  
      
    4. Decision 
    This element is found within the incremental or cumulative element 
    within the verification results.   Its value indicates the decision 
    as determined by verification.  It can have the values of 
    "accepted", "rejected" or "undecided". 
     
    5. Utterance-Length 
    This element is found within the incremental or cumulative element 
    within the verification results. Its value indicates the size of the 
    last utterance or of the cumulative set of utterances, in 
    milliseconds. 
  
    6. Device 
    This element is found within the incremental or cumulative element 
    within the verification results. Its value indicates the apparent 
    type of device used by the caller as determined by verification.  It 
    can have the values of "cellular-phone", "electret-phone", "carbon-
    button-phone" and "unknown". 
     
    7. Gender 
    This element is found within the incremental or cumulative element 
    within the verification results. Its value indicates the apparent 
    gender of the speaker as determined by verification. It can have the 
    values of "male", "female" or "unknown". 
     
    8. Adapted 
    This element is found within the voice-print element within the 
    verification results. When verification is trying to confirm the 
    voiceprint, this indicates if the voiceprint has been adapted as a 
    consequence of analyzing the source utterances.  It is not returned 
    during verification training. The value can be "true" or "false". 
     
    9. Verification-Score 
    This element is found within the incremental or cumulative element 
    within the verification results. Its value indicates the score of 
    the last utterance as determined by verification.   
     
    During verification, the higher the score, the more likely it is 
    that the speaker is the same one who spoke the voiceprint 
    utterances.  During training, the higher the score, the more likely 
    the speaker is to have spoken all of the analyzed utterances.  The 
    value is a floating point number between 0.0 and 1.0. If there are 
    no such utterances, the score is 0. Note that although the 
    verification score lies between 0.0 and 1.0, it should NOT be 
    interpreted as a probability value. 
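    As an illustration of how a client might consume this score, the 
    sketch below compares a score against a locally configured 
    threshold. The threshold value is a hypothetical application tuning 
    choice and is not defined by this specification. 

```python
# Hypothetical client-side use of a verification score.  The draft only
# guarantees the score lies in [0.0, 1.0] and that it is NOT a
# probability, so the threshold here is an application tuning choice.

def local_decision(score: float, threshold: float = 0.5) -> str:
    """Map a verification score to a local accept/reject decision."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("verification-score must lie in [0.0, 1.0]")
    return "accepted" if score >= threshold else "rejected"
```

    Because the score is not a probability, such thresholds are 
    typically tuned per deployment against the desired false-accept and 
    false-reject rates. 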
  
  
    10. Vendor-Specific-Results 
    Vendor-specific results are described using XML syntax. Vendor- 
    specific additions to the default result format MUST belong to the 
    vendor's own namespace.  In the result structure, they must either 
    be prefixed by a namespace prefix declared within the result or be 
    children of an element identified as belonging to the respective 
    namespace. 
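    As a non-normative sketch, the results markup shown in the examples 
    of this section can be consumed with any XML parser. The helper 
    below assumes the unprefixed element names used in this draft's 
    examples and ignores vendor-specific children. 

```python
# Minimal sketch of extracting verification results from an NLSML body
# shaped like the examples in this section.
import xml.etree.ElementTree as ET

NLSML = """<?xml version="1.0"?>
<result grammar="What-Grammar-URI">
  <extensions>
    <result-type type="VERIFICATION"/>
    <verification-result>
      <voiceprint id="marysmith">
        <incremental>
          <decision> accepted </decision>
          <verification-score> 0.98 </verification-score>
        </incremental>
        <cumulative>
          <decision> accepted </decision>
          <verification-score> 0.85 </verification-score>
        </cumulative>
      </voiceprint>
    </verification-result>
  </extensions>
</result>"""

def cumulative_scores(body: str) -> dict:
    """Return {voiceprint id: (decision, cumulative score)}."""
    root = ET.fromstring(body)
    out = {}
    for vp in root.iter("voiceprint"):
        cum = vp.find("cumulative")
        decision = cum.findtext("decision", "").strip()
        score = float(cum.findtext("verification-score", "0"))
        out[vp.get("id")] = (decision, score)
    return out
```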
  
     
 11.6.     START-SESSION 
     
    The START-SESSION method starts a Speaker Verification or 
    Identification session.  Execution of this method forces the 
    verification resource into a known initial state. If this method is 
    called during an ongoing verification session, the previous session 
    is implicitly aborted. If this method is invoked while VERIFY or 
    VERIFY-FROM-BUFFER is active, it fails with a status code of 402. 
     
    Upon completion of the START-SESSION method, the verification 
    resource MUST terminate any ongoing verification sessions, and clear 
    any voiceprint designation.  
     
    A verification session needs to establish the voice print repository 
    that will be used as part of this session. This is specified through 
    the "Repository-URI" header field, in which a URI pointing to the 
    location of the voiceprint repository is given. 
     
    It also establishes the voice-print that is to be matched or 
    trained during that verification session, through the Voiceprint- 
    Identifier header field. For an Identification or Multi- 
    Verification session, this header field contains a list of 
    semicolon-separated voice-print identifiers. 
     
    The header field "Adapt-Model" may also be present in the START- 
    SESSION method to indicate whether or not to adapt the voiceprint 
    with data collected during the session (if the voiceprint 
    verification phase succeeds). By default, the voiceprint model 
    should NOT be adapted with data from a verification session. 
     
    The START-SESSION method must also establish whether the session is 
    to train or to verify a voice-print. Hence, the Verification-Mode 
    header field MUST be sent in this method, and its value MUST be 
    either "train" or "verify". 
  
    Before a verification/identification session is started, only 
    VERIFY-ROLLBACK and the generic SET-PARAMS and GET-PARAMS operations 
    can be performed. The server should return 402 (Method Not Valid In 
    This State) for all other operations, such as VERIFY or 
    QUERY-VOICEPRINT. 
  

  
    Only a single session can be active at a time. 
     
 Example: 
    C->S:  MRCP/2.0 123 START-SESSION 314161 
           Channel-Identifier: 32AECB23433801@speakverify 
           Repository-URI: http://www.example.com/voiceprintdbase/ 
           Voiceprint-Identifier: johnsmith.voiceprint 
           Adapt-Model: true 
     
    S->C:  MRCP/2.0 49 314161 200 COMPLETE 
           Channel-Identifier: 32AECB23433801@speakverify 
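    The request above can be composed mechanically. The sketch below is 
    illustrative only: it assumes the second token of the start-line is 
    the total message length (the numeric lengths in this draft's 
    examples are placeholders), and the header ordering is arbitrary. 

```python
# Illustrative composition of the START-SESSION request shown above.
# The loop iterates until the length field's own digit count is
# included in the computed message length.
CRLF = "\r\n"

def mrcp_request(method: str, request_id: int, headers: dict) -> str:
    header_block = CRLF.join(f"{k}: {v}" for k, v in headers.items())
    body = f" {method} {request_id}{CRLF}{header_block}{CRLF}{CRLF}"
    length = 0
    while True:
        candidate = f"MRCP/2.0 {length}{body}"
        if len(candidate) == length:
            return candidate
        length = len(candidate)

msg = mrcp_request("START-SESSION", 314161, {
    "Channel-Identifier": "32AECB23433801@speakverify",
    "Repository-URI": "http://www.example.com/voiceprintdbase/",
    "Voiceprint-Identifier": "johnsmith.voiceprint",
    "Adapt-Model": "true",
})
```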
       
 11.7.     END-SESSION 
     
    The END-SESSION method terminates an ongoing verification session 
    and releases the verification voiceprint model in one of three ways: 
    a. aborting - the voiceprint adaptation or creation may be aborted 
       so that the voiceprint remains unchanged (or is not created). 
    b. committing - when terminating a voiceprint training session, the 
       new voiceprint is committed to the repository. 
    c. adapting - an existing voiceprint is modified using a successful 
       verification. 
     
    The header field "Abort-Model" may be included in the END-SESSION to 
    control whether or not to abort any pending changes to the 
    voiceprint. The default behavior is to commit (not abort) any 
    pending changes to the designated voiceprint. 
     
    The END-SESSION method may be safely executed multiple times without 
    first executing the START-SESSION method. Any additional executions 
    of this method without an intervening use of the START-SESSION 
    method have no effect on the system. 
     
     
 Example: 
    This example assumes a training or verification session is in 
    progress. 
     
    C->S:  MRCP/2.0 123 END-SESSION 314174 
           Channel-Identifier: 32AECB23433801@speakverify 
           Abort-Model: true 
       
    S->C:  MRCP/2.0 49 314174 200 COMPLETE 
           Channel-Identifier: 32AECB23433801@speakverify 
     
 11.8.     QUERY-VOICEPRINT 
     
    The QUERY-VOICEPRINT method is used to get the status of a 
    particular voice-print. It can be used to determine whether a 
    voice-print or repository exists and whether the voice-print is 
    trained. 
     
  
    The response to the QUERY-VOICEPRINT method request will contain an 
    indication of the status of the designated voiceprint in the 
    "Voiceprint-Exists" header field, allowing the client to determine 
    whether to use the current voiceprint for verification, train a new 
    voiceprint, or choose a different voiceprint. 
     
    A voiceprint is completely specified by providing a repository 
    location and a voiceprint identifier. The particular voice-print or 
    identity within the repository is specified by a string identifier 
    that is unique within the repository. The "Voiceprint-Identifier" 
    header field MUST carry this unique voiceprint identifier within a 
    given repository. 
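    A client typically uses the QUERY-VOICEPRINT response to choose the 
    Verification-Mode of the next START-SESSION. A minimal, 
    hypothetical client-side sketch: 

```python
# Hypothetical client decision driven by the Voiceprint-Exists header
# in a QUERY-VOICEPRINT response.

def choose_mode(response_headers: dict) -> str:
    """Pick the Verification-Mode for the next START-SESSION."""
    exists = response_headers.get("Voiceprint-Exists", "false") == "true"
    # An existing voiceprint can be verified against; otherwise the
    # client trains a new one.
    return "verify" if exists else "train"
```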
     
     
 Example 1: 
    This example assumes a verification session is in progress and the 
    voiceprint exists in the voiceprint repository. 
     
    C->S:  MRCP/2.0 123 QUERY-VOICEPRINT 314168 
           Channel-Identifier: 32AECB23433801@speakverify 
           Repository-URI: http://www.example.com/voice-prints/ 
           Voiceprint-Identifier: johnsmith.voiceprint 
  
    S->C:  MRCP/2.0 123 314168 200 COMPLETE 
           Channel-Identifier: 32AECB23433801@speakverify 
           Repository-URI: http://www.example.com/voice-prints/ 
           Voiceprint-Identifier: johnsmith.voiceprint 
           Voiceprint-Exists: true 
            
 Example 2: 
    This example assumes that the URI provided in the 'Repository-URI' 
    header field is a bad URI. 
     
    C->S:  MRCP/2.0 123 QUERY-VOICEPRINT 314168 
           Channel-Identifier: 32AECB23433801@speakverify 
           Repository-URI: http://www.example.com/bad-uri/ 
           Voiceprint-Identifier: johnsmith.voiceprint 
  
    S->C:  MRCP/2.0 123 314168 405 COMPLETE 
           Channel-Identifier: 32AECB23433801@speakverify 
           Repository-URI: http://www.example.com/bad-uri/ 
           Voiceprint-Identifier: johnsmith.voiceprint 
           Completion-Cause: 007 repository-uri-failure 
            
            
 11.9.     DELETE-VOICEPRINT 
     
    The DELETE-VOICEPRINT method removes a voiceprint from a 
    verification or speaker identification repository. This method MUST 
    carry the Repository-URI and Voiceprint-Identifier header fields. 
     
  
    If the voiceprint record does not exist, the server silently 
    ignores the DELETE-VOICEPRINT request and still returns a 200 
    status code. 
  
 Example: 
    This example demonstrates a message to remove a specific voiceprint. 
     
    C->S:  MRCP/2.0 123 DELETE-VOICEPRINT 314168 
           Channel-Identifier: 32AECB23433801@speakverify 
           Repository-URI: http://www.example.com/voice-prints/ 
           Voiceprint-Identifier: johnsmith.voiceprint 
  
    S->C:  MRCP/2.0 49 314168 200 COMPLETE 
           Channel-Identifier: 32AECB23433801@speakverify 
     
 11.10.    VERIFY 
     
    The VERIFY method is used to send the utterance's audio stream to 
    the verification resource, which will then process it according to 
    the current Verification-Mode, either to train/adapt the voiceprint 
    or verify/identify the user. If the voiceprint is new or was 
    deleted by a previous DELETE-VOICEPRINT method, the VERIFY method 
    trains the voiceprint. If the voiceprint already exists, it is 
    adapted, not re-trained, by the VERIFY command. 
     
    When both a recognizer and verification resource share the same 
    session, the VERIFY method MUST be called prior to calling the 
    RECOGNIZE method on the recognizer resource.  In such cases, server 
    vendors will know that verification must be enabled for a subsequent 
    call to RECOGNIZE.  
     
 Example: 
    C->S:  MRCP/2.0 49 VERIFY 543260 
           Channel-Identifier: 32AECB23433801@speakverify 
     
    S->C:  MRCP/2.0 49 543260 200 IN-PROGRESS 
           Channel-Identifier: 32AECB23433801@speakverify 
     
    When the VERIFY request is done, the MRCP server should send a 
    'VERIFICATION-COMPLETE' event to the client. 
     
  
 11.11.    VERIFY-FROM-BUFFER 
     
    The VERIFY-FROM-BUFFER method begins an ongoing evaluation of the 
    currently buffered audio against the voiceprint. Only one VERIFY or 
    VERIFY-FROM-BUFFER method can be active at any one time.  
     
    The buffered audio is not consumed by this evaluation operation and 
    thus VERIFY-FROM-BUFFER may be called multiple times using different 
    voiceprints.  
     
  
    For the VERIFY-FROM-BUFFER method, the server can optionally return 
    an "IN-PROGRESS" response followed by the "VERIFICATION-COMPLETE" 
    event. 
     
    When the VERIFY-FROM-BUFFER method is invoked and the verification 
    buffer is in use, the server MUST return an IN-PROGRESS response and 
    wait until the buffer is available for verification processing. The 
    verification buffer is owned by the verification resource but is 
    shared for writing with other input resources on the same session, 
    such as recognition and recording. Hence, it is considered to be in 
    use if there is a read or write operation, such as a RECORD or 
    RECOGNIZE with the ver-buffer-utterance header field set to "true", 
    on a resource that shares this buffer. Note that if the RECORD or 
    RECOGNIZE command returns with a failure cause code, the VERIFY- 
    FROM-BUFFER command waiting to process that buffer MUST also fail, 
    with a Completion-Cause of 005 (buffer-empty). 
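    The buffer-availability rules above can be sketched as follows. 
    This is an illustrative server-side model, not protocol machinery; 
    "writers" stands for in-flight RECORD or RECOGNIZE requests with 
    ver-buffer-utterance set to "true". 

```python
# Illustrative model of how a server might gate VERIFY-FROM-BUFFER on
# the shared verification buffer.

def verify_from_buffer(writers: set, writer_failed: bool, empty: bool):
    """Yield the responses the server issues for VERIFY-FROM-BUFFER."""
    if writers:
        # Buffer in use: answer IN-PROGRESS and wait for the writers
        # (RECORD/RECOGNIZE) to finish.
        yield "200 IN-PROGRESS"
    if writer_failed or empty:
        # A failed RECORD/RECOGNIZE leaves nothing to verify, so the
        # waiting request fails with Completion-Cause 005.
        yield "VERIFICATION-COMPLETE 005 buffer-empty"
    else:
        yield "VERIFICATION-COMPLETE 000 success"
```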
     
 Example: 
    This example illustrates the use of some buffering methods. In this 
    scenario, the client first performs a live verification, but the 
    utterance is rejected. Meanwhile, the utterance is also saved to 
    the audio buffer. Then another voiceprint is used for verification 
    against the audio buffer, and the utterance is accepted. Here, we 
    assume both 'num-min-verification-phrases' and 'num-max- 
    verification-phrases' are 1. 
  
    C->S:  MRCP/2.0 123 START-SESSION 314161 
           Channel-Identifier: 32AECB23433801@speakverify 
           Adapt-Model: true 
           Repository-URI: http://www.example.com/voice-prints 
           Voiceprint-Identifier: johnsmith.voiceprint 
     
    S->C:  MRCP/2.0 49 314161 200 COMPLETE 
           Channel-Identifier: 32AECB23433801@speakverify 
     
    C->S:  MRCP/2.0 123 VERIFY 314162 
           Channel-Identifier: 32AECB23433801@speakverify 
           Ver-buffer-utterance: true 
     
    S->C:  MRCP/2.0 49 314162 200 IN-PROGRESS 
           Channel-Identifier: 32AECB23433801@speakverify 
     
    S->C:  MRCP/2.0 123 VERIFICATION-COMPLETE 314162 COMPLETE 
           Channel-Identifier: 32AECB23433801@speakverify 
           Completion-Cause: 000 success 
           Content-Type: application/x-nlsml 
           Content-Length: 123 
     
           <?xml version="1.0"?> 
           <result grammar="What-Grammar-URI"> 
           <extensions> 
  
              <result-type type="VERIFICATION" /> 
              <verification-result> 
                <voiceprint id="johnsmith"> 
                <incremental> 
                     <num-frames> 50 </num-frames> 
                     <device> cellular-phone </device> 
                     <gender> female </gender> 
                     <decision> rejected </decision> 
                     <verification-score> 0.05465 </verification-score> 
                </incremental> 
                <cumulative> 
                     <num-frames> 50 </num-frames> 
                     <device> cellular-phone </device> 
                     <gender> female </gender> 
                     <decision> rejected </decision> 
                     <verification-score> 0.09664 </verification-score> 
                </cumulative> 
                </voiceprint> 
              </verification-result> 
           </extensions> 
           </result> 
            
    C->S:  MRCP/2.0 123 QUERY-VOICEPRINT 314163 
           Channel-Identifier: 32AECB23433801@speakverify 
           Repository-URI: http://www.example.com/voiceprints/ 
           Voiceprint-Identifier: johnsmith.voiceprint 
            
    S->C:  MRCP/2.0 123 314163 200 COMPLETE 
           Channel-Identifier: 32AECB23433801@speakverify 
           Repository-URI: http://www.example.com/voiceprints/ 
           Voiceprint-Identifier: johnsmith.voiceprint 
           Voiceprint-Exists: true 
            
    C->S:  MRCP/2.0 123 START-SESSION 314164 
           Channel-Identifier: 32AECB23433801@speakverify 
           Adapt-Model: true 
           Repository-URI: http://www.example.com/voice-prints 
           Voiceprint-Identifier: marysmith.voiceprint 
     
    S->C:  MRCP/2.0 49 314164 200 COMPLETE 
           Channel-Identifier: 32AECB23433801@speakverify 
            
    C->S:  MRCP/2.0 123 VERIFY-FROM-BUFFER 314165 
           Channel-Identifier: 32AECB23433801@speakverify 
           Verification-Mode: verify 
  
    S->C:  MRCP/2.0 49 314165 200 IN-PROGRESS 
           Channel-Identifier: 32AECB23433801@speakverify 
     
    S->C:  MRCP/2.0 123 VERIFICATION-COMPLETE 314165 COMPLETE 
           Channel-Identifier: 32AECB23433801@speakverify 
           Completion-Cause: 000 success 
           Content-Type: application/x-nlsml 
           Content-Length: 123 
     
           <?xml version="1.0"?> 
           <result grammar="What-Grammar-URI"> 
           <extensions> 
              <result-type type="VERIFICATION" /> 
              <verification-result> 
                <voiceprint id="marysmith"> 
                <incremental> 
                     <num-frames> 50 </num-frames> 
                     <device> cellular-phone </device> 
                     <gender> female </gender> 
                     <decision> accepted </decision> 
                     <verification-score> 0.98 </verification-score> 
                </incremental> 
                <cumulative> 
                     <num-frames> 50 </num-frames> 
                     <device> cellular-phone </device> 
                     <gender> female </gender> 
                     <decision> accepted </decision> 
                     <verification-score> 0.85 </verification-score> 
                </cumulative> 
                </voiceprint> 
              </verification-result> 
           </extensions> 
           </result> 
  
     
    C->S:  MRCP/2.0 49 END-SESSION 314166 
           Channel-Identifier: 32AECB23433801@speakverify 
  
    S->C:  MRCP/2.0 49 314166 200 COMPLETE 
           Channel-Identifier: 32AECB23433801@speakverify 
     
 11.12.    VERIFY-ROLLBACK 
     
    The VERIFY-ROLLBACK method discards the last buffered utterance or 
    the last live utterance (whether the mode is "train" or "verify"). 
    This method should be invoked when the caller provides undesirable 
    input such as non-speech noise, side speech, out-of-grammar 
    utterances, commands, etc. Note that this method does not 
    provide a stack of rollback states. Executing VERIFY-ROLLBACK twice 
    in succession without an intervening recognition operation has no 
    effect on the second attempt. 
     
 Example: 
    C->S:  MRCP/2.0 49 VERIFY-ROLLBACK 314165 
           Channel-Identifier: 32AECB23433801@speakverify 
  
  
    S->C:  MRCP/2.0 49 314165 200 COMPLETE 
           Channel-Identifier: 32AECB23433801@speakverify 
       
 11.13.    STOP 
     
    The STOP method from the client to the server tells the 
    verification resource to stop the VERIFY or VERIFY-FROM-BUFFER 
    request if one is active. If such a request is active and the STOP 
    request successfully terminated it, the response contains an 
    Active-Request-Id-List header field with the request-id of the 
    VERIFY or VERIFY-FROM-BUFFER request that was terminated. In this 
    case, no VERIFICATION-COMPLETE event is sent for the terminated 
    request. If no verify request was active, the response MUST NOT 
    contain an Active-Request-Id-List header field. Either way, the 
    response MUST contain a status of 200 (Success). 
     
    The STOP method can carry an "Abort-Verification" header field, 
    which specifies whether the verification result gathered until that 
    point should be discarded or returned. If this header field is not 
    present or its value is "true", the verification result is 
    discarded, and the STOP response does not contain any result data. 
    If the field is present and its value is "false", the STOP response 
    MUST contain a "Completion-Cause" header field and carry the 
    verification result data in its body. 
     
    An aborted VERIFY request does an automatic rollback and does not 
    affect the cumulative score. A VERIFY request that was stopped with 
    the "Abort-Verification" header field set to "false" affects the 
    cumulative scores and needs to be explicitly rolled back if it 
    should not be considered for cumulative scoring. 
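    The Abort-Verification handling described above can be sketched as 
    follows; the response dictionary is an illustrative stand-in for a 
    real STOP response. 

```python
# Illustrative server-side handling of the Abort-Verification header
# in a STOP request.

def stop_verify(headers: dict, result_so_far: str) -> dict:
    """Build a STOP response per the Abort-Verification rules."""
    # An absent header defaults to "true": discard results and
    # automatically roll back (cumulative scores are unaffected).
    abort = headers.get("Abort-Verification", "true").lower() != "false"
    resp = {"status": "200 COMPLETE"}
    if abort:
        resp["rolled-back"] = True
    else:
        # "false": return the partial result; it affects cumulative
        # scores unless explicitly rolled back later.
        resp["Completion-Cause"] = "000 success"
        resp["body"] = result_so_far
    return resp
```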
          
 Example: 
    This example assumes a voiceprint identity has already been 
    established. 
     
    C->S:  MRCP/2.0 123 VERIFY 314177 
           Channel-Identifier: 32AECB23433801@speakverify 
           Verification-Mode: verify 
     
    S->C:  MRCP/2.0 49 314177 200 IN-PROGRESS  
           Channel-Identifier: 32AECB23433801@speakverify 
          
    C->S:  MRCP/2.0 49 STOP 314178 
           Channel-Identifier: 32AECB23433801@speakverify 
     
    S->C:  MRCP/2.0 123 314178 200 COMPLETE 
           Channel-Identifier: 32AECB23433801@speakverify 
           Active-Request-Id-List: 314177 
     

  
 11.14.    START-INPUT-TIMERS 
  
    This request is sent from the client to the verification resource to 
    start the no-input timer, usually once the audio prompts to the 
    caller have played to completion.  
     
 Example: 
    C->S:  MRCP/2.0 49 START-INPUT-TIMERS 543260 
           Channel-Identifier: 32AECB23433801@speakverify 
  
    S->C:  MRCP/2.0 49 543260 200 COMPLETE 
           Channel-Identifier: 32AECB23433801@speakverify 
  
 11.15.    VERIFICATION-COMPLETE 
     
    The VERIFICATION-COMPLETE event follows a call to VERIFY or VERIFY-
    FROM-BUFFER and is used to communicate to the client the 
    verification results.  This event will contain only verification 
    results. 
     
    Example: 
    S->C:  MRCP/2.0 123 VERIFICATION-COMPLETE 543259 COMPLETE 
           Completion-Cause: 000 success 
           Content-Type: application/x-nlsml 
           Content-Length: 123 
     
           <?xml version="1.0"?> 
           <result grammar="What-Grammar-URI"> 
           <extensions> 
              <result-type type="VERIFICATION" /> 
              <verification-result> 
                <voiceprint id="johnsmith"> 
                <incremental> 
                     <num-frames> 50 </num-frames> 
                     <device> cellular-phone </device> 
                     <gender> female </gender> 
                     <decision> accepted </decision> 
                     <verification-score> 0.85 </verification-score> 
                </incremental> 
                <cumulative> 
                     <num-frames> 150 </num-frames> 
                     <device> cellular-phone </device> 
                     <gender> female </gender> 
                     <decision> accepted </decision> 
                     <verification-score> 0.75 </verification-score> 
                </cumulative> 
                </voiceprint> 
              </verification-result> 
           </extensions> 
           </result> 
     
  
 11.16.    START-OF-SPEECH 
     
    The START-OF-SPEECH event is returned from the server to the client 
    once the server has detected speech.  This event is always returned 
    by the verification resource when speech has been detected, 
    irrespective of whether or not the recognizer and verification 
    resources share the same session. 
     
     
    Example: 
    S->C:  MRCP/2.0 49 START-OF-SPEECH 543259 IN-PROGRESS 
           Channel-Identifier: 32AECB23433801@speakverify 
  
 11.17.    CLEAR-BUFFER 
     
    The CLEAR-BUFFER method can be used to clear the verification 
    buffer. This buffer holds speech captured during recognition, 
    record, or verification operations that may later be used for 
    verification from the buffer. As noted before, the verification 
    buffer is shared with other input resources, such as recognizers 
    and recorders. Hence, a CLEAR-BUFFER request fails if the 
    verification buffer is in use. This happens when any one of the 
    input resources sharing this buffer has an active read or write 
    operation, such as a RECORD, RECOGNIZE or VERIFY with the 
    ver-buffer-utterance header field set to "true". 
     
    Example: 
    C->S:  MRCP/2.0 49 CLEAR-BUFFER 543260 
           Channel-Identifier: 32AECB23433801@speakverify 
     
    S->C:  MRCP/2.0 49 543260 200 COMPLETE 
           Channel-Identifier: 32AECB23433801@speakverify 
  
 11.18.    GET-INTERMEDIATE-RESULT 
     
    The GET-INTERMEDIATE-RESULT method can be used to poll for 
    intermediate results of a verification request that is in progress. 
    It does not change the state of the resource; it simply collects 
    the verification results gathered so far and returns them in the 
    method response. The response to this method contains only 
    verification results. The method response MUST NOT contain a 
    Completion-Cause header field, as the request is not yet complete. 
    If the resource does not have a verification in progress, the 
    response has a 402 failure code and no result in the body. 
     
    Example: 
    C->S:  MRCP/2.0 49 GET-INTERMEDIATE-RESULT 543260 
           Channel-Identifier: 32AECB23433801@speakverify 
     
    S->C:  MRCP/2.0 49 543260 200 COMPLETE 
           Channel-Identifier: 32AECB23433801@speakverify 
           Content-Type: application/x-nlsml 
           Content-Length: 123 
     
           <?xml version="1.0"?> 
           <result grammar="What-Grammar-URI"> 
           <extensions> 
              <result-type type="VERIFICATION" /> 
              <verification-result> 
                <voiceprint id="marysmith"> 
                <incremental> 
                     <num-frames> 50 </num-frames> 
                     <device> cellular-phone </device> 
                     <gender> female </gender> 
                     <decision> accepted </decision> 
                     <verification-score> 0.85 </verification-score> 
                </incremental> 
                <cumulative> 
                     <num-frames> 150 </num-frames> 
                     <device> cellular-phone </device> 
                     <gender> female </gender> 
                     <decision> accepted </decision> 
                     <verification-score> 0.65 </verification-score> 
                </cumulative> 
                </voiceprint> 
              </verification-result> 
           </extensions> 
           </result> 
       
 12.  Security Considerations 
  
    The MRCPv2 protocol may carry sensitive information such as account 
    numbers and passwords, and may use media for identification and 
    verification purposes. For this reason, it is important that the 
    client have the option of secure communication with the server for 
    both the control messages and the media, though the client is not 
    required to use it. This is achieved by imposing the following 
    requirements on MRCPv2 server implementations. All MRCPv2 servers 
    MUST implement digest authentication (sip:) and SHOULD implement 
    sips: in their SIP implementations. All MRCPv2 servers MUST support 
    TLS for the transport of control messages between the client and 
    server. All MRCPv2 servers MUST support the Secure Real-Time 
    Transport Protocol (SRTP) as an option to send and receive media. 
     
 13.  Examples 
  
 13.1.     Message Flow 
     
    The following is an example of a typical MRCPv2 session of speech 
    synthesis and recognition between a client and a server.   
     

  
    Opening a session to the MRCPv2 server. This exchange does not 
    allocate a resource or set up media. It simply establishes a SIP 
    session with the MRCPv2 server. 
     
    C->S: 
           INVITE sip:mresources@mediaserver.com SIP/2.0 
           Max-Forwards: 6 
           To: MediaServer <sip:mresources@mediaserver.com> 
           From: sarvi <sip:sarvi@cisco.com>;tag=1928301774 
           Call-ID: a84b4c76e66710 
           CSeq: 314159 INVITE 
           Contact: <sip: sarvi@cisco.com> 
           Content-Type: application/sdp 
           Content-Length: 142 
            
           v=0 
           o=sarvi 2890844526 2890842807 IN IP4 126.16.64.4 
           s=SDP Seminar 
           i=A session for processing media 
           c=IN IP4 224.2.17.12/127 
     
    S->C: 
           SIP/2.0 200 OK 
           To: MediaServer <sip:mresources@mediaserver.com> 
           From: sarvi <sip:sarvi@cisco.com>;tag=1928301774 
           Call-ID: a84b4c76e66710 
           CSeq: 314159 INVITE 
           Contact: <sip: sarvi@cisco.com> 
           Content-Type: application/sdp 
           Content-Length: 131 
            
           v=0 
           o=sarvi 2890844526 2890842807 IN IP4 126.16.64.4 
           s=SDP Seminar 
           i=A session for processing media 
           c=IN IP4 224.2.17.12/127 
     
    C->S: 
           ACK sip:mrcp@mediaserver.com SIP/2.0 
           Max-Forwards: 6 
           To: MediaServer <sip:mrcp@mediaserver.com>;tag=a6c85cf 
           From: Sarvi <sip:sarvi@cisco.com>;tag=1928301774 
           Call-ID: a84b4c76e66710 
           CSeq: 314160 ACK 
           Content-Length: 0 
     
    The client requests the server to create a synthesizer resource 
    control channel to do speech synthesis. This also adds a media pipe 
    to carry the generated speech. Note that in this example, the 
    client requests the reuse of an existing MRCPv2 SCTP pipe between 
    the client and the server. 
  
     
    C->S: 
           INVITE sip:mresources@mediaserver.com SIP/2.0 
           Max-Forwards: 6 
           To: MediaServer <sip:mresources@mediaserver.com> 
           From: sarvi <sip:sarvi@cisco.com>;tag=1928301774 
           Call-ID: a84b4c76e66710 
           CSeq: 314161 INVITE 
           Contact: <sip: sarvi@cisco.com> 
           Content-Type: application/sdp 
           Content-Length: 142 
            
           v=0 
           o=sarvi 2890844526 2890842808 IN IP4 126.16.64.4 
           s=SDP Seminar 
           i=A session for processing media 
           c=IN IP4 224.2.17.12/127 
           m=control 9 SCTP application/mrcpv2 
           a=setup:active 
           a=connection:existing 
           a=resource:speechsynth  
           a=cmid:1 
           m=audio 49170 RTP/AVP 0 96 
           a=rtpmap:0 pcmu/8000 
           a=recvonly  
           a=mid:1 
            
     
    S->C: 
           SIP/2.0 200 OK 
           To: MediaServer <sip:mresources@mediaserver.com> 
           From: sarvi <sip:sarvi@cisco.com>;tag=1928301774 
           Call-ID: a84b4c76e66710 
           CSeq: 314161 INVITE 
           Contact: <sip:sarvi@cisco.com> 
           Content-Type: application/sdp 
           Content-Length: 131 
            
           v=0 
           o=sarvi 2890844526 2890842808 IN IP4 126.16.64.4 
           s=SDP Seminar 
           i=A session for processing media 
           c=IN IP4 224.2.17.12/127 
           m=control 32416 SCTP application/mrcpv2 
           a=setup:passive 
           a=connection:existing 
           a=channel:32AECB23433802@speechsynth  
           a=cmid:1 
           m=audio 48260 RTP/AVP 0 
           a=rtpmap:0 pcmu/8000 
           a=sendonly  
  
           a=mid:1 
            
    C->S: 
           ACK sip:mrcp@mediaserver.com SIP/2.0 
           Max-Forwards: 6 
           To: MediaServer <sip:mrcp@mediaserver.com>;tag=a6c85cf 
           From: Sarvi <sip:sarvi@cisco.com>;tag=1928301774 
           Call-ID: a84b4c76e66710 
           CSeq: 314162 ACK 
           Content-Length: 0 
     
    This exchange allocates an additional resource control channel for a 
    recognizer. Since a recognizer would need to receive an audio stream 
    for recognition, this interaction also updates the audio stream to 
    sendrecv making it a 2-way audio stream. 
      
    C->S: 
           INVITE sip:mresources@mediaserver.com SIP/2.0 
           Max-Forwards: 6 
           To: MediaServer <sip:mresources@mediaserver.com> 
           From: sarvi <sip:sarvi@cisco.com>;tag=1928301774 
           Call-ID: a84b4c76e66710 
           CSeq: 314163 INVITE 
           Contact: <sip:sarvi@cisco.com> 
           Content-Type: application/sdp 
           Content-Length: 142 
            
           v=0 
           o=sarvi 2890844526 2890842809 IN IP4 126.16.64.4 
           s=SDP Seminar 
           i=A session for processing media 
           c=IN IP4 224.2.17.12/127 
           m=control 9 SCTP application/mrcpv2 
           a=setup:active 
           a=connection:existing 
           a=resource:speechsynth  
           a=cmid:1 
           m=audio 49170 RTP/AVP 0 96 
           a=rtpmap:0 pcmu/8000 
           a=recvonly  
           a=mid:1 
           m=control 9 SCTP application/mrcpv2 
           a=setup:active 
           a=connection:existing 
           a=resource:speechrecog  
           a=cmid:2 
           m=audio 49180 RTP/AVP 0 96 
           a=rtpmap:0 pcmu/8000 
           a=rtpmap:96 telephone-event/8000 
           a=fmtp:96 0-15 
           a=sendonly  
  
           a=mid:2 
            
     
    S->C: 
           SIP/2.0 200 OK 
           To: MediaServer <sip:mresources@mediaserver.com> 
           From: sarvi <sip:sarvi@cisco.com>;tag=1928301774 
           Call-ID: a84b4c76e66710 
           CSeq: 314163 INVITE 
           Contact: <sip:sarvi@cisco.com> 
           Content-Type: application/sdp 
           Content-Length: 131 
            
           v=0 
           o=sarvi 2890844526 2890842809 IN IP4 126.16.64.4 
           s=SDP Seminar 
           i=A session for processing media 
           c=IN IP4 224.2.17.12/127 
           m=control 32416 SCTP application/mrcpv2 
           a=channel:32AECB23433802@speechsynth  
           a=cmid:1 
           m=audio 48260 RTP/AVP 0 
           a=rtpmap:0 pcmu/8000 
           a=sendonly  
           a=mid:1 
           m=control 32416 SCTP application/mrcpv2 
           a=channel:32AECB23433801@speechrecog  
           a=cmid:2 
           m=audio 48260 RTP/AVP 0 96 
           a=rtpmap:0 pcmu/8000 
           a=rtpmap:96 telephone-event/8000 
           a=fmtp:96 0-15 
           a=recvonly  
           a=mid:2 
     
    C->S: 
           ACK sip:mrcp@mediaserver.com SIP/2.0 
           Max-Forwards: 6 
           To: MediaServer <sip:mrcp@mediaserver.com>;tag=a6c85cf 
           From: Sarvi <sip:sarvi@cisco.com>;tag=1928301774 
           Call-ID: a84b4c76e66710 
           CSeq: 314164 ACK 
           Content-Length: 0 
     
    An MRCPv2 SPEAK request initiates speech.   
     
    C->S:  MRCP/2.0 386 SPEAK 543257 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Kill-On-Barge-In: false 
           Voice-gender: neutral 
           Voice-category: teenager 
  
           Prosody-volume: medium 
           Content-Type: application/synthesis+ssml 
           Content-Length: 104 
            
           <?xml version="1.0"?> 
           <speak> 
           <paragraph> 
                    <sentence>You have 4 new messages.</sentence> 
                    <sentence>The first is from <say-as  
                    type="name">Stephanie Williams</say-as> <mark 
           name="Stephanie"/> 
                    and arrived at <break/> 
                    <say-as type="time">3:45pm</say-as>.</sentence> 
            
                    <sentence>The subject is <prosody 
                    rate="-20%">ski trip</prosody></sentence> 
           </paragraph> 
           </speak> 
     
    S->C:  MRCP/2.0 49 543257 200 IN-PROGRESS  
           Channel-Identifier: 32AECB23433802@speechsynth 
     
    The synthesizer reaches the special marker in the text being 
    spoken and informs the client of the event. 
     
    S->C:  MRCP/2.0 46 SPEECH-MARKER 543257 IN-PROGRESS  
           Channel-Identifier: 32AECB23433802@speechsynth 
           Speech-Marker: Stephanie 
            
    The synthesizer finishes with the SPEAK request. 
     
    S->C:  MRCP/2.0 48 SPEAK-COMPLETE 543257 COMPLETE 
           Channel-Identifier: 32AECB23433802@speechsynth 
     
    The recognizer is issued a request to listen for the customer's 
    choices.  
     
    C->S:  MRCP/2.0 343 RECOGNIZE 543258 
           Channel-Identifier: 32AECB23433801@speechrecog 
           Content-Type: application/srgs+xml 
           Content-Length: 104 
                      
           <?xml version="1.0"?> 
            
           <!-- the default grammar language is US English --> 
           <grammar xml:lang="en-US" version="1.0"> 
            
           <!-- single language attachment to a rule expansion --> 
                <rule id="request"> 
                    Can I speak to 
                    <one-of xml:lang="fr-CA"> 
  
                             <item>Michel Tremblay</item> 
                             <item>Andre Roy</item> 
                    </one-of> 
                </rule> 
            
           </grammar> 
            
    S->C:  MRCP/2.0 49 543258 200 IN-PROGRESS 
           Channel-Identifier: 32AECB23433801@speechrecog 
     
    The client issues the next MRCPv2 SPEAK request. When playing a 
    prompt to the user with kill-on-barge-in and asking for input, it 
    is generally RECOMMENDED that the client issue the RECOGNIZE 
    request ahead of the SPEAK request for optimum performance and 
    user experience. This guarantees that the recognizer is online 
    before the prompt starts playing, so the user's speech is not 
    truncated at the beginning (especially for power users who barge 
    in early). 
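    The ordering above can be sketched from the client's side. This is 
    an illustration only: MrcpClient and its send method are 
    hypothetical stand-ins for whatever transport the client uses for 
    its MRCPv2 control channels, and message serialization is elided.

```python
# Sketch only: MrcpClient is a hypothetical wrapper around the MRCPv2
# control channels; it records outgoing requests so ordering is visible.
class MrcpClient:
    def __init__(self, start_request_id=543258):
        self.sent = []                  # (resource, method) in send order
        self._next_id = start_request_id

    def send(self, resource, method, headers, body=""):
        request_id = self._next_id
        self._next_id += 1
        # A real client would serialize and transmit the message here.
        self.sent.append((resource, method))
        return request_id

def play_prompt_and_listen(client, grammar_xml, ssml):
    # Issue RECOGNIZE first so the recognizer is guaranteed to be online
    # before any audio plays; then SPEAK with kill-on-barge-in enabled.
    recognize_id = client.send("speechrecog", "RECOGNIZE",
                               {"Content-Type": "application/srgs+xml"},
                               grammar_xml)
    speak_id = client.send("speechsynth", "SPEAK",
                           {"Kill-On-Barge-In": "true",
                            "Content-Type": "application/synthesis+ssml"},
                           ssml)
    return recognize_id, speak_id
```

    A client built this way can never play audio to a user whose 
    speech the recognizer is not yet prepared to accept.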
     
    C->S:  MRCP/2.0 289 SPEAK 543259 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Kill-On-Barge-In: true 
           Content-Type: application/synthesis+ssml 
           Content-Length: 104 
            
           <?xml version="1.0"?> 
           <speak> 
           <paragraph> 
                    <sentence>Welcome to ABC corporation.</sentence> 
                    <sentence>Who would you like to talk to?</sentence> 
           </paragraph> 
           </speak> 
     
    S->C:  MRCP/2.0 52 543259 200 IN-PROGRESS 
           Channel-Identifier: 32AECB23433802@speechsynth 
     
    Since the last SPEAK request had Kill-On-Barge-In set to "true", 
    the speech synthesizer is interrupted when the user starts 
    speaking, and the client is notified.  
     
    Since the recognizer and synthesizer resources are part of the 
    same session, they may have cooperated with each other to deliver 
    kill-on-barge-in. Whether or not the synthesizer and recognizer 
    are in the same session, the recognizer MUST generate the 
    START-OF-SPEECH event to the client.  
     
    The client MUST then immediately issue a BARGE-IN-OCCURRED method 
    to the synthesizer resource (if a SPEAK request was active). If 
    kill-on-barge-in was enabled on the current SPEAK request, the 
    synthesizer would then have interrupted it and issued a 
    SPEAK-COMPLETE event to the client.  
     
  
    The completion-cause code differentiates whether this was a normal 
    completion or a kill-on-barge-in interruption.  
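    The event flow above can be sketched as a client-side handler. 
    This is a non-normative sketch: send_to_synth is a hypothetical 
    callable that issues a method on the synthesizer control channel.

```python
def handle_recognizer_event(event_name, headers, active_speak_id,
                            send_to_synth):
    """Sketch of the client's barge-in reaction.

    When START-OF-SPEECH arrives and a SPEAK request is still active,
    relay a BARGE-IN-OCCURRED carrying the same Proxy-Sync-Id so the
    synthesizer can correlate the two messages.
    """
    if event_name == "START-OF-SPEECH" and active_speak_id is not None:
        send_to_synth("BARGE-IN-OCCURRED",
                      {"Proxy-Sync-Id": headers.get("Proxy-Sync-Id")})
        return True    # barge-in relayed to the synthesizer
    return False       # no active SPEAK, nothing to interrupt
```

    Note that the client relays the event without inspecting it: the 
    synthesizer decides, based on its own kill-on-barge-in state, 
    whether to stop the prompt.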
     
    S->C:  MRCP/2.0 49 START-OF-SPEECH 543258 IN-PROGRESS 
           Channel-Identifier: 32AECB23433801@speechrecog 
           Proxy-Sync-Id: 987654321 
            
            
    C->S:  MRCP/2.0 69 BARGE-IN-OCCURRED 543259 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Proxy-Sync-Id: 987654321 
     
    S->C:  MRCP/2.0 72 543259 200 COMPLETE 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Active-Request-Id-List: 543258 
     
    S->C:  MRCP/2.0 73 SPEAK-COMPLETE 543259 COMPLETE 
           Channel-Identifier: 32AECB23433802@speechsynth 
           Completion-Cause: 001 barge-in 
     
    The recognition resource matched the spoken stream to a grammar and 
    generated results. The result of the recognition is returned by the 
    server as part of the RECOGNITION-COMPLETE event. 
     
    S->C:  MRCP/2.0 412 RECOGNITION-COMPLETE 543258 COMPLETE 
           Channel-Identifier: 32AECB23433801@speechrecog 
           Completion-Cause: 000 success  
           Waveform-URI: http://web.media.com/session123/audio.wav 
           Content-Type: application/x-nlsml 
           Content-Length: 104 
            
           <?xml version="1.0"?> 
           <result x-model="http://IdentityModel" 
             xmlns:xf="http://www.w3.org/2000/xforms" 
             grammar="session:request1@form-level.store"> 
               <interpretation> 
                   <xf:instance name="Person"> 
                       <Person> 
                           <Name> Andre Roy </Name> 
                       </Person> 
                   </xf:instance> 
                   <input>   may I speak to Andre Roy </input> 
               </interpretation> 
           </result> 
     
    When the client wants to tear down the whole session and all its 
    resources, it MUST issue a SIP BYE to close the SIP session. This 
    will de-allocate all the control channels and resources allocated 
    under the session. 
     
    C->S: 
           BYE sip:mrcp@mediaserver.com SIP/2.0 
  
           Max-Forwards: 6 
           From: Sarvi <sip:sarvi@cisco.com>;tag=a6c85cf 
           To: MediaServer <sip:mrcp@mediaserver.com>;tag=1928301774 
           Call-ID: a84b4c76e66710 
           CSeq: 231 BYE 
           Content-Length: 0 
     
 13.2.     Recognition Result Examples 
     
   Simple ASR Ambiguity 
     
    System: To which city will you be traveling? 
    User: I want to go to Pittsburgh. 
     
    <result grammar="http://flight"> 
      <interpretation confidence="60"> 
         <instance> 
            <airline> 
               <to_city>Pittsburgh</to_city> 
            </airline> 
         </instance> 
         <input mode="speech"> 
            I want to go to Pittsburgh 
         </input> 
      </interpretation> 
      <interpretation confidence="40"> 
         <instance> 
            <airline> 
               <to_city>Stockholm</to_city> 
            </airline> 
         </instance> 
         <input>I want to go to Stockholm</input> 
      </interpretation> 
    </result> 
     
   Mixed Initiative: 
     
    System: What would you like? 
    User: I would like 2 pizzas, one with pepperoni and cheese, one with 
    sausage and a bottle of coke, to go. 
     
    This representation includes an order object, which in turn 
    contains objects named "food_item", "drink_item" and 
    "delivery_method". The representation assumes there are no 
    ambiguities in the speech or natural language processing. Note 
    that it also assumes some level of intrasentential anaphora 
    resolution, i.e., resolving the two occurrences of "one" as 
    "pizza". 
     
    <result grammar="http://foodorder"> 
      <interpretation confidence="100" > 
         <instance> 
  
          <order> 
            <food_item confidence="100"> 
              <pizza> 
                <ingredients confidence="100"> 
                  pepperoni 
                </ingredients> 
                <ingredients confidence="100"> 
                  cheese 
                </ingredients> 
              </pizza> 
              <pizza> 
                <ingredients>sausage</ingredients> 
              </pizza> 
            </food_item> 
            <drink_item confidence="100"> 
              <size>2-liter</size> 
            </drink_item> 
            <delivery_method>to go</delivery_method> 
          </order> 
        </instance> 
        <input mode="speech">I would like 2 pizzas, 
             one with pepperoni and cheese, one with sausage 
             and a bottle of coke, to go. 
        </input> 
      </interpretation> 
    </result> 
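    A client can walk a result like this with any XML parser. The 
    sketch below uses Python's xml.etree to pull each pizza and its 
    ingredients out of the order instance; the element names simply 
    follow the example above, and the function is illustrative, not 
    part of the protocol.

```python
import xml.etree.ElementTree as ET

def pizzas_from_result(nlsml_text):
    """Return each pizza in the order as a list of its ingredients."""
    root = ET.fromstring(nlsml_text)
    order = root.find("interpretation/instance/order")
    # topping text may carry surrounding whitespace in pretty-printed
    # results, so strip it before returning.
    return [[ing.text.strip() for ing in pizza.findall("ingredients")]
            for pizza in order.find("food_item").findall("pizza")]
```

    Applied to the result above, this yields 
    [['pepperoni', 'cheese'], ['sausage']].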
     
   DTMF Input 
     
    A combination of DTMF input and speech is represented using 
    nested input elements. For example: 
     
    User: My pin is (dtmf 1 2 3 4) 
     
    <input> 
      <input mode="speech" confidence ="1.0" 
         timestamp-start="2000-04-03T0:00:00"  
         timestamp-end="2000-04-03T0:00:01.5">My pin is 
      </input> 
      <input mode="dtmf" confidence ="1.0" 
         timestamp-start="2000-04-03T0:00:01.5"  
         timestamp-end="2000-04-03T0:00:02.0">1 2 3 4 
      </input> 
    </input> 
     
    Note that grammars that recognize mixtures of speech and DTMF are 
    not currently possible in VoiceXML; however, this representation 
    may be needed for other applications of NLSML, and it may be 
    introduced in future versions of VoiceXML. 
     

  
   Interpreting Meta-Dialog and Meta-Task Utterances 
     
    The natural language requirements state that the semantics 
    specification must be capable of representing a number of types of 
    meta-dialog and meta-task utterances (Task-Specific 
    Information/Meta-task Information Requirements 1-8 and Generic 
    Information about the Communication Process Requirements 1-6). 
    This specification is flexible enough that meta utterances can be 
    represented on an application-specific basis without defining 
    specific formats in this specification. 
     
    Here are two examples of how meta-task and meta-dialog utterances 
    might be represented. 
     
    System: What toppings do you want on your pizza? 
    User: What toppings do you have? 
     
    <interpretation grammar="http://toppings"> 
       <instance> 
          <question> 
             <questioned_item>toppings</questioned_item> 
             <questioned_property> 
              availability 
             </questioned_property> 
          </question> 
       </instance> 
       <input mode="speech"> 
         what toppings do you have? 
       </input> 
    </interpretation> 
     
    User: slow down. 
     
    <interpretation grammar="http://generalCommandsGrammar"> 
       <instance> 
        <command> 
           <action>reduce speech rate</action> 
           <doer>system</doer> 
        </command> 
       </instance> 
      <input mode="speech">slow down</input> 
    </interpretation> 
     
   Anaphora and Deixis 
    
    This specification can be used on an application-specific basis to 
    represent utterances that contain unresolved anaphoric and deictic 
    references. Anaphoric references, which include pronouns and 
    definite noun phrases that refer to something that was mentioned in 
    the preceding linguistic context, and deictic references, which 
    refer to something that is present in the non-linguistic context, 
  
    present similar problems in that there may not be sufficient 
    unambiguous linguistic context to determine what their exact role in 
    the interpretation should be. In order to represent unresolved 
    anaphora and deixis using this specification, one strategy would be 
    for the developer to define a more surface-oriented representation 
    that leaves the specific details of the interpretation of the 
    reference open. (This assumes that a later component is responsible 
    for actually resolving the reference.) 
     
    Example (ignoring the issue of representing the input from the 
    pointing gesture): 
     
    System: What do you want to drink? 
    User: I want this (clicks on picture of large root beer.) 
     
    <result> 
       <interpretation> 
          <instance>  
           <doer>I</doer> 
           <action>want</action> 
           <object>this</object> 
          </instance> 
          <input mode="speech">I want this</input> 
       </interpretation> 
    </result> 
     
    Future versions of the W3C Speech Interface Framework may address 
    issues of representing resolved anaphora. 
     
   Distinguishing Individual Items from Sets with One Member 
     
    For programming convenience, it is useful to be able to distinguish 
    between individual items and sets containing one item in the XML 
    representation of semantic results. For example, a pizza order might 
    consist of exactly one pizza, but a pizza might contain zero or more 
    toppings. Since there is no standard way of marking this distinction 
    directly in XML, in the current framework, the developer is free to 
    adopt any conventions that would convey this information in the XML 
    markup. One strategy would be for the developer to wrap the set of 
    items in a grouping element, as in the following example. 
     
    <order> 
       <pizza> 
          <topping-group> 
             <topping>mushrooms</topping> 
          </topping-group> 
       </pizza> 
       <drink>coke</drink> 
    </order> 
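    Under that convention, consuming code can always treat the wrapped 
    items as a list, even when the list happens to have one member. A 
    small non-normative sketch using Python's xml.etree:

```python
import xml.etree.ElementTree as ET

order = ET.fromstring("""<order>
   <pizza>
      <topping-group>
         <topping>mushrooms</topping>
      </topping-group>
   </pizza>
   <drink>coke</drink>
</order>""")

# topping-group wraps zero or more toppings, so toppings is always a
# list; pizza and drink are read as single, required items.
toppings = [t.text for t in
            order.find("pizza/topping-group").findall("topping")]
drink = order.find("drink").text
```

    Here toppings is the one-element list ['mushrooms'], while drink 
    is the single value 'coke'; an order with no toppings would yield 
    an empty list rather than a missing element.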
     

  
    In this example, the programmer can assume that there is supposed to 
    be exactly one pizza and one drink in the order, but the fact that 
    there is only one topping is an accident of this particular pizza 
    order. 
     
    If a data model is used, this distinction can be made in the data 
    model by stating that the value of the "maxOccurs" attribute can be 
    greater than 1. 
     
   Extensibility 
     
    One of the natural language requirements states that the 
    specification must be extensible. The specification supports this 
    requirement through its flexibility, as shown in the discussions 
    of meta utterances and anaphora. NLSML can easily be used in 
    sophisticated systems to convey application-specific information 
    that more basic systems would not make use of, for example, speech 
    acts. Standard representations for items such as dates and times 
    could also be defined. 
     
     
    Normative References 
       
     [1]    Fielding, R., Gettys, J., Mogul, J., Frystyk, H., 
            Masinter, L., Leach, P., and T. Berners-Lee, "Hypertext 
            Transfer Protocol -- HTTP/1.1", RFC 2616, June 1999.  
  
     [2]    Schulzrinne, H., Rao, A., and R. Lanphier, "Real Time 
            Streaming Protocol (RTSP)", RFC 2326, April 1998. 
       
     [3]    Crocker, D. and P. Overell, "Augmented BNF for Syntax 
            Specifications: ABNF", RFC 2234, November 1997. 
     
     [4]    Rosenberg, J., Schulzrinne, H., Camarillo, G., 
            Johnston, A., Peterson, J., Sparks, R., Handley, M., and 
            E. Schooler, "SIP: Session Initiation Protocol", RFC 3261, 
            June 2002. 
     
     [6]    Handley, M. and V. Jacobson, "SDP: Session Description 
            Protocol", RFC 2327, April 1998. 
            
     [7]   World Wide Web Consortium, "Voice Extensible Markup Language 
           (VoiceXML) Version 2.0", W3C Candidate Recommendation, March 
           2004. 
      
     [8]   Crocker, D., "STANDARD FOR THE FORMAT OF ARPA INTERNET TEXT 
           MESSAGES", RFC 822, August 1982. 
      
     [9]   Bradner, S., "Key words for use in RFCs to Indicate 
           Requirement Levels", RFC 2119, March 1997. 
      
  
     [10]  World Wide Web Consortium, "Speech Synthesis Markup Language 
           (SSML) Version 1.0", W3C Candidate Recommendation, September 
           2004. 
      
     [11]  World Wide Web Consortium, "Speech Recognition Grammar 
           Specification Version 1.0", W3C Candidate Recommendation, 
           March 2004. 
      
     [12]  Bradner, S., "The Internet Standards Process -- Revision 
           3", RFC 2026, October 1996. 
      
     [13]  Yergeau, F., "UTF-8, a transformation format of Unicode and 
           ISO 10646", RFC 2044, October 1996. 
      
     [14]  Freed, N. and N. Borenstein, "Multipurpose Internet Mail 
           Extensions (MIME) Part Two: Media Types", RFC 2046, 
           November 1996. 
      
     [15]  Levinson, E., "Content-ID and Message-ID Uniform Resource 
           Locators", RFC 2111, March 1997. 
      
     [16]  Schulzrinne, H. and S. Petrack, "RTP Payload for DTMF 
           Digits, Telephony Tones and Telephony Signals", RFC 2833, 
           May 2000. 
      
     [17]  Alvestrand, H., "Tags for the Identification of 
           Languages", RFC 3066, January 2001. 
      
     [18]  Camarillo, G., Eriksson, G., and J. Holler, "Grouping of 
           Media Lines in the Session Description Protocol (SDP)", 
           RFC 3388, December 2002.  
      
     [19]  Bray, T., et al., "Namespaces in XML", W3C Recommendation, 
           14 January 1999.  
     
     [20]  Yon, D. and G. Camarillo, "Connection-Oriented Media 
           Transport in the Session Description Protocol (SDP)", 
           draft-ietf-mmusic-sdp-comedia-09.txt (work in progress), 
           September 2004. 
  
     [21]  Lennox, J., "Connection-Oriented Media Transport over the 
           Transport Layer Security (TLS) Protocol in the Session 
           Description Protocol (SDP)", 
           draft-ietf-mmusic-comedia-tls-02.txt (work in progress). 
     
  
     
    Appendix 
     
    A.1 ABNF Message Definitions  
     
        LWS    =    [*WSP CRLF] 1*WSP ; linear whitespace 
  
         
        SWS    =    [LWS] ; sep whitespace 
         
        UTF8-NONASCII    =    %xC0-DF 1UTF8-CONT  
                         /    %xE0-EF 2UTF8-CONT 
                         /    %xF0-F7 3UTF8-CONT 
                         /    %xF8-FB 4UTF8-CONT 
                         /    %xFC-FD 5UTF8-CONT 
         
        UTF8-CONT   =    %x80-BF 
        VCHAR       =    %x21-7E 
        param       =    *pchar 
         
        quoted-string    =    SWS DQUOTE *(qdtext / quoted-pair )  
                               DQUOTE 
         
        qdtext      =    LWS / %x21 / %x23-5B / %x5D-7E 
                     /    UTF8-NONASCII 
         
        quoted-pair =    "\" (%x00-09 / %x0B-0C 
                     /    %x0E-7F) 
         
        token       =    1*(alphanum / "-" / "." / "!" / "%" / "*" 
                     / "_" / "+" / "`" / "'" / "~" ) 
         
        reserved    =    ";" / "/" / "?" / ":" / "@" / "&" / "="  
                     / "+" / "$" / "," 
         
        mark        =    "-" / "_" / "." / "!" / "~" / "*" / "'" 
                     /    "(" / ")" 
         
        unreserved  =    alphanum / mark 
         
        pchar       =    unreserved / escaped 
                     /    ":" / "@" / "&" / "=" / "+" / "$" / "," 
         
        alphanum    =    ALPHA / DIGIT 
         
        escaped      =    "%" HEXDIG HEXDIG 
         
        fragment    =    *uric 
         
        uri         =    [ absoluteURI / relativeURI ]  
                          [ "#" fragment ] 
         
        absoluteURI =    scheme ":" ( hier-part / opaque-part ) 
         
        relativeURI =    ( net-path / abs-path / rel-path )  
                          [ "?" query ] 
         
        hier-part   =    ( net-path / abs-path ) [ "?" query ] 
  
         
        net-path    =    "//" authority [ abs-path ] 
         
        abs-path    =    "/" path-segments 
         
        rel-path    =    rel-segment [ abs-path ] 
         
        rel-segment =    1*( unreserved / escaped / ";" / "@"  
                     /    "&" / "=" / "+" / "$" / "," )  
         
        opaque-part =    uric-no-slash *uric 
         
        uric        =    reserved / unreserved / escaped 
         
        uric-no-slash    =    unreserved / escaped / ";" / "?" / ":"  
                          / "@" / "&" / "=" / "+" / "$" / "," 
         
        path-segments    =    segment *( "/" segment ) 
         
        segment      =    *pchar *( ";" param ) 
         
        scheme      =    ALPHA *( ALPHA / DIGIT / "+" / "-" / "." ) 
         
        authority   =    srvr / reg-name 
         
        srvr        =    [ [ userinfo "@" ] hostport ] 
         
        reg-name    =    1*( unreserved / escaped / "$" / "," 
                     /    ";" / ":" / "@" / "&" / "=" / "+" ) 
         
        query       =    *uric 
         
        userinfo    =    ( user ) [ ":" password ] "@" 
         
        user        =    1*( unreserved / escaped  
                     /    user-unreserved ) 
         
        user-unreserved  =    "&" / "=" / "+" / "$" / "," / ";"  
                          /    "?" / "/" 
         
        password    =    *( unreserved / escaped  
                          /    "&" / "=" / "+" / "$" / "," ) 
         
        hostport    =    host [ ":" port ] 
         
        host        =    hostname / IPv4address / IPv6reference 
         
        hostname    =    *( domainlabel "." ) toplabel [ "." ] 
         
        domainlabel =    alphanum / alphanum *( alphanum / "-" ) 
                          alphanum 
  
         
        toplabel    =    ALPHA / ALPHA *( alphanum / "-" ) 
                          alphanum 
         
        IPv4address =    1*3DIGIT "." 1*3DIGIT "." 1*3DIGIT "."  
                          1*3DIGIT 
         
        IPv6reference    =    "[" IPv6address "]" 
         
        IPv6address =    hexpart [ ":" IPv4address ] 
         
        hexpart      =    hexseq / hexseq "::" [ hexseq ] / "::"  
                          [ hexseq ] 
         
        hexseq      =    hex4 *( ":" hex4) 
         
        hex4        =    1*4HEXDIG 
         
        port        =    1*DIGIT 
         
        cmid-attribute   =    "a=cmid:" identification-tag 
         
        identification-tag    =    token 
         
        generic-message  =    start-line  
                               message-header  
                               CRLF  
                               [ message-body ]  
         
        message-body      =    *OCTET 
                  
        start-line  =    request-line / status-line / event-line  
         
        request-line =    mrcp-version SP message-length SP method-name 
                          SP request-id CRLF  
         
        status-line =    mrcp-version SP message-length SP request-id  
                          SP status-code SP request-state CRLF  
         
        event-line  =    mrcp-version SP message-length SP event-name 
                          SP request-id SP request-state CRLF  
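        As an informal sanity check of the three start-line forms 
        above, the sketch below classifies a start-line with regular 
        expressions. The patterns are rough approximations of the 
        ABNF, not a replacement for it: method and event names are 
        reduced to runs of upper-case letters and "-".

```python
import re

_VER = r"MRCP/\d+\.\d+"
_STATE = r"(?:COMPLETE|IN-PROGRESS|PENDING)"

# status-line:  version SP length SP request-id SP status-code SP state
STATUS_LINE = re.compile(rf"{_VER} \d+ \d+ \d+ {_STATE}")
# event-line:   version SP length SP event-name SP request-id SP state
EVENT_LINE = re.compile(rf"{_VER} \d+ [A-Z][A-Z-]* \d+ {_STATE}")
# request-line: version SP length SP method-name SP request-id
REQUEST_LINE = re.compile(rf"{_VER} \d+ [A-Z][A-Z-]* \d+")

def classify_start_line(line):
    line = line.rstrip("\r\n ")
    # Check the longer forms first; a request-line is a prefix of an
    # event-line, so full matches keep the three cases disjoint.
    if STATUS_LINE.fullmatch(line):
        return "status-line"
    if EVENT_LINE.fullmatch(line):
        return "event-line"
    if REQUEST_LINE.fullmatch(line):
        return "request-line"
    raise ValueError(f"not an MRCPv2 start-line: {line!r}")
```

        Fed the start-lines from the examples earlier in this 
        document, the function labels "MRCP/2.0 386 SPEAK 543257" a 
        request-line, "MRCP/2.0 49 543258 200 IN-PROGRESS" a 
        status-line, and "MRCP/2.0 48 SPEAK-COMPLETE 543257 COMPLETE" 
        an event-line.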
         
        method-name =    generic-method  
                     /    synthesizer-method 
                     /    recorder-method 
                     /    recognizer-method 
                     /    verifier-method 
                     /    extension-method 
         
        extension-method =    1*(ALPHA / "-") 
         
  
        generic-method   =    "SET-PARAMS" 
                          /    "GET-PARAMS" 
         
        request-state    =    "COMPLETE"  
                          /    "IN-PROGRESS"         
                          /    "PENDING"  
         
        event-name       =    synthesizer-event 
                          /    recognizer-event 
                          /    recorder-event 
                          /    verifier-event 
                          /    extension-event 
         
        extension-event  =    1*(ALPHA / "-") 
               
        message-header   =    1*(generic-header / resource-header)  
         
        resource-header  =    recognizer-header 
                          /    synthesizer-header 
                          /    recorder-header 
                          /    verifier-header      
                          /    extension-header 
         
        extension-header =    1*(alphanum) CRLF 
         
        generic-header   =    channel-identifier 
                          /    active-request-id-list  
                          /    proxy-sync-id  
                          /    content-id  
                          /    content-type  
                          /    content-length  
                          /    content-base  
                          /    content-location  
                          /    content-encoding  
                          /    cache-control  
                          /    logging-tag    
                          /    vendor-specific 
                                     
        ; -- content-id is as defined in RFC 2111, RFC 2046 and RFC 822 
         
        mrcp-version      =    "MRCP" "/" 1*DIGIT "." 1*DIGIT  
         
        request-id       =    1*DIGIT  
         
        status-code      =    1*DIGIT 
         
        channel-identifier    =    "Channel-Identifier" ":"  
                                    channel-id CRLF 
         
        channel-id       =    1*HEXDIG "@" 1*VCHAR 
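A sketch validator for the channel-id production above (`1*HEXDIG "@" 1*VCHAR`); the example value is invented, and `\S+` is only an approximation of `1*VCHAR`.

```python
import re

# channel-id = 1*HEXDIG "@" 1*VCHAR
# \S+ approximates 1*VCHAR (VCHAR is %x21-7E); illustrative only.
CHANNEL_ID = re.compile(r"^([0-9A-Fa-f]+)@(\S+)$")

def split_channel_id(value):
    m = CHANNEL_ID.match(value)
    if m is None:
        raise ValueError("malformed channel-id")
    # (hexadecimal session part, resource-type part)
    return m.group(1), m.group(2)
```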
         
  

        active-request-id-list =    "Active-Request-Id-List" ":"   
                                    request-id *("," request-id) CRLF  
         
        proxy-sync-id    =    "Proxy-Sync-Id" ":" 1*VCHAR CRLF     
         
        content-length   =    "Content-Length" ":" 1*DIGIT CRLF 
         
        content-base      =    "Content-Base" ":" absoluteURI CRLF 
         
        content-type      =    "Content-Type" ":" media-type CRLF 
         
        media-type       =    type "/" subtype *( ";" parameter ) 
         
        type        =    token 
         
        subtype      =    token 
         
        parameter   =    attribute "=" value 
         
        attribute   =    token 
         
        value       =    token / quoted-string 
                  
        content-encoding =    "Content-Encoding" ":"  
                               *WSP content-coding  
                               *(*WSP "," *WSP content-coding *WSP ) 
                               CRLF 
         
        content-coding   =    token 
         
         
        content-location =    "Content-Location" ":"  
                               ( absoluteURI / relativeURI )  CRLF 
         
        cache-control    =    "Cache-Control" ":"  
                               [*WSP cache-directive 
                               *( *WSP "," *WSP cache-directive *WSP )] 
                               CRLF 
         
        cache-directive  =    "max-age" "=" delta-seconds      
                          /    "max-stale" "=" [ delta-seconds ]  
                          /    "min-fresh" "=" delta-seconds   
         
        logging-tag      =    "Logging-Tag" ":" 1*VCHAR CRLF  
         
        vendor-specific  =    "Vendor-Specific-Parameters" ":"  
                               [vendor-specific-av-pair   
                               *(";" vendor-specific-av-pair)] CRLF   
         
        vendor-specific-av-pair    =    vendor-av-pair-name "="   
                                         vendor-av-pair-value  
  

         
        vendor-av-pair-name   =    1*VCHAR 
         
        vendor-av-pair-value  =    1*VCHAR 
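The av-pair productions above can be exercised with a small splitter. The parameter names in the test value are invented for illustration.

```python
# Sketch: split a Vendor-Specific-Parameters value such as
# "com.example.rate=fast;com.example.pitch=high" (invented names)
# into its av-pairs, per the productions above. Illustrative only.
def parse_vendor_params(value):
    pairs = {}
    for item in value.split(";"):
        name, _, val = item.partition("=")
        if not name or not val:
            raise ValueError("malformed vendor-specific-av-pair")
        pairs[name] = val
    return pairs
```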
         
        set-cookie  =    "Set-Cookie:" cookies CRLF 
         
        cookies      =    cookie *("," *LWS cookie) 
         
        cookie      =    NAME "=" VALUE *(";" cookie-av) 
         
        NAME        =    attribute 
         
        VALUE       =    value 
         
        cookie-av   =    "Comment" "=" value 
                     /    "Domain" "=" value 
                     /    "Max-Age" "=" value 
                     /    "Path" "=" value 
                     /    "Secure" 
                     /    "Version" "=" 1*DIGIT 
                     /    "Age" "=" delta-seconds 
                                   
        set-cookie2 =    "Set-Cookie2:" cookies2 CRLF 
         
        cookies2    =    cookie2 *("," *LWS cookie2) 
         
        cookie2      =    NAME "=" VALUE *(";" cookie-av2) 
         
         
        cookie-av2  =    "Comment" "=" value 
                     /    "CommentURL" "=" <"> http_URL <"> 
                     /    "Discard" 
                     /    "Domain" "=" value 
                     /    "Max-Age" "=" value 
                     /    "Path" "=" value 
                     /    "Port" [ "=" <"> portlist <"> ] 
                     /    "Secure" 
                     /    "Version" "=" 1*DIGIT 
                     /    "Age" "=" delta-seconds 
         
        portlist    =    portnum *("," *LWS portnum) 
         
        portnum      =    1*DIGIT 
         
        ; Synthesizer ABNF 
         
        synthesizer-method    =    "SPEAK"  
                               /    "STOP"  
                               /    "PAUSE"  
                               /    "RESUME"  
                               /    "BARGE-IN-OCCURRED"  
                               /    "CONTROL"  
         
        synthesizer-event      =    "SPEECH-MARKER"  
                               /    "SPEAK-COMPLETE"  
         
        synthesizer-header    =    jump-size        
                               /    kill-on-barge-in   
                               /    speaker-profile    
                               /    completion-cause 
                               /    completion-reason   
                               /    voice-parameter    
                               /    prosody-parameter    
                               /    speech-marker      
                               /    speech-language    
                               /    fetch-hint         
                               /    audio-fetch-hint   
                               /    fetch-timeout      
                               /    failed-uri         
                               /    failed-uri-cause   
                               /    speak-restart      
                               /    speak-length 
                               /    load-lexicon 
                               /    lexicon-search-order       
         
         
        jump-size   =    "Jump-Size" ":" speech-length-value CRLF  
         
        speech-length-value   =    numeric-speech-length  
                               /    text-speech-length  
         
        text-speech-length    =    1*ALPHA SP "Tag"  
                                        
        numeric-speech-length =    ("+" / "-") 1*DIGIT SP   
                                    numeric-speech-unit 
          
        numeric-speech-unit   =    "Second"  
                               /    "Word"  
                               /    "Sentence"  
                               /    "Paragraph"  
         
        delta-seconds         =    1*DIGIT      
         
        kill-on-barge-in      =    "Kill-On-Barge-In" ":" boolean-value  
                                    CRLF  
         
        boolean-value         =    "true" / "false"  
         
  

        speaker-profile       =    "Speaker-Profile" ":" absoluteURI  
                                    CRLF  
         
        completion-cause      =    "Completion-Cause" ":" 1*DIGIT SP  
                                    1*VCHAR CRLF  
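A sketch splitting a Completion-Cause header value into its numeric code and cause name, per the production above (`1*DIGIT SP 1*VCHAR`). The example value is invented for illustration.

```python
# Completion-Cause value is "1*DIGIT SP 1*VCHAR", e.g. "000 normal".
def parse_completion_cause(value):
    code, _, name = value.partition(" ")
    if not code.isdigit() or not name:
        raise ValueError("invalid Completion-Cause value")
    return int(code), name
```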
         
        completion-reason      =    "Completion-Reason" ":"  
                                    quoted-string CRLF 
         
        voice-parameter       =    "Voice-" voice-param-name ":"  
                                    [voice-param-value] CRLF  
         
        voice-param-name      =    1*VCHAR 
         
        voice-param-value      =    1*VCHAR 
         
        prosody-parameter      =    "Prosody-" prosody-param-name ":"  
                                    [prosody-param-value] CRLF  
         
        prosody-param-name    =    1*VCHAR 
         
        prosody-param-value   =    1*VCHAR 
         
        timestamp        =    "Timestamp" "=" time-stamp-value 
         
        speech-marker    =    "Speech-Marker" ":" 1*VCHAR  
                               [";" timestamp ] CRLF 
         
        speech-language  =    "Speech-Language" ":" [1*VCHAR] CRLF  
         
        fetch-hint       =    "Fetch-Hint" ":" [1*ALPHA] CRLF  
         
        audio-fetch-hint =    "Audio-Fetch-Hint" ":" [1*ALPHA] CRLF  
         
        fetch-timeout    =    "Fetch-Timeout" ":" [1*DIGIT] CRLF  
         
        failed-uri       =    "Failed-URI" ":" absoluteURI CRLF  
         
        failed-uri-cause =    "Failed-URI-Cause" ":" 1*ALPHANUM CRLF  
         
        speak-restart    =    "Speak-Restart" ":" boolean-value CRLF  
         
        speak-length      =    "Speak-Length" ":" speech-length-value  
                               CRLF  
        ; speech-length-value and its subrules (text-speech-length, 
        ; numeric-speech-length, numeric-speech-unit) are defined above 
         
        load-lexicon          =    "Load-Lexicon" ":" boolean-value CRLF 
         
        lexicon-search-order  =    "Lexicon-Search-Order" ":" absoluteURI 
                                    *(";" absoluteURI) CRLF 
         
        ; Recognizer ABNF  
         
        recognizer-method      =    recog-only-method 
                               /    enrollment-method 
         
        recog-only-method      =    "DEFINE-GRAMMAR"  
                               /    "RECOGNIZE"  
                               /    "INTERPRET" 
                               /    "GET-RESULT"  
                               /    "START-INPUT-TIMERS"  
                               /    "STOP" 
         
        enrollment-method      =    "START-PHRASE-ENROLLMENT"  
                               /    "ENROLLMENT-ROLLBACK" 
                               /    "END-PHRASE-ENROLLMENT" 
                               /    "MODIFY-PHRASE" 
                               /    "DELETE-PHRASE" 
         
        recognizer-event      =    "START-OF-SPEECH" 
                               /    "RECOGNITION-COMPLETE" 
                               /    "INTERPRETATION-COMPLETE" 
         
        recognizer-header      =    recog-only-header 
                               /    enrollment-header 
         
         
        recog-only-header      =    confidence-threshold      
                               /    sensitivity-level         
                               /    speed-vs-accuracy         
                               /    n-best-list-length        
                               /    no-input-timeout          
                               /    recognition-timeout       
                               /    waveform-uri   
                               /    input-waveform-uri            
                               /    completion-cause          
                               /    completion-reason 
                               /    recognizer-context-block  
                               /    start-input-timers  
                               /    speech-complete-timeout   
                               /    speech-incomplete-timeout  
                               /    dtmf-interdigit-timeout   
                               /    dtmf-term-timeout         
                               /    dtmf-term-char            
                               /    fetch-timeout             
                               /    failed-uri                
                               /    failed-uri-cause          
                               /    save-waveform             
                               /    new-audio-channel 
                               /    speech-language         
                               /    ver-buffer-utterance 
                               /    recognition-mode 
                               /    cancel-if-queue 
                               /    hotword-max-duration 
                               /    hotword-min-duration 
                               /    interpret-text 
         
        enrollment-header      =    num-min-consistent-pronunciations 
                               /    consistency-threshold   
                               /    clash-threshold         
                               /    personal-grammar-uri  
                               /    phrase-id               
                               /    phrase-nl               
                               /    weight                  
                               /    save-best-waveform      
                               /    new-phrase-id           
                               /    confusable-phrases-uri  
                               /    abort-phrase-enrollment 
         
        confidence-threshold  =    "Confidence-Threshold" ":"  
                                    [1*DIGIT] CRLF  
         
        sensitivity-level      =    "Sensitivity-Level" ":" [1*DIGIT]  
                                    CRLF  
         
        speed-vs-accuracy      =    "Speed-Vs-Accuracy" ":" [1*DIGIT]  
                                    CRLF  
         
        n-best-list-length    =    "N-Best-List-Length" ":" [1*DIGIT]  
                                    CRLF  
         
        no-input-timeout      =    "No-Input-Timeout" ":" [1*DIGIT]  
                                    CRLF  
         
        recognition-timeout   =    "Recognition-Timeout" ":" [1*DIGIT] 
                                    CRLF  
         
        waveform-uri           =    "Waveform-URI" ":" absoluteURI CRLF  
         
        completion-cause      =    "Completion-Cause" ":" 1*DIGIT SP  
                                    1*VCHAR CRLF  
         
        recognizer-context-block   =    "Recognizer-Context-Block" ":"  
                                    [1*VCHAR] CRLF  
         
        start-input-timers    =    "Start-Input-Timers" ":"   
                                    boolean-value CRLF  
          
        speech-complete-timeout    =    "Speech-Complete-Timeout" ":"   
                                         [1*DIGIT] CRLF  
         
        speech-incomplete-timeout  =    "Speech-Incomplete-Timeout" ":"   
                                        [1*DIGIT] CRLF  
         
        dtmf-interdigit-timeout    =    "DTMF-Interdigit-Timeout" ":"   
                                        [1*DIGIT] CRLF  
         
        dtmf-term-timeout      =    "DTMF-Term-Timeout" ":" [1*DIGIT]  
                                    CRLF  
         
        dtmf-term-char   =    "DTMF-Term-Char" ":" [VCHAR] CRLF  
         
        fetch-timeout    =    "Fetch-Timeout" ":" [1*DIGIT] CRLF  
         
        save-waveform    =    "Save-Waveform" ":" [boolean-value] CRLF  
         
        new-audio-channel =    "New-Audio-Channel" ":"  
                               boolean-value CRLF 
         
        recognition-mode =    "Recognition-Mode" ":" 1*ALPHA CRLF 
         
        cancel-if-queue  =    "Cancel-If-Queue" ":" boolean-value CRLF 
         
        hotword-max-duration  =    "Hotword-Max-Duration" ":"  
                                    1*DIGIT CRLF 
         
        hotword-min-duration  =    "Hotword-Min-Duration" ":"  
                                    1*DIGIT CRLF 
         
         
        num-min-consistent-pronunciations    =  
                "Num-Min-Consistent-Pronunciations" ":" 1*DIGIT CRLF  
         
         
        consistency-threshold =    "Consistency-Threshold" ":" 1*DIGIT  
                                    CRLF 
          
        clash-threshold       =    "Clash-Threshold" ":" 1*DIGIT CRLF 
         
        personal-grammar-uri  =    "Personal-Grammar-URI" ":"  
                                    absoluteURI CRLF 
         
  

        phrase-id        =    "Phrase-ID" ":" 1*VCHAR CRLF 
         
        phrase-nl        =    "Phrase-NL" ":" 1*VCHAR CRLF 
         
        weight           =    "Weight" ":" WEIGHT CRLF 
         
        save-best-waveform    =    "Save-Best-Waveform" ":"  
                                    boolean-value CRLF 
         
        new-phrase-id    =    "New-Phrase-ID" ":" 1*VCHAR CRLF 
         
        confusable-phrases-uri =    "Confusable-Phrases-URI" ":"  
                                    absoluteURI CRLF 
         
        abort-phrase-enrollment    =    "Abort-Phrase-Enrollment" ":"  
                                         boolean-value CRLF 
         
         
        ; Verifier ABNF 
         
        verifier-method  =    "START-SESSION" 
                          /    "END-SESSION" 
                          /    "QUERY-VOICEPRINT" 
                          /    "DELETE-VOICEPRINT" 
                          /    "VERIFY" 
                          /    "VERIFY-FROM-BUFFER" 
                          /    "VERIFY-ROLLBACK" 
                          /    "STOP" 
                          /    "START-INPUT-TIMERS" 
         
         
        verifier-event   =    "VERIFICATION-COMPLETE" 
                          /    "START-OF-SPEECH" 
         
         
        verifier-header  =    repository-uri  
                          /    voiceprint-identifier 
                          /    verification-mode  
                          /    adapt-model  
                          /    abort-model  
                          /    security-level          
                          /    num-min-verification-phrases 
                          /    num-max-verification-phrases 
                          /    no-input-timeout            
                          /    save-waveform               
                          /    waveform-uri                
                          /    voiceprint-exists           
                          /    ver-buffer-utterance     
                          /    input-waveform-uri         
                          /    completion-cause            
                          /    completion-reason 
                          /    speech-complete-timeout           
                          /    new-audio-channel 
                          /    abort-verification 
         
         
        repository-uri   =    "Repository-URI" ":" absoluteURI CRLF 
         
        voiceprint-identifier =    "Voiceprint-Identifier" ":"  
                                    1*VCHAR "." 3VCHAR  
                                    [";" 1*VCHAR "." 3VCHAR] CRLF 
         
        verification-mode =    "Verification-Mode" ":"  
                               verification-mode-string CRLF 
         
        verification-mode-string   =    "train" / "verify" 
         
        adapt-model      =    "Adapt-Model" ":" boolean-value CRLF 
         
        abort-model      =    "Abort-Model" ":" boolean-value CRLF 
         
        security-level   =    "Security-Level" ":"  
                               security-level-string CRLF 
         
        security-level-string =    "high"  
                               /    "medium-high"  
                               /    "medium"  
                               /    "medium-low"  
                               /    "low" 
         
        num-min-verification-phrases =  "Num-Min-Verification-Phrases"  
                                         ":" 1*DIGIT CRLF 
         
        num-max-verification-phrases =  "Num-Max-Verification-Phrases"  
                                         ":" 1*DIGIT CRLF 
              
        no-input-timeout =    "No-Input-Timeout" ":" [1*DIGIT] CRLF 
         
        save-waveform    =    "Save-Waveform" ":"  
                               boolean-value CRLF  
         
        waveform-uri      =    "Waveform-URI" ":" absoluteURI CRLF 
         
        voiceprint-exists =    "Voiceprint-Exists" ":"  
                               boolean-value CRLF 
         
        ver-buffer-utterance  =    "Ver-Buffer-Utterance" ":"  
                               boolean-value CRLF  
         
        input-waveform-uri    =    "Input-Waveform-URI" ":"  
                                    absoluteURI CRLF 
         
        completion-cause      =    "Completion-Cause" ":" 1*DIGIT SP  
                                    1*VCHAR CRLF  
         
        abort-verification    =    "Abort-Verification" ":"  
                                    boolean-value CRLF  
         
         
        ; Recorder ABNF 
         
        recorder-method       =    "RECORD" 
                               /    "STOP" 
         
         
         
        recorder-event        =    "START-OF-SPEECH" 
                               /    "RECORD-COMPLETE" 
  
         
        recorder-header       =    sensitivity-level 
                               /    no-input-timeout 
                               /    completion-cause 
                               /    completion-reason 
                               /    failed-uri 
                               /    failed-uri-cause 
                               /    record-uri 
                               /    media-type 
                               /    max-time 
                               /    final-silence 
                               /    capture-on-speech 
                               /    new-audio-channel 
         
         
        sensitivity-level      =    "Sensitivity-Level" ":" [1*DIGIT]  
                                    CRLF 
         
        no-input-timeout      =    "No-Input-Timeout" ":" [1*DIGIT]  
                                    CRLF 
         
        completion-cause      =    "Completion-Cause" ":" 1*DIGIT SP 
                                    1*VCHAR CRLF 
         
        failed-uri       =    "Failed-URI" ":" absoluteURI CRLF 
         
        failed-uri-cause =    "Failed-URI-Cause" ":" 1*ALPHANUM CRLF 
         
        record-uri       =    "Record-URI" ":" absoluteURI CRLF 
         
        media-type       =    "Media-Type" ":" type "/" subtype 
                               *( ";" parameter ) CRLF 
         
        max-time         =    "Max-Time" ":" 1*DIGIT CRLF 
         
        final-silence    =    "Final-Silence" ":" 1*DIGIT CRLF 
  

         
        capture-on-speech =    "Capture-On-Speech" ":"  
                               1*DIGIT CRLF 
         
    A.2 XML Schema and DTD 

    
    A.2.1 Recognition Results 
     
    NLSML Schema Definition 
     
    <?xml version="1.0" encoding="UTF-8"?> 
    <xs:schema  xmlns:xs="http://www.w3.org/2001/XMLSchema"  
                targetNamespace="http://www.ietf.org/xml/schema/mrcp2"  
                xmlns="http://www.ietf.org/xml/schema/mrcp2"  
                elementFormDefault="qualified"  
                attributeFormDefault="unqualified" > 
      <xs:element name="result"> 
      <xs:annotation> 
      <xs:documentation> Natural Language Semantic Markup Schema  
      </xs:documentation> 
      </xs:annotation> 
      <xs:complexType> 
      <xs:sequence> 
           <xs:element name="interpretation" maxOccurs="unbounded"> 
           <xs:complexType> 
           <xs:sequence> 
                <xs:element name="instance" minOccurs="0"> 
                <xs:complexType> 
                <xs:sequence> 
                     <xs:any/> 
                </xs:sequence> 
                </xs:complexType> 
                </xs:element> 
                <xs:element name="input"> 
                <xs:complexType mixed="true"> 
                <xs:choice> 
                     <xs:element name="noinput" minOccurs="0"/> 
                     <xs:element name="nomatch" minOccurs="0"/> 
                     <xs:element name="input" minOccurs="0"/> 
                </xs:choice> 
                <xs:attribute  name="confidence"  
                               type="confidenceinfo"  
                               default="1.0"/> 
                <xs:attribute  name="timestamp-start"  
                               type="xs:string"/> 
                <xs:attribute  name="timestamp-end"  
                               type="xs:string"/> 
                </xs:complexType> 
                </xs:element> 
           </xs:sequence> 
           <xs:attribute  name="confidence" type="confidenceinfo" 
                          default="1.0"/> 
           <xs:attribute  name="grammar" type="xs:anyURI" 
                          use="optional"/> 
           <xs:attribute  name="x-model" type="xs:anyURI" 
                          use="optional"/> 
  

           </xs:complexType> 
           </xs:element> 
      </xs:sequence> 
      <xs:attribute  name="grammar" type="xs:anyURI"  
                     use="optional"/> 
      <xs:attribute  name="x-model" type="xs:anyURI"  
                     use="optional"/> 
      </xs:complexType> 
      </xs:element> 
     
      <xs:simpleType name="confidenceinfo"> 
      <xs:restriction base="xs:float"> 
           <xs:minInclusive value="0.0"/> 
           <xs:maxInclusive value="1.0"/> 
      </xs:restriction> 
      </xs:simpleType> 
    </xs:schema> 
     
    NLSML Document Type Definition 
     
           <!--      NLSML Results DTD 
           --> 
     
           <!ELEMENT result (interpretation*)> 
           <!ATTLIST result 
            grammar CDATA #IMPLIED 
            x-model CDATA #IMPLIED 
           > 
           <!ELEMENT interpretation (instance,input?)> 
           <!ATTLIST interpretation 
            confidence CDATA "1.0" 
            grammar CDATA #IMPLIED 
            x-model CDATA #IMPLIED 
           > 
           <!ELEMENT input (#PCDATA|noinput|nomatch|input)*> 
           <!ATTLIST input 
            mode (dtmf | speech) "speech" 
            timestamp-start CDATA #IMPLIED 
            timestamp-end CDATA #IMPLIED 
            confidence CDATA "1.0" 
           > 
           <!ELEMENT nomatch (#PCDATA)*> 
           <!ELEMENT noinput EMPTY> 
           <!ELEMENT instance ANY> 
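A conforming NLSML result can be produced with any XML tooling; this sketch builds a minimal document against the schema above. The grammar URI and field values are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Build a minimal NLSML <result> matching the schema/DTD above.
# The grammar URI and all field values below are invented.
result = ET.Element("result", grammar="session:field3@form-level.store")
interp = ET.SubElement(result, "interpretation", confidence="0.9")
# Per the DTD, <instance> precedes the optional <input>
instance = ET.SubElement(interp, "instance")
ET.SubElement(instance, "city").text = "Boston"
inp = ET.SubElement(interp, "input", mode="speech")
inp.text = "fly to Boston"
nlsml = ET.tostring(result, encoding="unicode")
```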
     
    A.2.2 Enrollment Results 
     
    Enrollment Results Schema Definition 
      <!-- MRCP Enrollment Schema 
           (See http://www.oasis-open.org/committees/relax-ng/spec.html) 
      --> 
  

       
           <element name="enrollment-result" 
                    datatypeLibrary="http://www.w3.org/2001/XMLSchema-
           datatypes" 
                    ns="" xmlns="http://relaxng.org/ns/structure/1.0"> 
             <interleave> 
               <element name="num-clashes"> 
                 <data type="nonNegativeInteger"/> 
               </element> 
               <element name="num-good-repetitions"> 
                 <data type="nonNegativeInteger"/> 
               </element> 
               <element name="num-repetitions-still-needed"> 
                 <data type="nonNegativeInteger"/> 
               </element> 
               <element name="consistency-status"> 
                 <choice> 
                   <value>CONSISTENT</value> 
                   <value>INCONSISTENT</value> 
                   <value>UNDECIDED</value> 
                 </choice> 
               </element> 
               <optional> 
                 <element name="clash-phrase-ids"> 
                   <oneOrMore> 
                     <element name="item"> 
                       <data type="token"/> 
                     </element> 
                   </oneOrMore> 
                 </element> 
               </optional> 
               <optional> 
                 <element name="transcriptions"> 
                   <oneOrMore> 
                     <element name="item"> 
                       <text/> 
                     </element> 
                   </oneOrMore> 
                 </element> 
               </optional> 
               <optional> 
                 <element name="confusable-phrases"> 
                   <oneOrMore> 
                     <element name="item"> 
                       <text/> 
                     </element> 
                   </oneOrMore> 
                 </element> 
               </optional> 
             </interleave>   
           </element> 
  

  
   Enrollment Results Document Type Definition 
  
           <!--      MRCP Enrollment Results DTD 
           --> 
           <!ELEMENT enrollment-result (num-clashes,  
                     num-good-repetitions,num-repetitions-still-needed, 
                     consistency-status, clash-phrase-ids?, 
                     transcriptions?, confusable-phrases?)> 
           <!ELEMENT num-clashes (#PCDATA)> 
           <!ELEMENT num-good-repetitions (#PCDATA)> 
           <!ELEMENT num-repetitions-still-needed (#PCDATA)> 
           <!ELEMENT consistency-status (#PCDATA)> 
           <!ELEMENT clash-phrase-ids (item)> 
           <!ELEMENT transcriptions (item)> 
           <!ELEMENT confusable-phrases (item)> 
           <!ELEMENT item (#PCDATA)> 
            
   A.2.3 Verification Results 
    
   Verification Results Schema Definition 
     
      <!-- MRCP Verification Results Schema  
           (See http://www.oasis-open.org/committees/relax-ng/spec.html) 
       --> 
            
            <grammar datatypeLibrary= 
                         "http://www.w3.org/2001/XMLSchema-datatypes" 
                     ns="" xmlns="http://relaxng.org/ns/structure/1.0"> 
            
             <start> 
               <element name="verification-result"> 
                 <element name="num-frames"> 
                   <ref name="num-framesContent"/> 
                 </element> 
                 <element name="voiceprint"> 
                   <ref name="firstVoiceprintContent"/> 
                 </element> 
                 <zeroOrMore> 
                   <element name="voiceprint"> 
                     <ref name="restVoiceprintContent"/> 
                   </element> 
                 </zeroOrMore> 
               </element> 
             </start> 
            
             <define name="firstVoiceprintContent"> 
               <attribute name="id"> 
                 <data type="string"/> 
               </attribute> 
               <interleave> 
  
                 <optional> 
                   <element name="adapted"> 
                     <data type="boolean"/> 
                   </element> 
                   <element name="needmoredata"> 
                     <ref name="needmoredataContent"/> 
                   </element> 
                 </optional> 
                 <element name="incremental"> 
                   <ref name="firstCommonContent"/> 
                 </element> 
                 <element name="cumulative"> 
                   <ref name="firstCommonContent"/> 
                 </element> 
               </interleave> 
             </define> 
            
             <define name="restVoiceprintContent"> 
               <attribute name="id"> 
                 <data type="string"/> 
               </attribute> 
               <interleave> 
                 <optional> 
                   <element name="incremental"> 
                     <ref name="restCommonContent"/> 
                   </element> 
                 </optional> 
                 <element name="cumulative"> 
                   <ref name="restCommonContent"/> 
                 </element> 
               </interleave> 
             </define> 
            
             <define name="firstCommonContent"> 
               <interleave> 
                 <choice> 
                   <element name="decision"> 
                     <ref name="decisionContent"/> 
                   </element> 
                 </choice> 
                 <element name="device"> 
                   <ref name="deviceContent"/> 
                 </element> 
                 <element name="gender"> 
                   <ref name="genderContent"/> 
                 </element> 
                 <zeroOrMore> 
                   <element name="verification-score"> 
                     <ref name="verification-scoreContent"/> 
                   </element> 
                 </zeroOrMore> 
  
               </interleave> 
             </define> 
            
             <define name="restCommonContent"> 
               <interleave> 
                 <optional> 
                   <element name="decision"> 
                     <ref name="decisionContent"/> 
                   </element> 
                 </optional> 
                 <optional> 
                   <element name="utterance-length"> 
                     <ref name="utterance-lengthContent"/> 
                   </element> 
                 </optional> 
                 <optional> 
                   <element name="device"> 
                     <ref name="deviceContent"/> 
                   </element> 
                 </optional> 
                 <optional> 
                   <element name="gender"> 
                     <ref name="genderContent"/> 
                   </element> 
                 </optional> 
                 <zeroOrMore> 
                   <element name="verification-score"> 
                     <ref name="verification-scoreContent"/> 
                   </element> 
                 </zeroOrMore> 
                </interleave> 
             </define> 
            
             <define name="decisionContent"> 
               <choice> 
                 <value>accepted</value> 
                 <value>rejected</value> 
                 <value>undecided</value> 
               </choice> 
             </define> 
            
             <define name="needmoredataContent"> 
               <data type="boolean"/> 
             </define> 
            
             <define name="utterance-lengthContent"> 
               <data type="nonNegativeInteger"/> 
             </define> 
            
             <define name="deviceContent"> 
               <choice> 
  
                 <value>cellular-phone</value> 
                 <value>electret-phone</value> 
                 <value>carbon-button-phone</value> 
                 <value>unknown</value> 
               </choice> 
             </define> 
            
             <define name="genderContent"> 
               <choice> 
                 <value>male</value> 
                 <value>female</value> 
                 <value>unknown</value> 
               </choice> 
             </define> 
            
             <define name="verification-scoreContent"> 
               <data type="float"> 
                 <param name="minInclusive">0</param> 
                 <param name="maxInclusive">1</param> 
               </data> 
             </define> 
            
           </grammar> 
  
   Verification Results Document Type Definition 
           <!--      MRCP Verification Results DTD 
           --> 
            
           <!ELEMENT verification-result (voiceprint+)> 
           <!ELEMENT voiceprint (adapted?, incremental?, cumulative)> 
           <!ATTLIST voiceprint id CDATA #REQUIRED> 
           <!ELEMENT incremental ((decision | needmoredata)?,  
                     num-frames?, device?, gender?, verification-score)> 
           <!ELEMENT cumulative  ((decision | needmoredata)?,  
                     num-frames?, device?, gender?, verification-score)> 
           <!ELEMENT decision (#PCDATA)> 
           <!ELEMENT needmoredata (#PCDATA)> 
           <!ELEMENT num-frames (#PCDATA)> 
           <!ELEMENT device (#PCDATA)> 
           <!ELEMENT gender (#PCDATA)> 
           <!ELEMENT adapted (#PCDATA)> 
           <!ELEMENT verification-score (#PCDATA)> 
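
   As an informal illustration (not part of the normative schema or
   DTD), the Python sketch below parses a hypothetical
   verification-result instance and applies the two value constraints
   stated above: decision is one of accepted/rejected/undecided, and
   verification-score is a float in the range [0, 1]. The voiceprint
   id and scores are invented.

```python
import xml.etree.ElementTree as ET

# Hypothetical verification-result instance matching the DTD above;
# the voiceprint id and the scores are invented for illustration.
doc = """
<verification-result>
  <voiceprint id="johnsmith.voiceprint">
    <incremental>
      <decision>accepted</decision>
      <device>cellular-phone</device>
      <gender>male</gender>
      <verification-score>0.85</verification-score>
    </incremental>
    <cumulative>
      <decision>accepted</decision>
      <verification-score>0.75</verification-score>
    </cumulative>
  </voiceprint>
</verification-result>
"""

root = ET.fromstring(doc)
vp = root.find("voiceprint")
assert vp.get("id")  # the id attribute is #REQUIRED in the DTD

for mode in ("incremental", "cumulative"):
    section = vp.find(mode)
    decision = section.findtext("decision")
    score = float(section.findtext("verification-score"))
    # decisionContent is an enumeration; verification-score is a
    # float constrained to [0, 1] in the RELAX NG schema.
    assert decision in ("accepted", "rejected", "undecided")
    assert 0.0 <= score <= 1.0
```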
  
 Full Copyright Statement 
     
    Copyright (C) The Internet Society (2004). This document is subject 
    to the rights, licenses and restrictions contained in BCP 78, and 
    except as set forth therein, the authors retain all their rights. 
     
    This document and the information contained herein are provided on 
    an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE 
  
    REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE 
    INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR 
    IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF 
    THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED 
    WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. 
     
 Intellectual Property 
     
    The IETF takes no position regarding the validity or scope of any 
    Intellectual Property Rights or other rights that might be claimed 
    to pertain to the implementation or use of the technology described 
    in this document or the extent to which any license under such 
    rights might or might not be available; nor does it represent that 
    it has made any independent effort to identify any such rights.  
    Information on the procedures with respect to rights in RFC 
    documents can be found in BCP 78 and BCP 79. 
     
    Copies of IPR disclosures made to the IETF Secretariat and any 
    assurances of licenses to be made available, or the result of an 
    attempt made to obtain a general license or permission for the use 
    of such proprietary rights by implementers or users of this 
    specification can be obtained from the IETF on-line IPR repository 
    at http://www.ietf.org/ipr. 
     
    The IETF invites any interested party to bring to its attention any 
    copyrights, patents or patent applications, or other proprietary 
    rights that may cover technology that may be required to implement 
    this standard.  Please address the information to the IETF at ietf-
    ipr@ietf.org. 
     
     
 Contributors 
      Daniel C. Burnett  
      Nuance Communications  
      1005 Hamilton Court  
      Menlo Park, CA 94025-1422  
      USA  
                   
      Email:  burnett@nuance.com  
                   
                   
      Pierre Forgues  
      Nuance Communications Ltd.  
      111 Duke Street  
      Suite 4100  
      Montreal, Quebec  
      Canada H3C 2M1  
                   
      Email:  forgues@nuance.com  
       
      Charles Galles  
  
      Intervoice, Inc.  
      17811 Waterview Parkway  
      Dallas, Texas 75252  
                   
      Email:  charles.galles@intervoice.com  
       
      Klaus Reifenrath 
      Scansoft, Inc 
      Guldensporenpark 32 
      Building D 
      9820 Merelbeke 
      Belgium 
       
      Email: klaus.reifenrath@scansoft.com  
     
 Acknowledgements 
     
    Andre Gillet (Nuance Communications) 
    Andrew Hunt (ScanSoft) 
    Aaron Kneiss (ScanSoft) 
    Brian Eberman (ScanSoft) 
    Corey Stohs (Cisco Systems Inc) 
    Dan Burnett (Nuance Communications) 
    Jeff Kusnitz (IBM Corp) 
    Ganesh N Ramaswamy (IBM Corp) 
    Klaus Reifenrath (ScanSoft) 
    Kristian Finlator (ScanSoft) 
    Martin Dragomirecky (Cisco Systems Inc) 
    Peter Monaco (Nuance Communications) 
    Pierre Forgues (Nuance Communications) 
    Ran Zilca (IBM Corp) 
    Suresh Kaliannan (Cisco Systems Inc.) 
    Skip Cave (Intervoice Inc) 
   Magnus Westerlund (Ericsson Inc.) 
    Thomas Gal (LumenVox Inc.)
     
 Editors' Addresses 
     
    Saravanan Shanmugham 
    Cisco Systems Inc. 
    170 W Tasman Drive, 
    San Jose, 
    CA 95134 
     
    Email: sarvi@cisco.com 
       




  

--------------070205000707050908050604
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

_______________________________________________
Speechsc mailing list
Speechsc@ietf.org
https://www1.ietf.org/mailman/listinfo/speechsc

--------------070205000707050908050604--



From speechsc-bounces@ietf.org  Thu Oct 21 08:10:21 2004
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id IAA12554
	for <speechsc-web-archive@ietf.org>; Thu, 21 Oct 2004 08:10:21 -0400 (EDT)
Received: from megatron.ietf.org ([132.151.6.71])
	by ietf-mx.ietf.org with esmtp (Exim 4.33)
	id 1CKbyV-0007F8-Qu
	for speechsc-web-archive@ietf.org; Thu, 21 Oct 2004 08:23:25 -0400
Received: from localhost.localdomain ([127.0.0.1] helo=megatron.ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32)
	id 1CKatY-0008Qd-Cv; Thu, 21 Oct 2004 07:14:12 -0400
Received: from odin.ietf.org ([132.151.1.176] helo=ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32) id 1CKNIg-0005pX-VR
	for speechsc@megatron.ietf.org; Wed, 20 Oct 2004 16:43:14 -0400
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id QAA24082
	for <speechsc@ietf.org>; Wed, 20 Oct 2004 16:43:07 -0400 (EDT)
Received: from sj-iport-4.cisco.com ([171.68.10.86])
	by ietf-mx.ietf.org with esmtp (Exim 4.33) id 1CKNV5-00053G-2l
	for speechsc@ietf.org; Wed, 20 Oct 2004 16:56:04 -0400
Received: from sj-core-4.cisco.com (171.68.223.138)
	by sj-iport-4.cisco.com with ESMTP; 20 Oct 2004 13:43:08 -0700
X-BrightmailFiltered: true
Received: from vtg-um-e2k1.sj21ad.cisco.com (vtg-um-e2k1.cisco.com
	[171.70.93.55])
	by sj-core-4.cisco.com (8.12.10/8.12.6) with ESMTP id i9KKgZHn013611
	for <speechsc@ietf.org>; Wed, 20 Oct 2004 13:42:36 -0700 (PDT)
Received: from cisco.com ([10.32.130.231]) by vtg-um-e2k1.sj21ad.cisco.com
	with Microsoft SMTPSVC(5.0.2195.6713); 
	Wed, 20 Oct 2004 13:42:29 -0700
Message-ID: <4176CDB5.8070109@cisco.com>
Date: Wed, 20 Oct 2004 13:42:29 -0700
From: Sarvi Shanmugham <sarvi@cisco.com>
Organization: Cisco Systems Inc.
User-Agent: Mozilla Thunderbird 0.5 (Windows/20040207)
X-Accept-Language: en-us, en
MIME-Version: 1.0
To: "IETF SPEECHSC (E-mail)" <speechsc@ietf.org>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
X-OriginalArrivalTime: 20 Oct 2004 20:42:29.0909 (UTC)
	FILETIME=[51301C50:01C4B6E5]
X-Spam-Score: 0.0 (/)
X-Scan-Signature: 08170828343bcf1325e4a0fb4584481c
Subject: [Speechsc] Note Self :-)
X-BeenThere: speechsc@ietf.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Speech Services Control Working Group <speechsc.ietf.org>
List-Unsubscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=unsubscribe>
List-Post: <mailto:speechsc@ietf.org>
List-Help: <mailto:speechsc-request@ietf.org?subject=help>
List-Subscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=subscribe>
Sender: speechsc-bounces@ietf.org
Errors-To: speechsc-bounces@ietf.org
X-Spam-Score: 0.0 (/)
X-Scan-Signature: 79899194edc4f33a41f49410777972f8


I noticed that some of the NLSML examples in the appendix of the 
-05.txt draft have confidence values of 100, 60, or 40. They should be 
1.0, 0.6, and 0.4.
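
In other words, the published values just need dividing by 100. A
trivial sketch, using the example values from this note:

```python
# Rescale the percent-style confidence values that appeared in the
# -05 examples to the [0, 1] range the draft intends.
percent_values = [100, 60, 40]
normalized = [v / 100 for v in percent_values]
assert normalized == [1.0, 0.6, 0.4]
```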

I have fixed these in my Word document, and the fixes will be 
available shortly in -06.txt.
The -06.txt changes are mainly expected to be the IANA considerations 
and, if people feel strongly, converting the Verification/Enrollment 
XML to W3C Schema. I am hoping we can go to last call with the -06 
draft, and I don't expect any major changes to the spec in the -06 
draft. If anyone feels otherwise, please let me know.

Thx,
Sarvi.

_______________________________________________
Speechsc mailing list
Speechsc@ietf.org
https://www1.ietf.org/mailman/listinfo/speechsc


From speechsc-bounces@ietf.org  Thu Oct 21 08:18:35 2004
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id IAA13683
	for <speechsc-web-archive@ietf.org>; Thu, 21 Oct 2004 08:18:35 -0400 (EDT)
Received: from megatron.ietf.org ([132.151.6.71])
	by ietf-mx.ietf.org with esmtp (Exim 4.33)
	id 1CKc6T-0007Wz-Ub
	for speechsc-web-archive@ietf.org; Thu, 21 Oct 2004 08:31:39 -0400
Received: from localhost.localdomain ([127.0.0.1] helo=megatron.ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32)
	id 1CKaxF-000648-Tn; Thu, 21 Oct 2004 07:18:01 -0400
Received: from odin.ietf.org ([132.151.1.176] helo=ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32) id 1CKPUV-0005vG-T3
	for speechsc@megatron.ietf.org; Wed, 20 Oct 2004 19:03:35 -0400
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id TAA10536
	for <speechsc@ietf.org>; Wed, 20 Oct 2004 19:03:32 -0400 (EDT)
Received: from sj-iport-5.cisco.com ([171.68.10.87])
	by ietf-mx.ietf.org with esmtp (Exim 4.33) id 1CKPgz-0000vM-NN
	for speechsc@ietf.org; Wed, 20 Oct 2004 19:16:31 -0400
Received: from sj-core-4.cisco.com (171.68.223.138)
	by sj-iport-5.cisco.com with ESMTP; 20 Oct 2004 16:05:13 -0700
X-BrightmailFiltered: true
Received: from vtg-um-e2k1.sj21ad.cisco.com (vtg-um-e2k1.cisco.com
	[171.70.93.55])
	by sj-core-4.cisco.com (8.12.10/8.12.6) with ESMTP id i9KN2xHn024281
	for <speechsc@ietf.org>; Wed, 20 Oct 2004 16:02:59 -0700 (PDT)
Received: from cisco.com ([10.32.130.231]) by vtg-um-e2k1.sj21ad.cisco.com
	with Microsoft SMTPSVC(5.0.2195.6713); 
	Wed, 20 Oct 2004 16:02:57 -0700
Message-ID: <4176EEA1.8020403@cisco.com>
Date: Wed, 20 Oct 2004 16:02:57 -0700
From: Sarvi Shanmugham <sarvi@cisco.com>
Organization: Cisco Systems Inc.
User-Agent: Mozilla Thunderbird 0.5 (Windows/20040207)
X-Accept-Language: en-us, en
MIME-Version: 1.0
To: "IETF SPEECHSC (E-mail)" <speechsc@ietf.org>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
X-OriginalArrivalTime: 20 Oct 2004 23:02:58.0006 (UTC)
	FILETIME=[F0B9A760:01C4B6F8]
X-Spam-Score: 0.0 (/)
X-Scan-Signature: 08e48e05374109708c00c6208b534009
Subject: [Speechsc] -05 draft  as a word document with change tracking
X-BeenThere: speechsc@ietf.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Speech Services Control Working Group <speechsc.ietf.org>
List-Unsubscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=unsubscribe>
List-Post: <mailto:speechsc@ietf.org>
List-Help: <mailto:speechsc-request@ietf.org?subject=help>
List-Subscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=subscribe>
Sender: speechsc-bounces@ietf.org
Errors-To: speechsc-bounces@ietf.org
X-Spam-Score: 0.0 (/)
X-Scan-Signature: 7a6398bf8aaeabc7a7bb696b6b0a2aad

I just uploaded the Word document of the IETF draft; it can be 
accessed through the links below.

ftp://ftp-eng.cisco.com/ftp/sarvi/ietf/draft-ietf-speechsc-mrcpv2-05.txt
ftp://ftp-eng.cisco.com/ftp/sarvi/ietf/draft-ietf-speechsc-mrcpv2-05.doc

Thanks,
Sarvi

_______________________________________________
Speechsc mailing list
Speechsc@ietf.org
https://www1.ietf.org/mailman/listinfo/speechsc


From speechsc-bounces@ietf.org  Thu Oct 21 20:35:52 2004
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id UAA13569
	for <speechsc-web-archive@ietf.org>; Thu, 21 Oct 2004 20:35:52 -0400 (EDT)
Received: from megatron.ietf.org ([132.151.6.71])
	by ietf-mx.ietf.org with esmtp (Exim 4.33)
	id 1CKnc7-0006aV-J0
	for speechsc-web-archive@ietf.org; Thu, 21 Oct 2004 20:49:03 -0400
Received: from localhost.localdomain ([127.0.0.1] helo=megatron.ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32)
	id 1CKl8E-0006uW-Rg; Thu, 21 Oct 2004 18:10:02 -0400
Received: from odin.ietf.org ([132.151.1.176] helo=ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32)
	id 1CKj8L-0003X5-6p; Thu, 21 Oct 2004 16:02:01 -0400
Received: from CNRI.Reston.VA.US (localhost [127.0.0.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id QAA14331;
	Thu, 21 Oct 2004 16:01:58 -0400 (EDT)
Message-Id: <200410212001.QAA14331@ietf.org>
Mime-Version: 1.0
Content-Type: Multipart/Mixed; Boundary="NextPart"
To: i-d-announce@ietf.org
From: Internet-Drafts@ietf.org
Date: Thu, 21 Oct 2004 16:01:58 -0400
Cc: speechsc@ietf.org
Subject: [Speechsc] I-D ACTION:draft-ietf-speechsc-mrcpv2-05.txt
X-BeenThere: speechsc@ietf.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Speech Services Control Working Group <speechsc.ietf.org>
List-Unsubscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=unsubscribe>
List-Post: <mailto:speechsc@ietf.org>
List-Help: <mailto:speechsc-request@ietf.org?subject=help>
List-Subscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=subscribe>
Sender: speechsc-bounces@ietf.org
Errors-To: speechsc-bounces@ietf.org
X-Spam-Score: 0.4 (/)
X-Scan-Signature: 287c806b254c6353fcb09ee0e53bbc5e

--NextPart

A New Internet-Draft is available from the on-line Internet-Drafts directories.
This draft is a work item of the Speech Services Control Working Group of the IETF.

	Title		: Media Resource Control Protocol Version 2 (MRCPv2)
	Author(s)	: S. Shanmugham
	Filename	: draft-ietf-speechsc-mrcpv2-05.txt
	Pages		: 170
	Date		: 2004-10-20
	
This document describes a proposal for a Media Resource Control 
    Protocol Version 2 (MRCPv2) and aims to meet the requirements 
    specified in the SPEECHSC working group requirements document. It is 
    based on the Media Resource Control Protocol (MRCP), also called 
    MRCPv1 developed jointly by Cisco Systems, Inc., Nuance 
    Communications, and Speechworks Inc.  
     
    The MRCPv2 protocol will control media service resources like speech 
    synthesizers, recognizers, signal generators, signal detectors, fax 
    servers etc. over a network. This protocol depends on a session 
    management protocol such as the Session Initiation Protocol (SIP) to 
    establish a separate MRCPv2 control session between the client and 
    the server. It also depends on SIP to establish the media pipe and 
    associated parameters between the media source or sink and the media 
    server. Once this is done, the MRCPv2 protocol exchange can happen 
    over the control session established above allowing the client to 
    command and control the media processing resources that may exist on 
    the media server.

A URL for this Internet-Draft is:
http://www.ietf.org/internet-drafts/draft-ietf-speechsc-mrcpv2-05.txt

To remove yourself from the I-D Announcement list, send a message to 
i-d-announce-request@ietf.org with the word unsubscribe in the body of the message.  
You can also visit https://www1.ietf.org/mailman/listinfo/I-D-announce 
to change your subscription settings.


Internet-Drafts are also available by anonymous FTP. Login with the username
"anonymous" and a password of your e-mail address. After logging in,
type "cd internet-drafts" and then
	"get draft-ietf-speechsc-mrcpv2-05.txt".

A list of Internet-Drafts directories can be found in
http://www.ietf.org/shadow.html 
or ftp://ftp.ietf.org/ietf/1shadow-sites.txt


Internet-Drafts can also be obtained by e-mail.

Send a message to:
	mailserv@ietf.org.
In the body type:
	"FILE /internet-drafts/draft-ietf-speechsc-mrcpv2-05.txt".
	
NOTE:	The mail server at ietf.org can return the document in
	MIME-encoded form by using the "mpack" utility.  To use this
	feature, insert the command "ENCODING mime" before the "FILE"
	command.  To decode the response(s), you will need "munpack" or
	a MIME-compliant mail reader.  Different MIME-compliant mail readers
	exhibit different behavior, especially when dealing with
	"multipart" MIME messages (i.e. documents which have been split
	up into multiple messages), so check your local documentation on
	how to manipulate these messages.
		
		
Below is the data which will enable a MIME compliant mail reader
implementation to automatically retrieve the ASCII version of the
Internet-Draft.

--NextPart
Content-Type: Multipart/Alternative; Boundary="OtherAccess"

--OtherAccess
Content-Type: Message/External-body; access-type="mail-server";
	server="mailserv@ietf.org"

Content-Type: text/plain
Content-ID: <2004-10-21154814.I-D@ietf.org>

ENCODING mime
FILE /internet-drafts/draft-ietf-speechsc-mrcpv2-05.txt

--OtherAccess
Content-Type: Message/External-body; name="draft-ietf-speechsc-mrcpv2-05.txt";
	site="ftp.ietf.org"; access-type="anon-ftp";
	directory="internet-drafts"

Content-Type: text/plain
Content-ID: <2004-10-21154814.I-D@ietf.org>


--OtherAccess--

--NextPart
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Speechsc mailing list
Speechsc@ietf.org
https://www1.ietf.org/mailman/listinfo/speechsc

--NextPart--





From speechsc-bounces@ietf.org  Fri Oct 22 08:56:20 2004
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id IAA00010
	for <speechsc-web-archive@ietf.org>; Fri, 22 Oct 2004 08:56:20 -0400 (EDT)
Received: from megatron.ietf.org ([132.151.6.71])
	by ietf-mx.ietf.org with esmtp (Exim 4.33)
	id 1CKzAl-0005Zl-KS
	for speechsc-web-archive@ietf.org; Fri, 22 Oct 2004 09:09:35 -0400
Received: from localhost.localdomain ([127.0.0.1] helo=megatron.ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32)
	id 1CKynZ-0002Dj-Pu; Fri, 22 Oct 2004 08:45:37 -0400
Received: from odin.ietf.org ([132.151.1.176] helo=ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32) id 1CKyj0-0000Uj-BB
	for speechsc@megatron.ietf.org; Fri, 22 Oct 2004 08:40:54 -0400
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id IAA28718
	for <speechsc@ietf.org>; Fri, 22 Oct 2004 08:40:51 -0400 (EDT)
Received: from mx2.scansoft.com ([198.71.64.82])
	by ietf-mx.ietf.org with esmtp (Exim 4.33) id 1CKyvn-0005Bb-5m
	for speechsc@ietf.org; Fri, 22 Oct 2004 08:54:07 -0400
Received: from pb-exchcon.pb.scansoft.com ([10.1.4.73]) by mx2 with
	trend_isnt_name_B; Fri, 22 Oct 2004 08:41:33 -0400
Received: by pb-exchcon.pb.scansoft.com with Internet Mail Service
	(5.5.2653.19) id <S4T3219R>; Fri, 22 Oct 2004 08:39:18 -0400
Message-ID: <BBF29C9B95E52E4DB5C29A0ACC94E83BA6236D@ac-exch1.eu.scansoft.com>
From: "Reifenrath, Klaus" <Klaus.Reifenrath@Scansoft.com>
To: "'Sarvi Shanmugham'" <sarvi@cisco.com>
Subject: RE: [Speechsc] Note Self :-)
Date: Fri, 22 Oct 2004 08:37:46 -0400
MIME-Version: 1.0
X-Mailer: Internet Mail Service (5.5.2653.19)
Content-Type: multipart/mixed; boundary="----_=_NextPart_000_01C4B833.EEE08170"
X-Spam-Score: 0.0 (/)
X-Scan-Signature: 1ba0ec39a747b7612d6a8ae66d1a873c
Cc: "IETF SPEECHSC \(E-mail\)" <speechsc@ietf.org>
X-BeenThere: speechsc@ietf.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Speech Services Control Working Group <speechsc.ietf.org>
List-Unsubscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=unsubscribe>
List-Post: <mailto:speechsc@ietf.org>
List-Help: <mailto:speechsc-request@ietf.org?subject=help>
List-Subscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=subscribe>
Sender: speechsc-bounces@ietf.org
Errors-To: speechsc-bounces@ietf.org
X-Spam-Score: 0.0 (/)
X-Scan-Signature: 4fc59e88b356924367ae169e6a06365d


------_=_NextPart_000_01C4B833.EEE08170
Content-Type: text/plain;
	charset="iso-8859-1"

Hi Sarvi,

I have not had much time to read the -05 draft yet. I think we need to
hold off on last call until the NLSML section has been carefully
reviewed. This may take some time, as we need to involve ASR experts.

According to the current spec the verification and enrollment results should
be in MRCP-NLSML. Therefore the W3C schema and DTD of MRCP-NLSML should
include the verification and enrollment result elements. I attach the W3C
schema for the verification result that I posted some weeks ago, which can
easily be integrated into your NLSML schema.

Further major issues:
a) From my point of view a section about SIP based server selection (RFC
3841) is still missing.
b) Another pending issue is the start line format (see attached mail
thread).

Minor issues:
a) on page 16 you reference section 3.2 instead of 4.2
b) section 5: does the message length of the start line include the length
of the start line?
c) section 5.2 (page 20): is 405 also the right failure code if the
session is not found?
d) hot-max-duration/hot-min-duration (on page 66) specify the
maximum/minimum length of an utterance in milliseconds (not in seconds)

Klaus

 



-----Original Message-----
From: Sarvi Shanmugham [mailto:sarvi@cisco.com]
Sent: Mittwoch, 20. Oktober 2004 22:42
To: IETF SPEECHSC (E-mail)
Subject: [Speechsc] Note Self :-)



I noticed that some of the NLSML examples in the appendix of the 
-05.txt draft have confidence values of 100, 60, or 40. They should be 
1.0, 0.6, and 0.4.

I have fixed these in my Word document, and the fixes will be 
available shortly in -06.txt.
The -06.txt changes are mainly expected to be the IANA considerations 
and, if people feel strongly, converting the Verification/Enrollment 
XML to W3C Schema. I am hoping we can go to last call with the -06 
draft, and I don't expect any major changes to the spec in the -06 
draft. If anyone feels otherwise, please let me know.

Thx,
Sarvi.

_______________________________________________
Speechsc mailing list
Speechsc@ietf.org
https://www1.ietf.org/mailman/listinfo/speechsc


------_=_NextPart_000_01C4B833.EEE08170
Content-Type: application/octet-stream;
	name="ver.xsd"
Content-Disposition: attachment;
	filename="ver.xsd"
Content-Transfer-Encoding: 7bit

<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema
    targetNamespace="http://www.ietf.org/mrcp2"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:mrcp="http://www.ietf.org/mrcp2"
    version="2.0">
  <xsd:annotation>
    <xsd:documentation>
      Schema for MRCPv2 speaker verification result data.
    </xsd:documentation>
  </xsd:annotation>

  <xsd:element name="result">
    <xsd:annotation>
      <xsd:documentation>
        Use the result-element as root element as in NLSML.
      </xsd:documentation>
    </xsd:annotation>
    <xsd:complexType>
      <xsd:sequence>
        <xsd:annotation>
          <xsd:documentation>
            Why do we need the result-type element?
          </xsd:documentation>
        </xsd:annotation>
        <xsd:element name="result-type" type="mrcp:ResultType"
            minOccurs="1" maxOccurs="1" form="qualified"/>
        <xsd:choice>
          <xsd:element name="verification-result"
              type="mrcp:VerificationResultType"
              minOccurs="1" maxOccurs="1" form="qualified"/>
          <xsd:element name="training-result"
              type="mrcp:TrainingResultType"
              minOccurs="1" maxOccurs="1" form="qualified"/>
        </xsd:choice>
      </xsd:sequence>
      <xsd:attribute name="grammar" type="xsd:anyURI"/>
    </xsd:complexType>
    <xsd:unique name="uniqueVoiceprintId">
      <xsd:selector xpath="mrcp:verification-result/voiceprint"/>
      <xsd:field xpath="@identifier"/>
    </xsd:unique>
  </xsd:element>

  <xsd:complexType name="ResultType">
    <xsd:attribute name="type" type="mrcp:ResultTypeEnum"/>
  </xsd:complexType>

  <xsd:group name="NumFramesGroup">
    <xsd:sequence>
      <xsd:element name="incremental-num-frames"
          type="xsd:positiveInteger" minOccurs="1" maxOccurs="1"/>
      <xsd:element name="cumulative-num-frames"
          type="xsd:positiveInteger" minOccurs="1" maxOccurs="1"/>
    </xsd:sequence>
  </xsd:group>

  <xsd:complexType name="VerificationResultType">
    <xsd:sequence>
      <xsd:group ref="mrcp:NumFramesGroup"/>
      <xsd:element name="voiceprint"
          type="mrcp:VerificationVoicePrintType"
          minOccurs="1" maxOccurs="unbounded"/>
    </xsd:sequence>
    <xsd:attribute name="uri" type="xsd:anyURI"/>
  </xsd:complexType>

  <xsd:complexType name="TrainingResultType">
    <xsd:sequence>
      <xsd:group ref="mrcp:NumFramesGroup"/>
      <xsd:element name="voiceprint"
          type="mrcp:TrainingVoicePrintType"
          minOccurs="1" maxOccurs="1"/>
    </xsd:sequence>
    <xsd:attribute name="uri" type="xsd:anyURI"/>
  </xsd:complexType>

  <xsd:complexType name="VerificationVoicePrintType">
    <xsd:all>
      <xsd:element name="adapted" type="xsd:boolean"
          minOccurs="1" maxOccurs="1"/>
      <xsd:element name="decision" type="mrcp:DecisionType"
          minOccurs="1" maxOccurs="1"/>
      <xsd:element name="incremental"
          type="mrcp:VerificationCommanType"
          minOccurs="0" maxOccurs="1"/>
      <xsd:element name="cumulative"
          type="mrcp:VerificationCommanType"
          minOccurs="1" maxOccurs="1"/>
    </xsd:all>
    <xsd:attribute name="repository-uri" type="xsd:anyURI"/>
    <xsd:attribute name="identifier" type="xsd:string"/>
  </xsd:complexType>

  <xsd:complexType name="VerificationCommanType">
    <xsd:all>
      <xsd:element name="verification-score"
          type="mrcp:VerificationScoreType"
          minOccurs="1" maxOccurs="1"/>
      <xsd:element name="device" type="mrcp:DeviceType"
          minOccurs="0" maxOccurs="1"/>
      <xsd:element name="gender" type="mrcp:GenderType"
          minOccurs="0" maxOccurs="1"/>
    </xsd:all>
  </xsd:complexType>

  <xsd:complexType name="TrainingVoicePrintType">
    <xsd:all>
      <xsd:element name="need-more-data" type="xsd:boolean"
          minOccurs="1" maxOccurs="1"/>
      <xsd:element name="incremental" type="mrcp:TrainingCommanType"
          minOccurs="0" maxOccurs="1"/>
      <xsd:element name="cumulative" type="mrcp:TrainingCommanType"
          minOccurs="1" maxOccurs="1"/>
    </xsd:all>
    <xsd:attribute name="repository-uri" type="xsd:anyURI"/>
    <xsd:attribute name="identifier" type="xsd:string"/>
  </xsd:complexType>

  <xsd:complexType name="TrainingCommanType">
    <xsd:all>
      <xsd:annotation>
        <xsd:documentation>
          Some verification engines do not compute verification
          scores during training
        </xsd:documentation>
      </xsd:annotation>
      <xsd:element name="verification-score"
          type="mrcp:VerificationScoreType"
          minOccurs="0" maxOccurs="1"/>
      <xsd:element name="device" type="mrcp:DeviceType"
          minOccurs="0" maxOccurs="1"/>
      <xsd:element name="gender" type="mrcp:GenderType"
          minOccurs="0" maxOccurs="1"/>
      <xsd:element name="consistent" type="xsd:boolean"
          minOccurs="0" maxOccurs="1"/>
    </xsd:all>
  </xsd:complexType>

  <xsd:simpleType name="DecisionType">
    <xsd:restriction base="xsd:string">
      <xsd:enumeration value="accepted"/>
      <xsd:enumeration value="rejected"/>
      <xsd:enumeration value="undecided"/>
    </xsd:restriction>
  </xsd:simpleType>

  <xsd:simpleType name="GenderType">
    <xsd:restriction base="xsd:string">
      <xsd:enumeration value="male"/>
      <xsd:enumeration value="female"/>
      <xsd:enumeration value="unknown"/>
    </xsd:restriction>
  </xsd:simpleType>

  <xsd:simpleType name="DeviceType">
    <xsd:restriction base="xsd:string">
      <xsd:enumeration value="cellular-phone"/>
      <xsd:enumeration value="electret-phone"/>
      <xsd:enumeration value="carbon-button-phone"/>
      <xsd:enumeration value="unknown"/>
    </xsd:restriction>
  </xsd:simpleType>

  <xsd:simpleType name="VerificationScoreType">
		<xsd:restriction base=3D"xsd:float">
			<xsd:minInclusive value =3D "0" />
			<xsd:maxInclusive value =3D "1" />
		</xsd:restriction>=20
	</xsd:simpleType>=20

	<xsd:simpleType name=3D"ResultTypeEnum">
		<xsd:restriction base=3D"xsd:string">
			<xsd:enumeration value =3D "VERIFICATION" />
		</xsd:restriction>=20
	</xsd:simpleType>=20

</xsd:schema>
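For concreteness, here is a hypothetical instance fragment that would match the VerificationResultType defined above. The root element name, all values, and the namespace handling are invented for illustration; the portion of the schema shown here defines types and a group but no global root element.

```xml
<!-- Hypothetical instance of VerificationResultType; the element name
     "verification-result" and all values below are illustrative only. -->
<verification-result uri="http://example.com/results/1">
  <incremental-num-frames>50</incremental-num-frames>
  <cumulative-num-frames>150</cumulative-num-frames>
  <voiceprint repository-uri="http://example.com/voiceprints"
              identifier="johndoe">
    <adapted>true</adapted>
    <decision>accepted</decision>
    <cumulative>
      <verification-score>0.85</verification-score>
      <device>cellular-phone</device>
      <gender>male</gender>
    </cumulative>
  </voiceprint>
</verification-result>
```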

------_=_NextPart_000_01C4B833.EEE08170
Content-Type: message/rfc822
Content-Description: RE: [Speechsc] question about v2 message format

Message-ID: <7DE7C4EF3B7C8B4B82955191378290D8018563B9@mtb1exch01.nuance.com>
From: Pierre Forgues <forgues@nuance.com>
To: "Reifenrath, Klaus" <Klaus.Reifenrath@Scansoft.com>, Daniel Burnett
	<burnett@nuance.com>
Cc: speechsc@ietf.org
Subject: RE: [Speechsc] question about v2 message format
Date: Mon, 28 Jun 2004 07:44:30 -0400
MIME-Version: 1.0
X-Mailer: Internet Mail Service (5.5.2653.19)
Content-Type: text/plain;
	charset="iso-8859-1"

I agree we should follow the lead of other protocols like SIP. The
client-to-server request-line must start with the method and end with
the protocol and version.

The server-to-client response-line and event-line start with the
protocol and version.

Pierre

-----Original Message-----
From: speechsc-bounces@ietf.org [mailto:speechsc-bounces@ietf.org] On
Behalf Of Reifenrath, Klaus
Sent: Friday, June 25, 2004 8:01 AM
To: Daniel Burnett
Cc: speechsc@ietf.org
Subject: RE: [Speechsc] question about v2 message format

Hi Dan,

I like the SIP format, where the start-line of a request starts with the
method and the status-line of the response starts with the SIP-version.
Unfortunately we have three types of MRCP messages: requests, responses,
and events. If we change the message definition, I would suggest starting
the request-line with the method and the response-line and event-line
with the MRCP version.

But even with the current MRCPv2 message definition, the message parser
can distinguish the message types efficiently. Having the version
information right at the beginning has some advantages. 

Klaus



-----Original Message-----
From: Daniel Burnett [mailto:burnett@nuance.com]
Sent: Freitag, 25. Juni 2004 06:28
To: speechsc@ietf.org
Subject: [Speechsc] question about v2 message format


The convention used for MRCP messages has changed in v2, and it is
different from that in RTSP, SIP, and HTTP.

New convention: "MRCP/2.0 123 VER-SET-VOICEPRINT 123456"
Old convention: "START-OF-SPEECH 543258 IN-PROGRESS MRCP/1.0"

Why the change in convention?

-- dan


_______________________________________________
Speechsc mailing list
Speechsc@ietf.org
https://www1.ietf.org/mailman/listinfo/speechsc

_______________________________________________
Speechsc mailing list
Speechsc@ietf.org
https://www1.ietf.org/mailman/listinfo/speechsc

 
  

------_=_NextPart_000_01C4B833.EEE08170
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Speechsc mailing list
Speechsc@ietf.org
https://www1.ietf.org/mailman/listinfo/speechsc

------_=_NextPart_000_01C4B833.EEE08170--



From speechsc-bounces@ietf.org  Fri Oct 22 13:31:46 2004
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id NAA23141
	for <speechsc-web-archive@ietf.org>; Fri, 22 Oct 2004 13:31:46 -0400 (EDT)
Received: from megatron.ietf.org ([132.151.6.71])
	by ietf-mx.ietf.org with esmtp (Exim 4.33)
	id 1CL3TN-0002oT-Nn
	for speechsc-web-archive@ietf.org; Fri, 22 Oct 2004 13:45:06 -0400
Received: from localhost.localdomain ([127.0.0.1] helo=megatron.ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32)
	id 1CL3Fs-0007I0-El; Fri, 22 Oct 2004 13:31:08 -0400
Received: from odin.ietf.org ([132.151.1.176] helo=ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32) id 1CL3A1-0004hx-Pf
	for speechsc@megatron.ietf.org; Fri, 22 Oct 2004 13:25:05 -0400
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id NAA22506
	for <speechsc@ietf.org>; Fri, 22 Oct 2004 13:25:01 -0400 (EDT)
Received: from sj-iport-5.cisco.com ([171.68.10.87])
	by ietf-mx.ietf.org with esmtp (Exim 4.33) id 1CL3Mq-0002g0-9s
	for speechsc@ietf.org; Fri, 22 Oct 2004 13:38:21 -0400
Received: from sj-core-3.cisco.com (171.68.223.137)
	by sj-iport-5.cisco.com with ESMTP; 22 Oct 2004 10:24:35 -0700
X-BrightmailFiltered: true
Received: from vtg-um-e2k1.sj21ad.cisco.com (vtg-um-e2k1.cisco.com
	[171.70.93.55])
	by sj-core-3.cisco.com (8.12.10/8.12.6) with ESMTP id i9MHOM9h008397;
	Fri, 22 Oct 2004 10:24:28 -0700 (PDT)
Received: from cisco.com ([10.32.130.231]) by vtg-um-e2k1.sj21ad.cisco.com
	with Microsoft SMTPSVC(5.0.2195.6713); 
	Fri, 22 Oct 2004 10:24:23 -0700
Message-ID: <41794246.4050206@cisco.com>
Date: Fri, 22 Oct 2004 10:24:22 -0700
From: Sarvi Shanmugham <sarvi@cisco.com>
Organization: Cisco Systems Inc.
User-Agent: Mozilla Thunderbird 0.5 (Windows/20040207)
X-Accept-Language: en-us, en
MIME-Version: 1.0
To: "Reifenrath, Klaus" <Klaus.Reifenrath@Scansoft.com>
Subject: Re: [Speechsc] Note Self :-)
References: <BBF29C9B95E52E4DB5C29A0ACC94E83BA6236D@ac-exch1.eu.scansoft.com>
In-Reply-To: <BBF29C9B95E52E4DB5C29A0ACC94E83BA6236D@ac-exch1.eu.scansoft.com>
X-OriginalArrivalTime: 22 Oct 2004 17:24:23.0760 (UTC)
	FILETIME=[F9513500:01C4B85B]
X-Spam-Score: 1.1 (+)
X-Scan-Signature: 08582f3b796126054df71137d5cb69f8
Cc: "IETF SPEECHSC \(E-mail\)" <speechsc@ietf.org>
X-BeenThere: speechsc@ietf.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Speech Services Control Working Group <speechsc.ietf.org>
List-Unsubscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=unsubscribe>
List-Post: <mailto:speechsc@ietf.org>
List-Help: <mailto:speechsc-request@ietf.org?subject=help>
List-Subscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1706522661=="
Sender: speechsc-bounces@ietf.org
Errors-To: speechsc-bounces@ietf.org
X-Spam-Score: 1.1 (+)
X-Scan-Signature: 8e140a89d08e89747ee196e282ac2228

This is a multi-part message in MIME format.
--===============1706522661==
Content-Type: multipart/alternative;
	boundary="------------030300070505040005050709"

This is a multi-part message in MIME format.
--------------030300070505040005050709
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Transfer-Encoding: 7bit

Comments inline.

Reifenrath, Klaus wrote:

>Hi Sarvi,
>
>I did not have much time to read the 05 draft yet. I think we need to wait
>for last call until the NLSML section has been carefully reviewed! This may take
>some time as we need to involve ASR experts.
>  
>
I agree there needs to be some review of this section. But I should also 
note that the agreement was that this would be the same as the NLSML 
used in MRCPv1 and that we would not try to add any new features.  What 
I have tried to do is to bring in the NLSML from the latest W3C NLSML 
draft and make it consistent with MRCPv1 NLSML usage.  So let's go ahead 
and review to make sure that the NLSML section meets those goals.

>According to the current spec the verification and enrollment results should
>be in MRCP-NLSML. Therefore the W3C schema and DTD of MRCP-NLSML should
>include the verification and enrollment result elements. I attach the W3C
>schema for the verification result that I posted some weeks ago, which can
>easily be integrated into your NLSML schema.
>  
>
I don't think we are planning on calling it anything other than NLSML, 
correct.
That said, I think we should leave the three schemas as separate 
entities, because we may end up using the Enrollment and Verification 
schemas even when we move to EMMA, assuming EMMA will still not address 
these two aspects in its first version. So leaving it modular has its 
advantages.
Note that we still embed the Verification and Enrollment XML within 
the NLSML markup, but under its own namespace; we are just leaving 
its definition as a separate schema.

I will try to convert the rest of the XML into W3C schema as well.

>Further major issues:
>a) From my point of view a section about SIP based server selection (RFC
>3841) is still missing.
>  
>
Do you have some text in mind that you would like to add? If not, I will 
add some in the next draft.

>b) Another pending issue is the start line format (see attached mail
>thread).
>  
>
This was changed based on early feedback that Cullen gave on this 
mailing list. The idea was that it is good practice to have the first 
field identify the protocol and version, and to have the message length 
available at a fixed location from the beginning of the message for 
all messages. That makes parsing efficient, since the parser can find 
the length of the message by looking at a fixed location from the 
beginning of the message.

>Minor issues:
>a) on page 16 you reference section 3.2 instead of 4.2
>  
>
Fixed.

>b) section 5: does the message length of the start line include the length
>of the start line?
>  
>
I have corrected it as follows: "The message-length field specifies the 
length of the message, *including the start-line,* and MUST be the 2nd 
token from the beginning of the message."

>c) section 5.2 (page 20): is 405 also the right failure code, if the session
>is not found 
>  
>
Yes, will clarify.

>d) hot-max-duration/hot-min-duration (on page 66) specify the
>maximum/minimum length of an utterance in milliseconds    (not in seconds)
>  
>
Looks like this was fixed in -05.txt

thanks,
Sarvi

>Klaus
>
> 
>
>
>
>-----Original Message-----
>From: Sarvi Shanmugham [mailto:sarvi@cisco.com]
>Sent: Mittwoch, 20. Oktober 2004 22:42
>To: IETF SPEECHSC (E-mail)
>Subject: [Speechsc] Note Self :-)
>
>
>
>I noticed that some of the NLSML examples in the appendix of the 
>-05.txt draft have confidence values of 100, 60, or 40. They should be 
>1.0, 0.6, and 0.4.
>
>I have fixed these in my Word document; the fixes will be available shortly in 
>-06.txt.
>The -06.txt changes are mainly expected to be the IANA considerations 
>and, if people feel strongly, converting the Verification/Enrollment XML 
>to W3C Schema. I am hoping we can go to last call with the -06 draft, and I 
>don't expect any major changes to the spec in the -06 draft. If anyone 
>feels otherwise, please let me know.
>
>Thx,
>Sarvi.
>
>_______________________________________________
>Speechsc mailing list
>Speechsc@ietf.org
>https://www1.ietf.org/mailman/listinfo/speechsc
>
>  
>
>
> ------------------------------------------------------------------------
>
> Subject:
> RE: [Speechsc] question about v2 message format
> From:
> Pierre Forgues <forgues@nuance.com>
> Date:
> Mon, 28 Jun 2004 07:44:30 -0400
> To:
> "Reifenrath, Klaus" <Klaus.Reifenrath@Scansoft.com>, Daniel Burnett 
> <burnett@nuance.com>
> CC:
> speechsc@ietf.org
>
>
>I agree we should follow the lead with other protocols like SIP.  Client
>to server request line must start with the method and end with the
>protocol and version.
>
>Server to client response-line and event-line start with the protocol
>and version.
>
>Pierre
>
>-----Original Message-----
>From: speechsc-bounces@ietf.org [mailto:speechsc-bounces@ietf.org] On
>Behalf Of Reifenrath, Klaus
>Sent: Friday, June 25, 2004 8:01 AM
>To: Daniel Burnett
>Cc: speechsc@ietf.org
>Subject: RE: [Speechsc] question about v2 message format
>
>Hi Dan,
>
>I like the SIP format where the start-line of a request starts with the
>method and the status-line of the response starts with the SIP-version.
>Unfortunately we have 3 types of MRCP messages: requests, responses and
>events. If we change the message definition I would suggest to start the
>request-line with the method and the response-line and event-line with
>the
>MRCP version.
>
>But also with the current MRCPv2 message definition the message parser
>can
>distinguish the message types efficiently. Having the version
>information
>right at the beginning has some advantages. 
>
>Klaus
>
>
>
>-----Original Message-----
>From: Daniel Burnett [mailto:burnett@nuance.com]
>Sent: Freitag, 25. Juni 2004 06:28
>To: speechsc@ietf.org
>Subject: [Speechsc] question about v2 message format
>
>
>The convention used for MRCP messages has changed in v2 and it is
>different
>from that in
>RTSP, SIP, and HTTP.
>
>New convention: "MRCP/2.0 123 VER-SET-VOICEPRINT 123456
>Old convention: "START-OF-SPEECH 543258 IN-PROGRESS MRCP/1.0"
>Why the change in convention?
>
>-- dan
>
>
>_______________________________________________
>Speechsc mailing list
>Speechsc@ietf.org
>https://www1.ietf.org/mailman/listinfo/speechsc
>
>_______________________________________________
>Speechsc mailing list
>Speechsc@ietf.org
>https://www1.ietf.org/mailman/listinfo/speechsc
>
> 
>  
>  
>


--------------030300070505040005050709--


--===============1706522661==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Speechsc mailing list
Speechsc@ietf.org
https://www1.ietf.org/mailman/listinfo/speechsc

--===============1706522661==--



From speechsc-bounces@ietf.org  Sat Oct 23 21:52:49 2004
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id VAA06899
	for <speechsc-web-archive@ietf.org>; Sat, 23 Oct 2004 21:52:49 -0400 (EDT)
Received: from megatron.ietf.org ([132.151.6.71])
	by ietf-mx.ietf.org with esmtp (Exim 4.33)
	id 1CLXm7-0008HS-Lt
	for speechsc-web-archive@ietf.org; Sat, 23 Oct 2004 22:06:27 -0400
Received: from localhost.localdomain ([127.0.0.1] helo=megatron.ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32)
	id 1CLXVZ-00006P-My; Sat, 23 Oct 2004 21:49:21 -0400
Received: from odin.ietf.org ([132.151.1.176] helo=ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32) id 1CLXRR-0008C1-V6
	for speechsc@megatron.ietf.org; Sat, 23 Oct 2004 21:45:05 -0400
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id VAA06457
	for <speechsc@ietf.org>; Sat, 23 Oct 2004 21:45:03 -0400 (EDT)
Received: from salvelinus.brooktrout.com ([204.176.205.6])
	by ietf-mx.ietf.org with esmtp (Exim 4.33) id 1CLXeY-00087e-0Z
	for speechsc@ietf.org; Sat, 23 Oct 2004 21:58:41 -0400
Received: from nhmail2.needham.brooktrout.com (nhmail2.eng.brooktrout.com
	[204.176.205.242])
	by salvelinus.brooktrout.com (8.12.5/8.12.5) with ESMTP id
	i9O1d7r2018665
	for <speechsc@ietf.org>; Sat, 23 Oct 2004 21:39:08 -0400 (EDT)
Received: by nhmail2.eng.brooktrout.com with Internet Mail Service
	(5.5.2653.19) id <PQMPP1BC>; Sat, 23 Oct 2004 21:36:21 -0400
Message-ID: <EDD694D47377D7119C8400D0B77FD331C108BB@nhmail2.eng.brooktrout.com>
From: Eric Burger <eburger@brooktrout.com>
To: "IETF SPEECHSC (E-mail)" <speechsc@ietf.org>
Subject: RE: [Speechsc] Note Self :-)
Date: Sat, 23 Oct 2004 21:36:00 -0400
MIME-Version: 1.0
X-Mailer: Internet Mail Service (5.5.2653.19)
Content-Type: text/plain;
	charset="iso-8859-1"
X-Spam-Score: 0.0 (/)
X-Scan-Signature: 856eb5f76e7a34990d1d457d8e8e5b7f
X-BeenThere: speechsc@ietf.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Speech Services Control Working Group <speechsc.ietf.org>
List-Unsubscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=unsubscribe>
List-Post: <mailto:speechsc@ietf.org>
List-Help: <mailto:speechsc-request@ietf.org?subject=help>
List-Subscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=subscribe>
Sender: speechsc-bounces@ietf.org
Errors-To: speechsc-bounces@ietf.org
X-Spam-Score: 0.0 (/)
X-Scan-Signature: 7655788c23eb79e336f5f8ba8bce7906

No need to wait for WGLC for people to look at it!

> -----Original Message-----
> From: Reifenrath, Klaus [mailto:Klaus.Reifenrath@Scansoft.com]
> Sent: Friday, October 22, 2004 8:38 AM
> To: 'Sarvi Shanmugham'
> Cc: IETF SPEECHSC (E-mail)
> Subject: RE: [Speechsc] Note Self :-)
> 
> 
> Hi Sarvi,
> 
> I did not have much time to read the 05 draft yet. I think we 
> need to wait
> for last call until the NLSML section was carefully reviewed! 
> This may take
> some time as we need to involve ASR experts.
[snip]

_______________________________________________
Speechsc mailing list
Speechsc@ietf.org
https://www1.ietf.org/mailman/listinfo/speechsc


From speechsc-bounces@ietf.org  Mon Oct 25 12:54:26 2004
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id MAA13444
	for <speechsc-web-archive@ietf.org>; Mon, 25 Oct 2004 12:54:26 -0400 (EDT)
Received: from megatron.ietf.org ([132.151.6.71])
	by ietf-mx.ietf.org with esmtp (Exim 4.33)
	id 1CM8KO-0002kk-0q
	for speechsc-web-archive@ietf.org; Mon, 25 Oct 2004 13:08:26 -0400
Received: from localhost.localdomain ([127.0.0.1] helo=megatron.ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32)
	id 1CM7pE-00073r-V6; Mon, 25 Oct 2004 12:36:04 -0400
Received: from odin.ietf.org ([132.151.1.176] helo=ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32) id 1CM7nA-0006in-Ra
	for speechsc@megatron.ietf.org; Mon, 25 Oct 2004 12:33:56 -0400
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id MAA12131
	for <speechsc@ietf.org>; Mon, 25 Oct 2004 12:33:53 -0400 (EDT)
Received: from penguin.ericsson.se ([193.180.251.47])
	by ietf-mx.ietf.org with esmtp (Exim 4.33) id 1CM80T-0002Mp-CT
	for speechsc@ietf.org; Mon, 25 Oct 2004 12:47:53 -0400
Received: from esealmw140.al.sw.ericsson.se ([153.88.254.121])
	by penguin.ericsson.se (8.12.10/8.12.10/WIREfire-1.8b) with ESMTP id
	i9PGXefM005123
	for <speechsc@ietf.org>; Mon, 25 Oct 2004 18:33:43 +0200 (MEST)
Received: from esealnt610.al.sw.ericsson.se ([153.88.254.120]) by
	esealmw140.al.sw.ericsson.se with Microsoft SMTPSVC(6.0.3790.211); 
	Mon, 25 Oct 2004 18:33:40 +0200
Received: from [147.214.34.66] (EGA0E00202EAUIV.ki.sw.ericsson.se
	[147.214.34.66]) by esealnt610.al.sw.ericsson.se with SMTP
	(Microsoft Exchange Internet Mail Service Version 5.5.2657.72)
	id VJQ1M52X; Mon, 25 Oct 2004 18:33:40 +0200
Message-ID: <417D0B6E.2090603@ericsson.com>
Date: Mon, 25 Oct 2004 16:19:26 +0200
X-Sybari-Trust: 918f5802 ad48f3dd 526cc6d8 00000179
From: Magnus Westerlund <magnus.westerlund@ericsson.com>
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US;
	rv:1.7.2) Gecko/20040803
X-Accept-Language: sv, en-us, en
MIME-Version: 1.0
To: Sarvi Shanmugham <sarvi@cisco.com>
Subject: Re: [Speechsc] Comments on draft-ietf-speechsc-mrcpv2-03.txt
References: <417979ED.9080501@cisco.com>
In-Reply-To: <417979ED.9080501@cisco.com>
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Transfer-Encoding: 7bit
X-OriginalArrivalTime: 25 Oct 2004 16:33:40.0149 (UTC)
	FILETIME=[626C4650:01C4BAB0]
X-Spam-Score: 0.0 (/)
X-Scan-Signature: 36b1f8810cb91289d885dc8ab4fc8172
Cc: speechsc@ietf.org
X-BeenThere: speechsc@ietf.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Speech Services Control Working Group <speechsc.ietf.org>
List-Unsubscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=unsubscribe>
List-Post: <mailto:speechsc@ietf.org>
List-Help: <mailto:speechsc-request@ietf.org?subject=help>
List-Subscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=subscribe>
Sender: speechsc-bounces@ietf.org
Errors-To: speechsc-bounces@ietf.org
X-Spam-Score: 0.0 (/)
X-Scan-Signature: 87a3f533bb300b99e2a18357f3c1563d

Hi Sarvi,

A few comments inline.

Sarvi Shanmugham wrote:
> Magnus Westerlund wrote:
>>
>> 8. Section 2: Do you completely rule out a proxy in MRCPv2? That would 
>> then be a big difference and should be noted in this section. The 
>> reason is that it strongly affects whether what you have specified in 
>> terms of methods and signalling will work or not. Also when referencing 
>> HTTP headers, it is important to consider this issue.
> 
> I don't believe MRCPv2 proxies have been ruled out. If a proxy exists, 
> I would expect it to be a back-to-back SIP UA which would also proxy 
> the MRCPv2 channel messages coming from the client to the appropriate 
> server behind it.
>

Okay, then if I remember correctly, there are some issues with how things 
are defined. When one has a proxy, one suddenly needs to clarify a lot 
of things, like:
- How does the proxy know where to send a request?
- What changes are allowed to the headers and other parts of the requests?
- How does one perform capability detection/negotiation when one or more 
proxies are present?

Therefore one can't easily bolt on proxies; they have to be considered 
from the beginning. Thus I think that you will have some problems 
allowing proxies. I think the WG needs to decide whether this is 
desired, and secondly whether you are willing to spend the time doing 
the work.


>>
>> 12. Section 3.2, example 2: The offer and answers contain a dynamic 
>> payload type 96 for the audio description that is not mapped. This is 
>> not correct.
> 
> If you are pointing to the missing mapping line for 96, I will get to 
> it in the next draft.

Yes.
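For reference, the corrected offer/answer would presumably carry an rtpmap line for the dynamic payload type; an illustrative SDP sketch (the actual codec bound to payload type 96 in the draft's example may differ):

```
m=audio 49170 RTP/AVP 0 96
a=rtpmap:0 PCMU/8000
a=rtpmap:96 telephone-event/8000
```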

>>
>> 15. Section 3.3, second last paragraph on page 13, last sentence:
>> "The media stream in either
>>     direction may contain more than one Synchronized Source (SSRC)
>>     identifier due to multiple sources contributing to the media on the
>>     pipe and the clientserver SHOULD be able to deal with it. "
>>
>> Please correct "the client server"; is it missing an "or"?
> 
> Yes.
> 
>>
>> Also, what do you mean by dealing with it? They are clearly 
>> different sources, and unless RTP retransmission is used, performing 
>> speech recognition on different SSRCs is like doing it on different 
>> phone lines in some sense, while in others it is natural to do it on 
>> the mixed signal from all sources. Further considerations are needed.
> 
> It is assumed to be one source. But there may be cases where RFC 2833 
> DTMF packets arrive on the same RTP pipe but with different SSRCs. I am 
> also not ruling out the case where the server may want to support 
> recognizing hotwords when listening to a conference. There could be 
> different speakers on different SSRCs in the multicast case and we might 
> want to respond to anyone saying the "hotword".
> 

Okay, I took a look at section 4.3 in version 05. It is a start; however, 
I think you will need a lot more specification in relation to the 
media transport. The question is how to handle this once one has 
actually learned what is needed. I have too little knowledge about your 
applications to really help you here. I can compare to RTSP, where it 
has become evident that we needed to clarify a lot about the handling of 
RTP. I would recommend you take a look at section B of 
draft-ietf-mmusic-rfc2326bis-08. I know that this section is not 
directly applicable; however, it may give you some points to think over.

I would also consider how the negotiation of media in the SIP domain 
affects the possibility for interoperability.
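If a server does accept multiple SSRCs on one RTP session, it at least needs to demultiplex per source before deciding whether to recognize them separately or on a mixed signal. A minimal sketch in Python (function names are mine, not from the draft; only the fixed 12-byte RTP header layout from RFC 3550 is assumed):

```python
import struct
from collections import defaultdict

def rtp_ssrc(packet: bytes) -> int:
    """Extract the SSRC from a minimal RTP header (bytes 8-11, RFC 3550)."""
    if len(packet) < 12:
        raise ValueError("an RTP header is at least 12 bytes")
    return struct.unpack("!I", packet[8:12])[0]

def demux_by_ssrc(packets):
    """Group incoming RTP packets per source, e.g. to feed one
    recognizer instance per speaker in the conference/hotword case."""
    streams = defaultdict(list)
    for pkt in packets:
        streams[rtp_ssrc(pkt)].append(pkt)
    return streams
```

Whether each per-SSRC stream then gets its own recognition context, or the sources are mixed first, is exactly the policy question raised above.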

>>
>> 33. Section 6.1, Cache-Control: So if I understand this correctly, 
>> this header controls how the media server should cache any resource 
>> that is referenced in a request to perform a resource purpose, like 
>> recognition? This is rather different from HTTP's use of the header. It 
>> might be wise to consider renaming it.
> 
> Well, the usage of these headers has the same meaning as in HTTP, except 
> that it is not meant for the relationship between the MRCP client and 
> the server, but for the HTTP client on the MRCP server, its cache, and 
> their relationship with the HTTP server it goes to to fetch documents. 
> Which is why these headers have been leveraged from HTTP and their 
> naming has been left the same.
> 

I think that difference is important. In HTTP the cache-control applies 
to the response, while in MRCPv2 it applies to a referenced resource. I 
think one should consider renaming the header to avoid misunderstanding.

By the way, is MRCP intended to be part of the common header registry 
that email and HTTP are part of?

>>
>> 36. Section 8: I think that one should spend some paragraphs 
>> writing an introduction to each different type of resource, describing 
>> its purpose and the normal setup in which it is used. In the 
>> beginning I had some problems understanding how everything worked. It 
>> will also make things easier when some AD is going to read it later.
> 
> Are you referring to any particular resource? Because as I see it, each 
> resource does have an introduction in the beginning describing its 
> capabilities.

No, I think it has to do with all the resources. However, it might be 
that I do not know the services well enough.

>>
>> 56. Section 8.11, Example: The message lengths are clearly wrong.
>>
> I have not tried to exactly match the message lengths. I don't think 
> that is a requirement, is it?

Then you should clearly note that, because people read examples very 
carefully, and any failure to be compliant with the specification 
confuses people reading the spec.

Cheers

Magnus Westerlund

Multimedia Technologies, Ericsson Research EAB/TVA/A
----------------------------------------------------------------------
Ericsson AB                | Phone +46 8 4048287
Torshamsgatan 23           | Fax   +46 8 7575550
S-164 80 Stockholm, Sweden | mailto: magnus.westerlund@ericsson.com


_______________________________________________
Speechsc mailing list
Speechsc@ietf.org
https://www1.ietf.org/mailman/listinfo/speechsc


From speechsc-bounces@ietf.org  Mon Oct 25 19:49:54 2004
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id TAA19609
	for <speechsc-web-archive@ietf.org>; Mon, 25 Oct 2004 19:49:54 -0400 (EDT)
Received: from megatron.ietf.org ([132.151.6.71])
	by ietf-mx.ietf.org with esmtp (Exim 4.33)
	id 1CMEod-0004gG-KM
	for speechsc-web-archive@ietf.org; Mon, 25 Oct 2004 20:03:58 -0400
Received: from localhost.localdomain ([127.0.0.1] helo=megatron.ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32)
	id 1CMENi-0006cP-1Z; Mon, 25 Oct 2004 19:36:06 -0400
Received: from odin.ietf.org ([132.151.1.176] helo=ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32) id 1CMBay-0007Gh-Sp
	for speechsc@megatron.ietf.org; Mon, 25 Oct 2004 16:37:36 -0400
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id QAA06294
	for <speechsc@ietf.org>; Mon, 25 Oct 2004 16:37:34 -0400 (EDT)
Received: from mail.lumenvox.com ([206.169.193.45] helo=lv-svr.lumenvox.com)
	by ietf-mx.ietf.org with esmtp (Exim 4.33) id 1CMBoU-0008So-IB
	for speechsc@ietf.org; Mon, 25 Oct 2004 16:51:35 -0400
Received: from PROGTHOMAS ([10.0.0.141]) by lv-svr.lumenvox.com with Microsoft
	SMTPSVC(5.0.2195.6713); Mon, 25 Oct 2004 13:28:32 -0700
From: "Thomas Gal" <ThomasGal@LumenVox.com>
To: "'Magnus Westerlund'" <magnus.westerlund@ericsson.com>,
        "'Sarvi Shanmugham'" <sarvi@cisco.com>
Subject: RE: [Speechsc] Comments on draft-ietf-speechsc-mrcpv2-03.txt
Date: Mon, 25 Oct 2004 13:36:56 -0700
Organization: LumenVox LLC
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Office Outlook, Build 11.0.6353
Thread-Index: AcS6siwZQJSzPn42RmeXhbuRMK6e4AAGH6ug
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2800.1441
In-Reply-To: <2D0CA64CDC33E14DA7AB043B8CC4D2BB0263AC0D@svr-exc.domain.com>
Message-ID: <LV-SVRW5WnHNxW7sagp00001158@lv-svr.lumenvox.com>
X-OriginalArrivalTime: 25 Oct 2004 20:28:32.0984 (UTC)
	FILETIME=[32683D80:01C4BAD1]
X-Spam-Score: 0.0 (/)
X-Scan-Signature: bf422c85703d3d847fb014987125ac48
Cc: speechsc@ietf.org
X-BeenThere: speechsc@ietf.org
X-Mailman-Version: 2.1.5
Precedence: list
Reply-To: ThomasGal@LumenVox.com
List-Id: Speech Services Control Working Group <speechsc.ietf.org>
List-Unsubscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=unsubscribe>
List-Post: <mailto:speechsc@ietf.org>
List-Help: <mailto:speechsc-request@ietf.org?subject=help>
List-Subscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=subscribe>
Sender: speechsc-bounces@ietf.org
Errors-To: speechsc-bounces@ietf.org
X-Spam-Score: 0.0 (/)
X-Scan-Signature: 7a0494a0224ca59418dd8f92694c1fdb

Comments inline. I never got any comments back on the wording right before
section 2 on page 4:

"It also describes how these messages are carried over a transport layer
such as TCP, SCTP or TLS."

But again I think it should be something like:

"It also describes how these messages are carried over a transport layer
such as TCP or SCTP, and how they can be secured through using TLS."

Just from a perspective of keeping the concepts in line. 

-Tom

thomasgal@lumenvox.com  

> Sarvi Shanmugham wrote:
> > Magnus Westerlund wrote:
> >>
> >> 8. Section 2: Do you completely rule out a proxy in MRCPv2? That 
> >> would then be a big difference and should be noted in this section. 
> >> The reason is that it strongly affects whether what you have 
> >> specified in terms of methods and signalling will work or not. Also 
> >> when referencing HTTP headers, it is important to consider this 
> >> issue.
> > 
> > I don't believe MRCPv2 proxies have been ruled out. If a proxy 
> > exists, I would expect it to be a back-to-back SIP UA which would 
> > also proxy the MRCPv2 channel messages coming from the client to the 
> > appropriate server behind it.
> >
> 
> Okay, then if I remember correctly, there are some issues with 
> how things are defined. When one has a proxy, one suddenly 
> needs to clarify a lot of things, like:
> - How does the proxy know where to send a request?

Probably you would have a farm of MRCP servers that REGISTER their 
availability with a registrar, which is accessed by the front-end proxies 
to know where to send requests. I would assume that most people's engines 
already have some sort of load-balancing functionality built in anyway, 
so this might be a layer behind the front end as well.


> - What changes are allowed to the headers and other parts of 
> the requests?

I would suspect none. MRCP is for controlling the media resources, not 
for routing requests, right?

> - How does one perform capability detection/negotiation when 
> one or more proxies are present?
> 

No doubt the proxies COULD have varying abilities. Either way, they are
really just functioning to pass the requests along in a certain fashion,
correct? What specific limiting case are you thinking of?

	Correct me if I'm wrong, but these seem like SIP system
implementation issues, not MRCP at all. Rather, since this is "supposed" to
be protocol agnostic, isn't this more of an issue for the protocols used in
setting up the system, as opposed to something we need to address directly?
What strong effects does it have?


> Therefore one can't easily bolt on proxies; they have to be 
> considered from the beginning. Thus I think that you will 
> have some problems allowing proxies. I think the WG needs 
> to decide whether this is desired, and secondly whether you 
> are willing to spend the time doing the work.
> 

	So we need to define what specifically about MRCP begets routing
functionality at this level before we make that decision. What exactly is it
that SIP, for example, is missing that makes us want to add
functionality (complexity) to MRCP? Obviously there are lots of cases where
you may make SIP decisions based on MRCP headers and such, but again I think
this is more from an infrastructure standpoint and is really just
optimizations, not core MRCP, similar to how we were fiddling with combining
enrollment functionality with recognitions.


> >> Also, what do you mean by dealing with it? They are clearly 
> >> different sources, and unless RTP retransmission is used, 
> >> performing speech recognition on different SSRCs is like doing it 
> >> on different phone lines in some sense, while in others it is 
> >> natural to do it on the mixed signal from all sources. Further 
> >> considerations are needed.
> > 
> > It is assumed to be one source. But there may be cases where 
> > RFC 2833 DTMF packets arrive on the same RTP pipe but with 
> > different SSRCs. I am also not ruling out the case where the server 
> > may want to support recognizing hotwords when listening to a 
> > conference. There could be different speakers on different SSRCs in 
> > the multicast case and we might want to respond to anyone saying 
> > the "hotword".
> > 
> 
> Okay, I took a look at section 4.3 in version 05. It is a 
> start; however, I think you will need a lot more 
> specification in relation to the media transport. The 
> question is how to handle this once one has actually learned 
> what is needed. I have too little knowledge about your 
> applications to really help you here. I can compare to RTSP, 
> where it has become evident that we needed to clarify a lot 
> about the handling of RTP. I would recommend you take a look 
> at section B of draft-ietf-mmusic-rfc2326bis-08. I know that 
> this section is not directly applicable; however, it may give 
> you some points to think over.
> 
> I would also consider how the negotiation of media in the SIP 
> domain affects the possibility for interoperability.
> 

	I agree that we absolutely should not preclude the ability to send
audio that is the product of mixing a conference to a recognizer, with the
understanding that currently this is obviously complicated from the
recognition end as well. I'm not sure about the details, though. Again,
does this need to be specified in this document?

> >>
> >> 33. Section 6.1, Cache-Control: So if I understand this correctly, 
> >> this header controls how the media server should cache any 
> >> resource that is referenced in a request to perform a resource 
> >> purpose, like recognition? This is rather different from HTTP's 
> >> use of the header. It might be wise to consider renaming it.
> > 
> > Well, the usage of these headers has the same meaning as in HTTP, 
> > except that it is not meant for the relationship between the MRCP 
> > client and the server, but for the HTTP client on the MRCP server, 
> > its cache, and their relationship with the HTTP server it goes to 
> > to fetch documents. Which is why these headers have been leveraged 
> > from HTTP and their naming has been left the same.
> > 
> 
> I think that difference is important. In HTTP the 
> cache-control applies to the response, while in MRCPv2 it 
> applies to a referenced resource. I think one should consider 
> renaming the header to avoid misunderstanding.
> 

Absolutely. Perhaps something like HTTP-CACHE-CONTROL or
REFERENCE-CACHE-CONTROL. I include the second one because I would guess
that what we really mean this for is incoming cache control (like
references in an SRGS grammar to another SRGS grammar, or a URL/text-list
with more URLs, etc.) as opposed to outgoing (i.e., responses to clients).
In that case I think REFERENCE-CACHE-CONTROL makes more sense, because we
don't necessarily want to preclude using other protocols like FTP, as well
as the "session:" URLs, right? Do we literally only mean HTTP URLs? If not,
perhaps the wording in this section could be changed a little bit to say
that we are only referring to HTTP for the context of the caching
behavior, not limiting the protocols that can be used in external
references.
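As an illustration of the renaming idea, a fetch-related directive might look like this (the header name and the exact directive syntax here are hypothetical, not from the draft; the directive vocabulary is borrowed from HTTP's Cache-Control):

```
REFERENCE-CACHE-CONTROL: max-age=86400
```

The point of the new name is that the directives could stay identical to HTTP's while making explicit that they govern fetched grammar/document references, not MRCP responses.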

> 	b) Another pending issue is the start line format (see 
> attached mail thread).
> 	  
> 
> This was changed based on early feedback Cullen gave on 
> this mailing list. The idea was that it was good practice to 
> have the first field identify the protocol and version. 
> Also, that the message length be available at a fixed 
> location from the beginning of the message for all messages. 
> It was said that it makes parsing efficient, as we know the 
> length of the message by looking at a fixed location from the 
> beginning of the message.
> 
> 

I think this was an absolutely great idea. I feel like most protocols are
like this so that you don't have to wait until some indeterminate point in
the message to know which version of the parser to send everything to. I
also think the message length should come early so you can optimize your
network reads. Wasn't it odd that in the first version it wasn't actually
consistent, and it appeared in different locations based on the type of
request?


_______________________________________________
Speechsc mailing list
Speechsc@ietf.org
https://www1.ietf.org/mailman/listinfo/speechsc


From speechsc-bounces@ietf.org  Tue Oct 26 07:06:57 2004
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id HAA23605
	for <speechsc-web-archive@ietf.org>; Tue, 26 Oct 2004 07:06:57 -0400 (EDT)
Received: from megatron.ietf.org ([132.151.6.71])
	by ietf-mx.ietf.org with esmtp (Exim 4.33)
	id 1CMPNy-0000Ra-CG
	for speechsc-web-archive@ietf.org; Tue, 26 Oct 2004 07:21:07 -0400
Received: from localhost.localdomain ([127.0.0.1] helo=megatron.ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32)
	id 1CMP95-0003sN-IG; Tue, 26 Oct 2004 07:05:43 -0400
Received: from odin.ietf.org ([132.151.1.176] helo=ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32) id 1CMP7Z-0003Yb-8m
	for speechsc@megatron.ietf.org; Tue, 26 Oct 2004 07:04:10 -0400
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id HAA23301
	for <speechsc@ietf.org>; Tue, 26 Oct 2004 07:04:06 -0400 (EDT)
Received: from mx2.scansoft.com ([198.71.64.82])
	by ietf-mx.ietf.org with esmtp (Exim 4.33) id 1CMPLC-0000Nf-Dd
	for speechsc@ietf.org; Tue, 26 Oct 2004 07:18:15 -0400
Received: from pb-exchcon.pb.scansoft.com ([10.1.4.73]) by mx2 with
	trend_isnt_name_B; Tue, 26 Oct 2004 07:05:48 -0400
Received: by pb-exchcon.pb.scansoft.com with Internet Mail Service
	(5.5.2653.19) id <VTXX0CTZ>; Tue, 26 Oct 2004 07:03:30 -0400
Message-ID: <BBF29C9B95E52E4DB5C29A0ACC94E83BA62391@ac-exch1.eu.scansoft.com>
From: "Reifenrath, Klaus" <Klaus.Reifenrath@Scansoft.com>
To: "'Sarvi Shanmugham'" <sarvi@cisco.com>
Subject: RE: [Speechsc] Note Self :-)
Date: Tue, 26 Oct 2004 07:01:37 -0400
MIME-Version: 1.0
X-Mailer: Internet Mail Service (5.5.2653.19)
X-Spam-Score: 1.2 (+)
X-Scan-Signature: 8a9672ae1970aa20cd94e880017fa9b4
Cc: "IETF SPEECHSC \(E-mail\)" <speechsc@ietf.org>
X-BeenThere: speechsc@ietf.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Speech Services Control Working Group <speechsc.ietf.org>
List-Unsubscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=unsubscribe>
List-Post: <mailto:speechsc@ietf.org>
List-Help: <mailto:speechsc-request@ietf.org?subject=help>
List-Subscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1984766728=="
Sender: speechsc-bounces@ietf.org
Errors-To: speechsc-bounces@ietf.org
X-Spam-Score: 1.2 (+)
X-Scan-Signature: c6d4566aad1fef50f784fa8a77ccada7


--===============1984766728==
Content-Type: multipart/alternative;
	boundary="----_=_NextPart_001_01C4BB4B.29C03460"


------_=_NextPart_001_01C4BB4B.29C03460
Content-Type: text/plain;
	charset="iso-8859-1"

Comments inline
 
A few more editorial notes:
1) The legend for the generic header fields is wrong (copy-and-paste error).
2) In the legend on page 57 the letter (M) is used twice.
3) I like the new tables of the header fields very much. Unfortunately they
no longer show which header fields MUST be supported by all MRCP servers.
 
Regards,
Klaus

-----Original Message-----
From: Sarvi Shanmugham [mailto:sarvi@cisco.com]
Sent: Friday, 22 October 2004 19:24
To: Reifenrath, Klaus
Cc: IETF SPEECHSC (E-mail)
Subject: Re: [Speechsc] Note Self :-)


Comments inline.

Reifenrath, Klaus wrote:


Hi Sarvi,



I did not have much time to read the 05 draft yet. I think we need to wait
with last call until the NLSML section has been carefully reviewed! This
may take some time, as we need to involve ASR experts.

I agree there needs to be some review of this section. But I should also
note that the agreement was that this would be the same as the NLSML used
in MRCPv1 and that we would not try to add any new features.
[Reifenrath, Klaus] I agree 100%.
What I have tried to do is bring in the NLSML from the latest W3C NLSML
draft and make it consistent with MRCPv1 NLSML usage.
[Reifenrath, Klaus] Thanks for your effort!!
So let's go ahead and review to make sure that the NLSML section meets
those goals.
[Reifenrath, Klaus] OK. Review is already in progress.


According to the current spec the verification and enrollment results
should be in MRCP-NLSML. Therefore the W3C schema and DTD of MRCP-NLSML
should include the verification and enrollment result elements. I attach
the W3C schema for the verification result that I posted some weeks ago,
which can easily be integrated into your NLSML schema.

  

I don't think we are planning on calling it anything other than NLSML,
correct?
But that said, I think we should leave the 3 schemas as separate entities,
because we may end up using the Enrollment and Verification schemas even
when we move to EMMA, assuming EMMA would still not be addressing these 2
aspects in its first version. So leaving it modular has its advantages.
[Reifenrath, Klaus] I think we should add a reference to the NLSML W3C
working draft on page 73, and we should make clear that NLSML will be
replaced by EMMA in the future.
But note that we still embed the Verification and Enrollment XML within
the NLSML markup, but under its own namespace.
[Reifenrath, Klaus] What is the meaning of the mandatory grammar attribute
for verification results? Since NLSML is now part of the MRCP spec I
wonder if namespacing is still required.
Just that we are leaving its definition as a separate schema.

I will try to convert the rest of the XML into W3C schema as well.
[Reifenrath, Klaus] Why don't you use the W3C schema I submitted?


Further major issues:

a) From my point of view a section about SIP-based server selection
(RFC 3841) is still missing.

Do you have some text in mind that you would like to add? If not, I will
add some in the next draft.


b) Another pending issue is the start line format (see attached mail
thread).

This was changed based on early feedback Cullen gave on this mailing
list. The idea was that it was good practice to have the first field
identify the protocol and version. Also, that the message length be
available at a fixed location from the beginning of the message for all
messages. It was said that it makes parsing efficient, as we know the
length of the message by looking at a fixed location from the beginning
of the message.
[Reifenrath, Klaus] OK


Minor issues:

a) on page 16 you reference section 3.2 instead of 4.2

Fixed.

b) section 5: does the message length of the start line include the length
of the start line?

Have corrected as follows: "The message-length field specifies the length
of the message, including the start-line, and MUST be the 2nd token from
the beginning of the message."
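As a sketch of why this convention helps, here is a hedged illustration of a framer that needs nothing beyond the start-line to find the message boundary. Only "version first, total length second, length includes the start-line" is taken from the text above; everything else (function name, error handling) is mine:

```python
def frame_mrcp_message(buffer: bytes):
    """Return (message, remainder) once a complete MRCPv2 message is
    buffered, or (None, buffer) if more bytes are needed.

    Because the start-line begins with the version and carries the total
    message length as its second token, the framer never has to parse
    deeper headers just to find where the message ends.
    """
    line_end = buffer.find(b"\r\n")
    if line_end == -1:
        return None, buffer                     # start-line not complete yet
    tokens = buffer[:line_end].split()
    if len(tokens) < 2 or not tokens[0].startswith(b"MRCP/"):
        raise ValueError("not an MRCPv2 start-line")
    total = int(tokens[1])                      # length includes the start-line
    if len(buffer) < total:
        return None, buffer                     # wait for more bytes
    return buffer[:total], buffer[total:]
```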

c) section 5.2 (page 20): is 405 also the right failure code if the
session is not found?

Yes, will clarify.


d) hot-max-duration/hot-min-duration (on page 66) specify the
maximum/minimum length of an utterance in milliseconds (not in seconds)

Looks like this was fixed in -05.txt.
[Reifenrath, Klaus] No. On page 66 you find the following sentences:

    It specifies the maximum length of an utterance (in seconds) that 
    should be considered for Hotword recognition.

    It specifies the minimum length of an utterance (in seconds) that can 
    be considered for Hotword.
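For what it's worth, if the unit really is milliseconds, illustrative values would read as follows (header names from the draft, values invented for the example):

```
Hot-Min-Duration: 250
Hot-Max-Duration: 3000
```

i.e. 0.25 and 3 seconds, which is why stating the unit correctly in the prose matters.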

thanks,
Sarvi


Klaus



 







-----Original Message-----
From: Sarvi Shanmugham [mailto:sarvi@cisco.com]
Sent: Wednesday, 20 October 2004 22:42
To: IETF SPEECHSC (E-mail)
Subject: [Speechsc] Note Self :-)







I noticed that some of the NLSML examples in the appendix of the
-05.txt draft have confidence values of 100, 60 or 40. They should be
1.0, 0.6 and 0.4.
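An illustrative corrected fragment might read as follows (the element names follow common NLSML usage; the exact markup in the appendix may differ):

```
<interpretation confidence="0.6">
  <input mode="speech">pizza</input>
</interpretation>
```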



I have fixed these in my word document and will be available shortly in 

-06.txt.

The 06.txt changes are mainly expected to be the the  IANA consideration 

and if people feel strongly converting the Verification/Enrollment XML 

to W3C Schema. I am hoping we can go last call with the -06 draft, and I 

don't expect any major changes to the spec in the -06 draft. If anyone 

feels otherwise, please let me know.



Thx,

Sarvi.



_______________________________________________
Speechsc mailing list
Speechsc@ietf.org
https://www1.ietf.org/mailman/listinfo/speechsc



  

  _____  



Subject: 
RE: [Speechsc] question about v2 message format

From: 
Pierre Forgues <forgues@nuance.com>

Date: 
Mon, 28 Jun 2004 07:44:30 -0400

To: 
"Reifenrath, Klaus" <Klaus.Reifenrath@Scansoft.com>, Daniel Burnett 
<burnett@nuance.com>

CC: 
speechsc@ietf.org

I agree we should follow the lead of other protocols like SIP. The client
to server request-line must start with the method and end with the
protocol and version.

The server to client response-line and event-line start with the protocol
and version.

Pierre



-----Original Message-----
From: speechsc-bounces@ietf.org [mailto:speechsc-bounces@ietf.org] On
Behalf Of Reifenrath, Klaus
Sent: Friday, June 25, 2004 8:01 AM
To: Daniel Burnett
Cc: speechsc@ietf.org
Subject: RE: [Speechsc] question about v2 message format



Hi Dan,

I like the SIP format, where the start-line of a request starts with the
method and the status-line of the response starts with the SIP-version.
Unfortunately we have 3 types of MRCP messages: requests, responses and
events. If we change the message definition, I would suggest starting the
request-line with the method and the response-line and event-line with
the MRCP version.

But also with the current MRCPv2 message definition the message parser
can distinguish the message types efficiently. Having the version
information right at the beginning has some advantages.

Klaus







-----Original Message-----
From: Daniel Burnett [mailto:burnett@nuance.com]
Sent: Friday, 25 June 2004 06:28
To: speechsc@ietf.org
Subject: [Speechsc] question about v2 message format





The convention used for MRCP messages has changed in v2, and it is
different from that in RTSP, SIP, and HTTP.

New convention: "MRCP/2.0 123 VER-SET-VOICEPRINT 123456"
Old convention: "START-OF-SPEECH 543258 IN-PROGRESS MRCP/1.0"

Why the change in convention?



-- dan





_______________________________________________
Speechsc mailing list
Speechsc@ietf.org
https://www1.ietf.org/mailman/listinfo/speechsc




------_=_NextPart_001_01C4BB4B.29C03460
Content-Type: text/html;
	charset="iso-8859-1"

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
<TITLE></TITLE>

<META content="MSHTML 6.00.2800.1458" name=GENERATOR></HEAD>
<BODY>
<DIV><FONT face=Arial color=#0000ff size=2><SPAN 
class=583340909-26102004>Comments inline</SPAN></FONT></DIV>
<DIV><FONT face=Arial color=#0000ff size=2><SPAN 
class=583340909-26102004></SPAN></FONT>&nbsp;</DIV>
<DIV><FONT face=Arial color=#0000ff size=2><SPAN class=583340909-26102004>A few 
more editorial notes:</SPAN></FONT></DIV>
<DIV><FONT face=Arial color=#0000ff size=2><SPAN class=583340909-26102004>1) The 
legend for the generic header fields&nbsp;is wrong (copy and paste 
error).</SPAN></FONT></DIV>
<DIV><FONT face=Arial color=#0000ff size=2><SPAN 
class=583340909-26102004>2)&nbsp;In the legend on page 57 the letter (M) is used 
twice.</SPAN></FONT></DIV>
<DIV><FONT face=Arial color=#0000ff size=2><SPAN class=583340909-26102004>3) I 
like the new tables of the header fields very much. Unfortunately they no longer 
show, which header fields MUST be supported by all MRCP 
servers.&nbsp;</SPAN></FONT></DIV>
<DIV><FONT face=Arial color=#0000ff size=2><SPAN 
class=583340909-26102004></SPAN></FONT>&nbsp;</DIV>
<DIV><FONT face=Arial color=#0000ff size=2><SPAN 
class=583340909-26102004>Regards,</SPAN></FONT></DIV>
<DIV><FONT face=Arial color=#0000ff size=2><SPAN 
class=583340909-26102004>Klaus</SPAN></FONT></DIV>
<BLOCKQUOTE>
  <DIV class=OutlookMessageHeader dir=ltr align=left><FONT face=Tahoma 
  size=2>-----Original Message-----<BR><B>From:</B> Sarvi Shanmugham 
  [mailto:sarvi@cisco.com]<BR><B>Sent:</B> Freitag, 22. Oktober 2004 
  19:24<BR><B>To:</B> Reifenrath, Klaus<BR><B>Cc:</B> IETF SPEECHSC 
  (E-mail)<BR><B>Subject:</B> Re: [Speechsc] Note Self 
  :-)<BR><BR></FONT></DIV>Comments inline.<BR><BR>Reifenrath, Klaus wrote:<BR>
  <BLOCKQUOTE 
  cite=midBBF29C9B95E52E4DB5C29A0ACC94E83BA6236D@ac-exch1.eu.scansoft.com 
  type="cite"><PRE wrap="">Hi Sarvi,

I did not have much time to read the 05 draft yet. I think we need to wait
for last call until the NLSML section was carefully reviewed! This may take
some time as we need to evolve ASR experts.
  </PRE></BLOCKQUOTE>
  <DIV>I agree there needs to be some review of this section. But I should also 
  note that the agreement was that this would be the same as the NLSML used in 
  MRCPv1 and that we would not try to add any new features.&nbsp; <BR><SPAN 
  class=583340909-26102004><FONT face=Arial color=#0000ff size=2>[Reifenrath, 
  Klaus]&nbsp;I agree 100%.</FONT></SPAN></DIV>
  <DIV><SPAN class=583340909-26102004>&nbsp;</SPAN>What I have tried to do is to 
  bring in the NLSML from the latest W3C NLSML draft and make it consistent with 
  MRCPv1 NLSML usage.&nbsp;<BR><SPAN class=583340909-26102004><FONT face=Arial 
  color=#0000ff size=2>[Reifenrath, Klaus]&nbsp;Thanks for your 
  effort!!</FONT></SPAN></DIV>
  <DIV><SPAN class=583340909-26102004>&nbsp;</SPAN> So lets go ahead and review 
  to make sure that NLSML section meets those goals.<BR><SPAN 
  class=583340909-26102004><FONT face=Arial color=#0000ff size=2>[Reifenrath, 
  Klaus]&nbsp;OK. Review is already in progress.</FONT></SPAN><BR></DIV>
  <BLOCKQUOTE 
  cite=midBBF29C9B95E52E4DB5C29A0ACC94E83BA6236D@ac-exch1.eu.scansoft.com 
  type="cite"><PRE wrap="">According to the current spec the verification and enrollment results should
be in MRCP-NLSML. Therefore the W3C schema and DTD of MRCP-NLSML should
include the verification and enrollment result elements. I attach the W3C
schema for the verification result that I posted some weeks ago, which can
easily be integrated into your NLSML schema.
  </PRE></BLOCKQUOTE>
  <DIV>I don't think we are planning on calling it anything other than 
  NLSML, correct. <BR>But that said, I think we should leave the 3 schemas as 
  separate entities, because we may end up using the Enrollment and Verification 
  Schema even when we move to EMMA, assuming EMMA will still not address 
  these 2 aspects in its first version. So leaving it modular has its 
  advantages.<BR><SPAN class=583340909-26102004><FONT face=Arial color=#0000ff 
  size=2>[Reifenrath, Klaus]&nbsp;I think we should add a reference to the NLSML 
  W3C working draft&nbsp;on page 73 and we should make clear that NLSML will be 
  replaced by EMMA in the future.&nbsp;</FONT></SPAN><BR>But note that we still 
  embed the Verification and Enrollment XML within the NLSML markup but under 
  its own namespace. <BR><SPAN class=583340909-26102004><FONT face=Arial 
  size=2><FONT color=#0000ff>[Reifenrath, Klaus]&nbsp;What is the meaning of the 
  mandatory grammar attribute for verification results?&nbsp;<FONT 
  face="Times New Roman" size=3>Since NLSML is now part of the MRCP spec I 
  wonder if namespacing is still required.</FONT></FONT></FONT></SPAN></DIV>
  <DIV><SPAN class=583340909-26102004></SPAN>Just that we are leaving its 
  definition as a separate schema.<BR><BR>I will try to convert the rest of the 
  XML into W3C schema as well. <BR><SPAN class=583340909-26102004><FONT 
  face=Arial color=#0000ff size=2>[Reifenrath, Klaus]&nbsp;Why don't you use the 
  W3C schema I submitted?</FONT></SPAN><BR></DIV>
  <BLOCKQUOTE 
  cite=midBBF29C9B95E52E4DB5C29A0ACC94E83BA6236D@ac-exch1.eu.scansoft.com 
  type="cite"><PRE wrap="">Further major issues:
a) From my point of view a section about SIP based server selection (RFC
3841) is still missing.
  </PRE></BLOCKQUOTE>Do you have some text in mind that you would like to add? 
  If not, I will add some in the next draft.<BR>
  <BLOCKQUOTE 
  cite=midBBF29C9B95E52E4DB5C29A0ACC94E83BA6236D@ac-exch1.eu.scansoft.com 
  type="cite"><PRE wrap="">b) Another pending issue is the start line format (see attached mail
thread).
  </PRE></BLOCKQUOTE>This was changed based on early feedback Cullen gave 
  on this mailing list. The idea was that it is good practice to have the first 
  field identify the protocol and version, and that the message length be 
  available at a fixed offset from the beginning of the message for all 
  messages. This makes parsing efficient, since the parser learns the length of 
  the message by looking at a fixed location from the start of the 
  message.<BR><SPAN class=583340909-26102004><FONT face=Arial color=#0000ff 
  size=2>[Reifenrath, Klaus]&nbsp;OK&nbsp;</FONT></SPAN><BR>
  <BLOCKQUOTE 
  cite=midBBF29C9B95E52E4DB5C29A0ACC94E83BA6236D@ac-exch1.eu.scansoft.com 
  type="cite"><PRE wrap="">Minor issues:
a) on page 16 you reference section 3.2 instead of 4.2
  </PRE></BLOCKQUOTE>fixed.<BR>
  <BLOCKQUOTE 
  cite=midBBF29C9B95E52E4DB5C29A0ACC94E83BA6236D@ac-exch1.eu.scansoft.com 
  type="cite"><PRE wrap="">b) section 5: does the message length of the start line include the length
of the start line?
  </PRE></BLOCKQUOTE>Have corrected as follows: <SPAN lang=EN-US 
  style="FONT-SIZE: 12pt; FONT-FAMILY: 'Times New Roman'">"The message-length 
  field specifies the length of the message, <B>including the start-line,</B> 
  and MUST be the 2nd token from the beginning of the message."</SPAN> 
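To make the rationale concrete, here is a hedged sketch (function and variable names are my own, not from the draft) of a framer that needs only the first two tokens of the start-line: token 1 identifies the protocol and version, token 2 gives the total message length, including the start-line itself:

```python
def frame_length(message: bytes) -> int:
    """Read an MRCPv2 start-line prefix: token 1 is the protocol/version,
    token 2 the total message length (which includes the start-line)."""
    version, length, _rest = message.split(b" ", 2)
    if version != b"MRCP/2.0":
        raise ValueError(f"unexpected protocol token: {version!r}")
    return int(length)

# With the length at a fixed position, a reader knows how many bytes to
# consume before it has parsed anything else in the message.
print(frame_length(b"MRCP/2.0 123 VER-SET-VOICEPRINT 123456"))  # prints 123
```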
  <BLOCKQUOTE 
  cite=midBBF29C9B95E52E4DB5C29A0ACC94E83BA6236D@ac-exch1.eu.scansoft.com 
  type="cite"><PRE wrap="">c) section 5.2 (page 20): is 405 also the right failure code, if the session
is not found 
  </PRE></BLOCKQUOTE>Yes. Will clarify.<BR>
  <BLOCKQUOTE 
  cite=midBBF29C9B95E52E4DB5C29A0ACC94E83BA6236D@ac-exch1.eu.scansoft.com 
  type="cite"><PRE wrap="">d) hot-max-duration/hot-min-duration (on page 66) specify the
maximum/minimum length of an utterance in milliseconds (not in seconds)
  </PRE></BLOCKQUOTE>
  <DIV>Looks like this was fixed in -05.txt<BR><SPAN 
  class=583340909-26102004><FONT face=Arial color=#0000ff size=2>[Reifenrath, 
  Klaus]&nbsp;No. On page 66 you find the following 
  sentences:</FONT></SPAN></DIV>
  <DIV><SPAN class=583340909-26102004><FONT face=Arial color=#0000ff 
  size=2></FONT></SPAN>&nbsp;</DIV>
  <DIV><SPAN class=583340909-26102004><FONT face=Arial color=#0000ff 
  size=2>&nbsp;&nbsp;&nbsp; It&nbsp;specifies the maximum length of an utterance 
  (in seconds) that <BR>&nbsp;&nbsp;&nbsp; should be considered for Hotword 
  recognition.</FONT></SPAN></DIV>
  <DIV><SPAN class=583340909-26102004><FONT face=Arial color=#0000ff 
  size=2></FONT></SPAN>&nbsp;</DIV>
  <DIV><SPAN class=583340909-26102004><FONT face=Arial color=#0000ff 
  size=2>&nbsp;&nbsp;&nbsp; It&nbsp;specifies the minimum length of an utterance 
  (in seconds) that can <BR>&nbsp;&nbsp;&nbsp; be considered for 
  Hotword.</FONT>&nbsp;</SPAN><BR><BR>thanks,<BR>Sarvi<BR></DIV>
  <BLOCKQUOTE 
  cite=midBBF29C9B95E52E4DB5C29A0ACC94E83BA6236D@ac-exch1.eu.scansoft.com 
  type="cite"><PRE wrap="">Klaus

 



-----Original Message-----
From: Sarvi Shanmugham [<A class=moz-txt-link-freetext href="mailto:sarvi@cisco.com">mailto:sarvi@cisco.com</A>]
Sent: Mittwoch, 20. Oktober 2004 22:42
To: IETF SPEECHSC (E-mail)
Subject: [Speechsc] Note Self :-)



I noticed that some of the NLSML examples in the appendix of the 
-05.txt draft have confidence values of 100, 60 or 40. They should be 
1.0, 0.6 and 0.4.

I have fixed these in my Word document; the fixes will be available shortly in 
-06.txt.
The -06.txt changes are mainly expected to be the IANA considerations 
and, if people feel strongly, converting the Verification/Enrollment XML 
to W3C Schema. I am hoping we can go to last call with the -06 draft, and I 
don't expect any major changes to the spec in it. If anyone 
feels otherwise, please let me know.

Thx,
Sarvi.

_______________________________________________
Speechsc mailing list
<A class=moz-txt-link-abbreviated href="mailto:Speechsc@ietf.org">Speechsc@ietf.org</A>
<A class=moz-txt-link-freetext href="https://www1.ietf.org/mailman/listinfo/speechsc">https://www1.ietf.org/mailman/listinfo/speechsc</A>

  </PRE><BR>
    <HR width="90%" SIZE=4>
    <BR>
    <TABLE class=header-part1 cellSpacing=0 cellPadding=0 width="100%" 
      border=0><TBODY>
      <TR>
        <TD>
          <DIV class=headerdisplayname style="DISPLAY: inline">Subject: 
          </DIV>RE: [Speechsc] question about v2 message format</TD></TR>
      <TR>
        <TD>
          <DIV class=headerdisplayname style="DISPLAY: inline">From: 
          </DIV>Pierre Forgues <A class=moz-txt-link-rfc2396E 
          href="mailto:forgues@nuance.com">&lt;forgues@nuance.com&gt;</A></TD></TR>
      <TR>
        <TD>
          <DIV class=headerdisplayname style="DISPLAY: inline">Date: </DIV>Mon, 
          28 Jun 2004 07:44:30 -0400</TD></TR>
      <TR>
        <TD>
          <DIV class=headerdisplayname style="DISPLAY: inline">To: 
          </DIV>"Reifenrath, Klaus" <A class=moz-txt-link-rfc2396E 
          href="mailto:Klaus.Reifenrath@Scansoft.com">&lt;Klaus.Reifenrath@Scansoft.com&gt;</A>, 
          Daniel Burnett <A class=moz-txt-link-rfc2396E 
          href="mailto:burnett@nuance.com">&lt;burnett@nuance.com&gt;</A></TD></TR></TBODY></TABLE>
    <TABLE class=header-part2 cellSpacing=0 cellPadding=0 width="100%" 
      border=0><TBODY>
      <TR>
        <TD>
          <DIV class=headerdisplayname style="DISPLAY: inline">CC: </DIV><A 
          class=moz-txt-link-abbreviated 
          href="mailto:speechsc@ietf.org">speechsc@ietf.org</A></TD></TR></TBODY></TABLE><BR><PRE wrap="">I agree we should follow the lead with other protocols like SIP.  Client
to server request line must start with the method and end with the
protocol and version.

Server to client response-line and event-line start with the protocol
and version.

Pierre

-----Original Message-----
From: <A class=moz-txt-link-abbreviated href="mailto:speechsc-bounces@ietf.org">speechsc-bounces@ietf.org</A> [<A class=moz-txt-link-freetext href="mailto:speechsc-bounces@ietf.org">mailto:speechsc-bounces@ietf.org</A>] On
Behalf Of Reifenrath, Klaus
Sent: Friday, June 25, 2004 8:01 AM
To: Daniel Burnett
Cc: <A class=moz-txt-link-abbreviated href="mailto:speechsc@ietf.org">speechsc@ietf.org</A>
Subject: RE: [Speechsc] question about v2 message format

Hi Dan,

I like the SIP format where the start-line of a request starts with the
method and the status-line of the response starts with the SIP-version.
Unfortunately we have 3 types of MRCP messages: requests, responses and
events. If we change the message definition, I would suggest starting the
request-line with the method and the response-line and event-line with the
MRCP version.

But even with the current MRCPv2 message definition the message parser
can distinguish the message types efficiently. Having the version
information right at the beginning has some advantages.

Klaus



-----Original Message-----
From: Daniel Burnett [<A class=moz-txt-link-freetext href="mailto:burnett@nuance.com">mailto:burnett@nuance.com</A>]
Sent: Freitag, 25. Juni 2004 06:28
To: <A class=moz-txt-link-abbreviated href="mailto:speechsc@ietf.org">speechsc@ietf.org</A>
Subject: [Speechsc] question about v2 message format


The convention used for MRCP messages has changed in v2, and it is
different from that in RTSP, SIP, and HTTP.

New convention: "MRCP/2.0 123 VER-SET-VOICEPRINT 123456"
Old convention: "START-OF-SPEECH 543258 IN-PROGRESS MRCP/1.0"
Why the change in convention?

-- dan


_______________________________________________
Speechsc mailing list
<A class=moz-txt-link-abbreviated href="mailto:Speechsc@ietf.org">Speechsc@ietf.org</A>
<A class=moz-txt-link-freetext href="https://www1.ietf.org/mailman/listinfo/speechsc">https://www1.ietf.org/mailman/listinfo/speechsc</A>


 
  
  </PRE></BLOCKQUOTE><BR></BLOCKQUOTE></BODY></HTML>

------_=_NextPart_001_01C4BB4B.29C03460--


--===============1984766728==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Speechsc mailing list
Speechsc@ietf.org
https://www1.ietf.org/mailman/listinfo/speechsc

--===============1984766728==--



From speechsc-bounces@ietf.org  Fri Oct 29 10:47:17 2004
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id KAA06239
	for <speechsc-web-archive@ietf.org>; Fri, 29 Oct 2004 10:47:16 -0400 (EDT)
Received: from megatron.ietf.org ([132.151.6.71])
	by ietf-mx.ietf.org with esmtp (Exim 4.33)
	id 1CNYGR-0000bv-B6
	for speechsc-web-archive@ietf.org; Fri, 29 Oct 2004 11:02:05 -0400
Received: from localhost.localdomain ([127.0.0.1] helo=megatron.ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32)
	id 1CNXsz-0002Kx-Bx; Fri, 29 Oct 2004 10:37:49 -0400
Received: from odin.ietf.org ([132.151.1.176] helo=ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32) id 1CL6rs-0004qW-Vv
	for speechsc@megatron.ietf.org; Fri, 22 Oct 2004 17:22:37 -0400
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id RAA19553
	for <speechsc@ietf.org>; Fri, 22 Oct 2004 17:22:32 -0400 (EDT)
Received: from sj-iport-4.cisco.com ([171.68.10.86])
	by ietf-mx.ietf.org with esmtp (Exim 4.33) id 1CL74f-0001hE-ON
	for speechsc@ietf.org; Fri, 22 Oct 2004 17:35:53 -0400
Received: from sj-core-3.cisco.com (171.68.223.137)
	by sj-iport-4.cisco.com with ESMTP; 22 Oct 2004 14:22:50 -0700
X-BrightmailFiltered: true
Received: from vtg-um-e2k1.sj21ad.cisco.com (vtg-um-e2k1.cisco.com
	[171.70.93.55])
	by sj-core-3.cisco.com (8.12.10/8.12.6) with ESMTP id i9MLLu9b014832;
	Fri, 22 Oct 2004 14:21:56 -0700 (PDT)
Received: from cisco.com ([10.32.130.231]) by vtg-um-e2k1.sj21ad.cisco.com
	with Microsoft SMTPSVC(5.0.2195.6713); 
	Fri, 22 Oct 2004 14:21:50 -0700
Message-ID: <417979ED.9080501@cisco.com>
Date: Fri, 22 Oct 2004 14:21:49 -0700
From: Sarvi Shanmugham <sarvi@cisco.com>
Organization: Cisco Systems Inc.
User-Agent: Mozilla Thunderbird 0.5 (Windows/20040207)
X-Accept-Language: en-us, en
MIME-Version: 1.0
To: Magnus Westerlund <magnus.westerlund@ericsson.com>
Subject: Re: [Speechsc] Comments on draft-ietf-speechsc-mrcpv2-03.txt
References: <40F557BF.1@ericsson.com>
In-Reply-To: <40F557BF.1@ericsson.com>
X-OriginalArrivalTime: 22 Oct 2004 21:21:50.0518 (UTC)
	FILETIME=[250BF560:01C4B87D]
X-Spam-Score: 0.5 (/)
X-Scan-Signature: a5578080041569f6762c69473045fd5e
X-Mailman-Approved-At: Fri, 29 Oct 2004 10:37:47 -0400
Cc: "'speechsc@ietf.org'" <speechsc@ietf.org>
X-BeenThere: speechsc@ietf.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Speech Services Control Working Group <speechsc.ietf.org>
List-Unsubscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=unsubscribe>
List-Post: <mailto:speechsc@ietf.org>
List-Help: <mailto:speechsc-request@ietf.org?subject=help>
List-Subscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1099306632=="
Sender: speechsc-bounces@ietf.org
Errors-To: speechsc-bounces@ietf.org
X-Spam-Score: 0.5 (/)
X-Scan-Signature: 0e374e6adb07517f84cf50828c3c93a9

This is a multi-part message in MIME format.
--===============1099306632==
Content-Type: multipart/alternative;
	boundary="------------030308040500010708080903"

This is a multi-part message in MIME format.
--------------030308040500010708080903
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Transfer-Encoding: 7bit

Hi Magnus,

Magnus Westerlund wrote:

> Hi,
>
> I have read some parts of the specification, basically only skipping 
> chapters 9, 10, 11 and Appendix A. Please be aware that I am reading 
> this from the perspective of not knowing the area of speech processing 
> engines. Although I should, I will not make the effort of structuring 
> my comments into different categories.
>
> 1. Status of this memo: The boilerplate will need to be updated. See 
> the new ID checklist and ID guidelines.

Done.

>
> 2. Abstract: Missing space between 2 and (MRCPv2).

Done.

>
> 3. Section 1:
>    "The MRCPv2 protocol is designed to provide a mechanism for a client
>     device requiring audio/video stream processing to control media
>     processing resources on the network."
>
> I found this sentence a bit strange, and think it could be formulated 
> substantially more simply.

Done.

>
> 4. Section 1: "It also describes how these messages are carried
>     over a transport layer such as TCP or SCTP." I think this sentence 
> should mention TLS, as secure transport is so important.

Done

>
> 5. Section 1:
>    "The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
>     "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY" and "OPTIONAL" in this
>     document are to be interpreted as described in RFC 2119[10]."
>
> I would propose that this paragraph become the start of the "Notational 
> Conventions" section, and that the rest of section 4 move into this new 
> section 2. The fact is that the current section 4 comes too late, long 
> after several of these conventions are already used.

Done

>
> 6. Section 2: Architecture figure. It should be labelled and numbered. 
> Also please keep it on one page.

Done

>
> 7. Question in relation to section 2.2: Is it correct that you assume 
> that the client will already know where the media resources exist or 
> that some other mechanism is used to learn about them?

I believe the client needs to know the SIP URI pointing to the MRCPv2 
server. But it can then use SDP to query the capabilities of the server, 
as discussed in the spec.

>
> 8. Section 2: Do you completely rule out a proxy in MRCPv2? That would 
> then be a big difference and should be noted in this section. The 
> reason is that it strongly affects whether what you have specified in 
> terms of methods and signalling will work or not. Also when referencing 
> HTTP headers, it is important to consider this issue.

I don't believe MRCPv2 proxies have been ruled out. If a proxy exists, 
I would expect it to be a back-to-back SIP UA which would also proxy the 
MRCPv2 channel messages coming from the client to the appropriate server 
behind it.

>
> 9. Section 3.2: I think when writing the rule that all SDP offers with 
> m= lines for MRCP MUST use port 9, it should be clarified in 
> parentheses that this is the discard port, i.e. 9 (TCP discard port). 
> Please note that port 9 is not registered as discard for SCTP.

Done
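As a purely illustrative sketch (the addresses, the audio port, and the format token after "TCP" are hypothetical; only the media type "control", the discard-port rule, and the a=cmid/a=mid binding come from this review thread), such an offer fragment might look like:

```
c=IN IP4 192.0.2.10
m=control 9 TCP mrcpv2
a=cmid:1
m=audio 49170 RTP/AVP 0
a=mid:1
```

The control m-line carries port 9 (the TCP discard port) per the rule above, while the audio m-line uses a real RTP port and is bound to the control channel by a=cmid referencing the audio line's a=mid.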

>
> 10. Section 3.2, Example 2: I could not find an example 1, prior to 
> example 2.

Done.

>
> 11. Section 3.2, Example 2: In relation to a=cmid. Is the reason that 
> you decided to create the this new attribute for binding them, that 
> the grouping of media line does not allow for multiple groups of the 
> same type to reference the same mid? Otherwise I would think that 
> defining a group method would have been a nicer solution more inline 
> with what exist.

Yes.

>
> 12. Section 3.2, example 2: The offer and answers contain a dynamic 
> payload type 96 for the audio description that is not mapped. This is 
> not correct.

If you are pointing to the missing mapping line for 96, I will get to 
it in the next draft.

>
> 13. Section 3.2, example 3: Is it really obvious that two different 
> recognizers at a server really are capable of accessing the same media, 
> and in a consistent manner? In general I think the media handling issues 
> are kind of loosely thought out. One issue that I thought of: do 
> different resources need to use the same amount of buffering? For 
> example, if one resource needs to be quick and another has more time, 
> then they should have different buffering strategies.

A Recognizer, Verifier and Recorder are all feeding off the same 
stream, and their responses when sharing the stream are expected to be the 
same. Anyway, the spec does not specify if each resource needs to buffer 
on its own or if they should share a common buffer. That is left to the 
implementation to decide.

>
> 14. Section 3.3: The cmid syntax. This defines the syntax, but is not 
> a complete definition of the attribute. You will also have to do an 
> IANA registration of the SDP attribute. Also, the m= lines for control may

Will add an IANA Considerations section to the document.

> also need to have a "a=mid" line to allow for other types of grouping. 
> See RFC 3388.

I agree.

>
> 15. Section 3.3, second last paragraph on page 13, last sentence:
> "The media stream in either
>     direction may contain more than one Synchronized Source (SSRC)
>     identifier due to multiple sources contributing to the media on the
>     pipe and the clientserver SHOULD be able to deal with it. "
>
> Please correct "the clientserver"; is it missing an "or"?

Yes.

>
> Also, what do you mean by "deal with it"? They are clearly 
> different sources, and unless RTP retransmission is used, performing 
> speech recognition on different SSRCs is like doing it on different 
> phone lines in some sense, while in others it is natural to do it on 
> the mixed signal from all sources. Further consideration is needed.

It is assumed to be one source. But there may be cases where RFC2833 
DTMF packets arrive on the same RTP pipe but with different SSRCs. I am 
also not ruling out the case where the server may want to support 
recognizing hotwords when listening to a conference. There could be 
different speakers on different SSRCs in the multicast case, and we might 
want to respond to anyone saying the "hotword".

>
> 16. Section 3.3, last paragraph on page 13: How does the client know 
> that the Offer was refused due to the fact that the server can't 
> handle sharing the audio pipe?

By failing the SIP INVITE request with, say, the SIP 501 Not Implemented 
server error.

>
> 17. Section 3.2:
> "This m-line
>     will have a media type field of "control" and a transport type field
>     of "TCP", "SCTP" or "TLS". "
>
> I hope you understand that neither SCTP nor TLS is defined as a proto 
> identifier in SDP. They will need to be registered together with some 
> definition of how to use them. Also, TCP is underdefined and could 
> benefit from an update.

Made the changes to adopt the mechanism of the comedia draft, per your 
suggestion on a different thread, which does define these proto identifiers.

>
> 18. Section 3.2 and 3.4:
> In 3.2 the following is written:
> "    All servers MUST support TCP, SCTP and TLS and it is up to the
>     client to choose which mode of transport it wants to use for an
>     MRCPv2 session. "
>
> In 3.4 the following is written:
> "All MRCPv2 based
>     media servers MUST support TCP for transport and MAY support SCTP. "
>
> A nice little inconsistency.

Per discussions on the alias, this has been changed to "MUST support TLS, 
SHOULD support TCP and MAY support SCTP".

>
> 19. Section 3.4: Are there any rules governing the reuse of MRCP 
> control connections? Can the server or client insist on doing it, and 
> how is it signalled? From a server perspective it must be of interest 
> to avoid opening either a listening port or at least one new TCP 
> connection.

The server and client MUST support sharing one or more transport pipes 
across multiple MRCPv2 channels, which are told apart by their 
Channel-Identifiers only. But the server cannot mandate this sharing; it 
is up to the client.
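A minimal sketch of the demultiplexing this implies (the class and channel values are illustrative; the channel-identifier shape with a resource type after "@" follows the usage described elsewhere in this thread): messages arriving on one shared connection are routed purely by their Channel-Identifier header.

```python
from collections import defaultdict

class ChannelDemux:
    """Illustrative demultiplexer: one transport connection carries several
    MRCPv2 channels, told apart solely by the Channel-Identifier header."""

    def __init__(self):
        self.queues = defaultdict(list)

    def deliver(self, headers: dict) -> None:
        # Route each message to the per-channel queue for its identifier.
        channel = headers["Channel-Identifier"]
        self.queues[channel].append(headers)

demux = ChannelDemux()
demux.deliver({"Channel-Identifier": "32AECB23@speechrecog", "Method": "RECOGNIZE"})
demux.deliver({"Channel-Identifier": "43FFA2B1@speechsynth", "Method": "SPEAK"})
```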

>
> 20. Section 5: Second paragraph: Is it not time to remove the 
> statement that receivers SHOULD interpret CR and LF by themselves? 
> Can't people implement this correctly?

Are you suggesting we should remove the option to have CR or LF only? I 
am fine with this.
Fixed in draft-05.

>
> 21. Section 5.1,
> "message-length =    1*DIGIT"
>
> Please consider that it might be more appropriate to actually provide 
> bounded fields. They allow for consistency checking and would simplify 
> implementation. Let's say that message length should be limited to 1 
> gigabyte, thus allowing the value to be stored in a 32-bit integer, 
> with a nice check that the field does not overflow. Thus I would 
> propose that the syntax is changed to:
> "message-length =    1*10DIGIT"
>
> If a gigabyte is too small, then give it a few more digits, but do not 
> make it unlimited length. This applies to many of the syntax fields. 
> For example, request IDs could be restricted to a 64-bit number.
>
> 22. Section 5.1, wrong formatting in fourth paragraph.

fixed.
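To illustrate why a bounded field helps, here is a hedged sketch (the names are mine, not from the draft) of the check that "message-length = 1*10DIGIT" enables: at most ten digits, so the value always fits in a 64-bit integer and runaway fields can be rejected up front.

```python
import re

# "message-length = 1*10DIGIT" bounds the token to ten digits, so a parser
# can validate it against a fixed-size pattern instead of an unbounded run.
MESSAGE_LENGTH = re.compile(r"[0-9]{1,10}\Z")

def parse_message_length(token: str) -> int:
    if not MESSAGE_LENGTH.match(token):
        raise ValueError(f"invalid message-length: {token!r}")
    return int(token)
```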

>
> 23. Section 5.2:
> "The mrcp-version field used here is similar to the one used in the
>     Request Line and indicates the version of MRCPv2 protocol running on
>     the server. "
>
> I think that "similar" is the wrong word here.

fixed.

>
> 24. All ABNF syntax: Lack of defined format for extensions. Almost all 
> syntax is missing a definition of which format an extension method, 
> header, parameter, etc. can have. This should really be defined, so 
> that one avoids backwards-compatibility issues due to syntax handling 
> when deploying extensions. Example: Current syntax:
> "     request-state    =  "COMPLETE"
>                        |  "IN-PROGRESS"
>                        |  "PENDING"
> "
> Proposed syntax:
>       request-state    =  "COMPLETE"
>                        |  "IN-PROGRESS"
>                        |  "PENDING"
>                        |  req-state-ext
>
>       req-state-ext = 1*(UPPER_ALPHA / "-")

Fixed, methods and header fields. Request state is not extensible.

>
> 25. ABNF syntax: Wrong format of ABNF. The syntax in this spec does 
> not follow RFC 2234. In RFC 2234 ABNF the alternative operator is "/" 
> instead of the older "|". Please read RFC 2234 and update. Be also aware 
> that all the syntax will need to be complete and go through a syntax checker.

Fixed.

>
> 26. Section 5.2: Status codes:
> Don't you think it is appropriate to use more main status code 
> classes? I would think that 5xx (Server Errors) would be needed.

A few more status codes have been added, including 5xx codes, per 
discussion on the mailing list.

>
> 27. Section 6.1: There is something strange with the "field-value" syntax 
> definition. Following the text it should allow any number of LWS, 
> followed by content. If broken over multiple lines, then it should be 
> something like (CRLF 1*LWS). I would guess that it should be something 
> like:
>
> field-value = *LWS field-content *(CRLF 1*LWS field-content)
>
Fixed.
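As a hedged illustration of what the proposed rule permits (the helper name is my own), a folded header value, i.e. content continued on a following line after CRLF plus linear whitespace, unfolds into a single logical value:

```python
import re

def unfold_field_value(raw: str) -> str:
    """Collapse line folds (CRLF followed by one or more spaces/tabs) into a
    single space, matching:
    field-value = *LWS field-content *(CRLF 1*LWS field-content)"""
    return re.sub(r"\r\n[ \t]+", " ", raw).strip()

print(unfold_field_value("text/uri-list;\r\n  charset=us-ascii"))
# prints: text/uri-list; charset=us-ascii
```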

>
>
> 28. Section 6.1:
>    "The order in which header fields with the
>     same field-name are received is therefore significant to the
>     interpretation of the combined field value, and thus a proxy MUST
>     NOT change the order of these field values when a message is
>     forwarded."
>
> Too much copy-paste? If you don't have proxies then this is unnecessary 
> text.

As I have mentioned earlier, it is possible to have an MRCPv2 proxy 
between the client and the server that relays the MRCPv2 messages. Plus, 
this may be a restriction on the proxy that comes in handy for future 
versions of the protocol, where there is a possibility of multiple 
instances of the same header field.

>
> 29. Section 6.1, Active-Request-Id-List: What is "it" in the first 
> sentence?

The request/method. Fixed accordingly.

>
> 30. Section 6.1, Content-Encoding: In the syntax you are using the 
> rule 1#content-encoding. I haven't seen a definition in ABNF for it, 
> so you would need to define it. If you are unable to define it, then I 
> would recommend that you change it to:
> content-encoding *("," *LWS content-encoding)

Fixed this and other places that use the 1# rule.

>
> 31. Section 6.1, Content-Encoding: I think you will need to be a bit 
> more explicit on how multiple encodings and their order reflect how 
> things should be done. For example, it is not defined if a second 
> encoding should be written before or after the previous one in the 
> header.

"If multiple encodings have been applied to an entity, the content codings 
MUST be listed in the order in which they were applied."
I think this covers it well. This means that if, say, 2 types of 
compression have been applied, say gzip and compress, they are listed in 
the order in which they were applied to the original data to arrive at the 
content attached, i.e. Content-Encoding: gzip, compress.
To retrieve the data, the decompression operations for each coding have to 
be applied in reverse order.
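A short sketch of that ordering rule, with gzip and deflate standing in for any two content codings (Python's gzip and zlib modules are used purely for illustration, not mandated by the draft):

```python
import gzip
import zlib

original = b"example recognition result payload"

# Codings applied in order: gzip first, then deflate; the header lists
# them in that same order of application.
encoded = zlib.compress(gzip.compress(original))
content_encoding = "gzip, deflate"

# Decoding undoes the codings in reverse header order.
decoders = {"gzip": gzip.decompress, "deflate": zlib.decompress}
decoded = encoded
for coding in reversed(content_encoding.split(", ")):
    decoded = decoders[coding](decoded)

assert decoded == original
```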

>
> 32. Section 6.1, Content-Id: I would recommend that you put a real 
> reference in after RFC 2111.

fixed.

>
> 33. Section 6.1, Cache-Control: So if I understand this correctly, 
> this header controls how the media server should cache any resources 
> that are referenced in a request to perform some resource function, like 
> recognition? This is rather different from HTTP's use of the header. It 
> might be wise to consider renaming it.

Well, the usage of these headers has the same meaning as in HTTP, except 
that they are not meant for the relationship between the MRCP client and 
the server, but for the HTTP client on the MRCP server, its cache, and 
their relationship with the HTTP server it goes to fetch documents from. 
This is why these headers have been leveraged from HTTP and their 
naming has been left the same.

>
> 34. Section 6.1, Set-Cookie:
> "Since the type of cookie
>     header is dictated by the HTTP origin server, MRCPv2 clients and
>     servers SHOULD support both the set-cookie and set-cookie2 entity
>     header fields. "
>
> Sorry, I have a bit of a hard time understanding how the cookies flow 
> between HTTP, MRCP servers and the client. It might be useful to include 
> a section about cookies in MRCPv2.

I have added some text to the beginning of the Set-Cookie and 
Set-Cookie2 definitions discussing the usage and intent of these headers. 
Hope this clarifies things.

>
> 35. Section 7, S->C SIP message. Is this SIP message really correct? I 
> know too little SIP to be certain. For example, should the "Accept" 
> header really be there?

Per section 6.7 of the SIP RFC, Accept headers are allowed in INVITE and 
OPTIONS requests and their 2xx or 4xx responses.  Is there something else 
in this message that is cause for concern?

>
> 36. Section 8: I think that one should spend some paragraphs on 
> writing an introduction to each different type of resource describing 
> its purpose and the normal setup in which it is used. From the 
> beginning I had some problems understanding how everything worked. It 
> will make things easier also when some AD is going to read it later.

Are you referring to any particular resource? Because as I see it, each 
resource does have an introduction at the beginning describing its 
capabilities.

>
> 37. Section 8.4: Why are all headers called parameters? I think 
> consistent terminology is important.
>
Changed to Headers where appropriate.

> 38. Section 8.4: There is a table describing the different headers and 
> their requirement to be implemented and the methods that can use them. 
> However, as I found out when editing RTSP, and as SIP has also 
> understood, this is not sufficient. It is also good to describe the 
> interesting relations between headers and methods, by stating whether a 
> header is mandatory, optional, or conditional in requests or responses. I 
> would recommend that each resource type get a table like tables 2 and 
> 3 in section 20 of RFC 3261. It should list all the resource methods 
> and the generic methods possible to use in this resource type.

Used the table format you have recommended here.

>
> 39. Section 8.4, Jump-Target. The section name is called Jump-Target 
> while the syntax defines "Jump-Size".

Fixed.

>
> 40. Section 8.4, Kill-On-Barge-In: This issue is shared with all the 
> configuration methods: what scope do they have? Session, request, or 
> server?

Hopefully the new table format is clearer.

>
> 41. Section 8.4, Kill-On-Barge-In:
> "   If the recognizer or signal detector resource is on the same server
>     as the synthesizer, the server should be intelligent enough to
>     recognize their interactions by their common MRCPv2 channel
>     identifier (ignoring the portion after "@" which is the resource
>     type) and work with each other to provide kill-on-barge-in support. "
>
> Should be smart enough? Kind of strange formulation. Either SHALL do 
> it, or MAY do it. I read later on that the client is always required 
> to forward a BARGE-IN detection from a recognizer to the TTS. However, 
> it seems that it should be clarified through signalling whether the server 
> will do this, or whether the lower-performance client-forwarding method 
> will be required.

I have clarified that the server SHOULD implement the optimization. 
Still, the client MUST relay barge-in events from the input resource, 
even if both resources are part of the same SIP session, since we still 
want this to work when the server doesn't do it, or when there is an 
MRCP proxy in between that makes it look like one MRCP session to the 
client even though it may be dealing with separate MRCP sessions behind 
it.
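To illustrate what "relay" means here, a minimal client-side sketch (illustrative Python only; the Channel class and the message shapes are simplified assumptions, not MRCPv2 wire format):

```python
# Hypothetical client-side relay of a barge-in event from a recognizer
# channel to a synthesizer channel. The Channel class is a stand-in for
# a real MRCPv2 control channel; only the relay logic is of interest.

class Channel:
    """Minimal stand-in for an MRCPv2 control channel (assumption)."""
    def __init__(self, channel_id):
        self.channel_id = channel_id  # e.g. "32AECB23433802@speechrecog"
        self.sent = []                # messages "sent" to the server

    def send(self, method, headers=None):
        self.sent.append((method, headers or {}))

def relay_barge_in(event_name, recognizer, synthesizer):
    # The client MUST forward barge-in detection to the synthesizer,
    # even when both resources live on the same server/session.
    if event_name == "START-OF-INPUT":
        synthesizer.send("BARGE-IN-OCCURRED",
                         {"Proxy-Sync-Id": recognizer.channel_id})

recog = Channel("32AECB23433802@speechrecog")
synth = Channel("32AECB23433802@speechsynth")
relay_barge_in("START-OF-INPUT", recog, synth)
assert synth.sent[0][0] == "BARGE-IN-OCCURRED"
```

The point of the sketch is only that the relay happens unconditionally on the client side; whether the server also short-circuits it internally is the optimization discussed above.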

>
> 42. Section 8.4, Completion Cause: Syntax of completion cause. Might 
> it not be better to allow a cause description that allows for other 
> characters than ALPHA, like VCHAR?

I agree. Fixed.

>
> 43. Section 8.4, Voice-Parameters and Prosody-Parameters:
> Is this really the best way of signalling these parameter values? 
> Without having looked at the reference, I have no idea what type of 
> data they will need to indicate, nor how many there are, nor what 
> issues exist in extending this. Wouldn't parameters in a header be 
> better? In any case I think that one should at least list the ones 
> that exist. Also, what will happen when W3C updates their 
> specification?
>
> Sorry for the inconsistent comment. I am lacking information to 
> determine whether defining the headers the way you do is really a good 
> approach.

I think parameters in the header would be a better idea. Will change 
accordingly.

>
> 44. Section 8.4, Voice-Parameters:
> "If the synthesizer resource does not support this operation, it 
> should respond back to the client with a status of unsupported." What 
> status code is this? I would recommend that any time one references a 
> status code, both the number and the explanatory string be included.

will do.

>
> 45. Section 8.4, Failed URI: shouldn't the syntax element be "uri" 
> instead of "url"?

fixed.

>
> 46. Section 8.4, Failed URI cause: Shouldn't the text be VCHAR instead 
> of ALPHA?

fixed.

>
> 47. Section 8.4, Speak Start:
> "   When a CONTROL jump backward request is issued to a currently
>     speaking synthesizer resource and the jumps beyond the start of the
>     speech, the current SPEAK request re-starts from the beginning of
>     its speech data and the response to the CONTROL request would
>     contain this header indicating a restart."
>
> I would replace "would" with normative language SHALL.

fixed.

>
> 48. Section 8.4, Speak Length: This parameter MAY BE ...
> lower case be.
>
fixed.

> 49. Section 8.5: I don't understand how to determine the order in 
> which things included in a multi-part body should be rendered.
>
> 50. Section 8.6: Method issue. In several methods, one can get 
> termination or other asynchronous behaviour. However, I don't see any 
> mechanism other than speech markers to allow the client to determine 
> how far into the processing the request was terminated. It might be my 
> lack of understanding of the applications that keeps me from seeing 
> why the mechanism provided is sufficient. If I were doing this, I 
> would provide an EVENT and RESPONSE header that would tell where in 
> the process a SPEAK request was interrupted.

This information is already available through the failure headers, which 
describe the cause and reason text along with additional information 
such as which URI failed and with what cause. Apart from that, the 
completion-cause codes also carry information as to why it failed, and 
hence at which stage.
I have also added the marker header, which carries timestamp 
information, to be sent in other asynchronous events as well. Hope this 
addresses your concern.

>
> 51. Section 8.6: In a SPEAK response, I would also include a header 
> that gives the current queue of in progress and pending SPEAK requests.

Considering that there is a defined request processing order and that 
each request returns its status, I don't see a need for this.

>
> 52. Section 8.6:
> "This means that this
>     SPEAK request is in queue and will be processed after the currently
>     active SPEAK request is completed."
>
> This sentence implies that the latest SPEAK request would put itself 
> first in the queue and be played after the completion of the 
> IN-PROGRESS SPEAK request. Please clarify the behaviour.

Correct. This means that this SPEAK request will be placed in the 
request queue and will be processed, in the order received, after the 
currently active SPEAK request and any previously queued SPEAK requests 
are completed. Clarified the statement accordingly.
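The queuing behaviour can be sketched as follows (an illustrative toy model, not draft text):

```python
# Toy model of the SPEAK request queue: the first request becomes
# IN-PROGRESS, later ones are PENDING and join the back of a FIFO queue,
# being processed strictly in the order received.
from collections import deque

class SynthesizerResource:
    def __init__(self):
        self.queue = deque()

    def speak(self, request_id):
        # First request becomes IN-PROGRESS; later ones are PENDING.
        state = "IN-PROGRESS" if not self.queue else "PENDING"
        self.queue.append(request_id)
        return state

    def complete_current(self):
        # The active request finishes; the next queued one becomes active.
        return self.queue.popleft()

synth = SynthesizerResource()
print(synth.speak(1))        # IN-PROGRESS
print(synth.speak(2))        # PENDING
print(synth.speak(3))        # PENDING
synth.complete_current()     # request 1 finishes
print(synth.queue[0])        # 2 -- the earliest queued request is next
```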

>
> 53. Section 8.8:
> "If there were no SPEAK requests terminated
>     as a result of the BARGE-IN-OCCURRED method, the response would
>     still be a 200 success but MUST not contain an active-request-id-
>     list header field. "
>
> the "MUST not" shall be "MUST NOT"

fixed.

>
> 54. Section 8.9: I don't see a method for pausing at a specific point 
> in the provided SPEAK command. If one would like to pause at a 
> specific marker or word, that doesn't seem possible. I am thinking of 
> the case where one provides a block of text to speak, and then 
> something happens that means one should now pause after a specific 
> sentence or so. Without protocol support this will be difficult to do 
> nicely. This comment also applies to CONTROL.

Does anyone see a need for this? So far we haven't seen a need for this 
kind of functionality.

>
> 55. Section 8.10:
> "If a RESUME method is
>     issued on a session when a SPEAK is not active the server SHOULD ..."
> I don't think active clearly describes what state you are referring to.

It seems clear. I am not sure how it could be clarified further. Do you 
have a text suggestion to improve this?

>
> 56. Section 8.11, Example: The message lengths are clearly wrong.
>
I have not tried to exactly match the message lengths. I don't think 
that is a requirement, is it?

> 57. Whole specification: Presence of characters outside 7-bit 
> US-ASCII. Please run the ID checker or another suitable tool.

will do.

>
> 58. IANA section: For your information, you will need to write a 
> fairly extensive IANA section setting up a number of different 
> registries. You will also need to register a number of SDP-related 
> entries. I would guess that you will need 5+ pages for this section. 
> Look at SIP or my RTSP draft 
> (draft-ietf-mmusic-rfc2326bis-06.txt) for an indication of what will 
> be required.

being worked on.

>
> 59. Extensibility: I think that, in addition to the IANA section, you 
> should explain what types of extensions you foresee, and how to handle 
> extensions within a resource type.

Will work on it.

>
> 60. Section 14: The reference table needs to be split into a normative 
> and an informative part.
>
Done.

> 61. Copyright and IPR statement needs to be updated.

This has been updated in the -04 draft.

I appreciate your taking the time to provide a very detailed review of 
the document.
Thanks,
Sarvi

>
>
> I would recommend that you get further WG external review. I will try 
> to read the other resource types, but will not promise any date. If 
> anything is unclear, simply ask for clarification.
>
>
> Cheers
>
> Magnus Westerlund
>
> Multimedia Technologies, Ericsson Research EAB/TVA/A
> ----------------------------------------------------------------------
> Ericsson AB                | Phone +46 8 4048287
> Torshamsgatan 23           | Fax   +46 8 7575550
> S-164 80 Stockholm, Sweden | mailto: magnus.westerlund@ericsson.com
>
>
>
> _______________________________________________
> Speechsc mailing list
> Speechsc@ietf.org
> https://www1.ietf.org/mailman/listinfo/speechsc
>


--------------030308040500010708080903
Content-Type: text/html; charset=us-ascii
Content-Transfer-Encoding: 7bit

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
  <meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
  <title></title>
</head>
<body>
Hi Magnus, <br>
<br>
Magnus Westerlund wrote:<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite">Hi, <br>
  <br>
I have read some parts of the specification, basically only skipping
chapters 9, 10, and 11 and Appendix A. Please be aware that I am reading
this from the perspective of not knowing the area of speech processing
engines. Although I should, I will not make the effort of structuring
my comments into different categories. <br>
  <br>
1. Status of this memo: The boilerplate will need to be updated. see
the new ID checklist and ID guidelines. <br>
</blockquote>
Done.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"><br>
2. Abstract: Missing space between 2 and (MRCPv2). <br>
</blockquote>
Done.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"><br>
3. Section 1: <br>
&nbsp;&nbsp; "The MRCPv2 protocol is designed to provide a mechanism for a client
  <br>
&nbsp;&nbsp;&nbsp; device requiring audio/video stream processing to control media <br>
&nbsp;&nbsp;&nbsp; processing resources on the network." <br>
  <br>
I found this sentence a bit strange, and think it could be formulated
substantially more simply. <br>
</blockquote>
Done.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"><br>
4. Section 1: "It also describes how these messages are carried <br>
&nbsp;&nbsp;&nbsp; over a transport layer such as TCP or SCTP." I think this sentence
should consider TLS as secure transport is so important. <br>
</blockquote>
Done<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"><br>
5. Section 1: <br>
&nbsp;&nbsp; "The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
  <br>
&nbsp;&nbsp;&nbsp; "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY" and "OPTIONAL" in this
  <br>
&nbsp;&nbsp;&nbsp; document are to be interpreted as described in RFC 2119[10]." <br>
  <br>
I would propose that this paragraph become the start of a "Notational
Conventions" section, and that the rest of section 4 be moved into this
new section 2. The fact is that the current section 4 comes too late,
long after several of these conventions are already in use. <br>
</blockquote>
Done<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"><br>
6. Section 2: Architecture figure. It should be labelled and numbered.
Also please keep it in one page. <br>
</blockquote>
Done<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"><br>
7. Question in relation to section 2.2: Is it correct that you assume
that the client will already know where the media resources exist or
that some other mechanism is used to learn about them? <br>
</blockquote>
I believe the client needs to know the SIP URI pointing to MRCPv2
server. But it can then use SDP to query the capabilities of the server
as discussed in the spec.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"><br>
8. Section 2: Do you completely rule out a proxy in MRCPv2? That would
then be a big difference and should be noted in this section. The
reason is that it strongly affects whether what you have specified in
terms of methods and signalling will work or not. Also, when referencing HTTP
headers, it is important to consider this issue. <br>
</blockquote>
I don't believe MRCPv2 proxies have been ruled out. If a proxy exists,
I would expect it to be a back-to-back SIP UA, which would also proxy
the MRCPv2 channel messages coming from the client to the appropriate
server behind it.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"><br>
9. Section 3.2: I think when writing the rule that all SDP offer with
m= lines with MRCP MUST use port 9, it should be clarified in
parenthesis that this is the discard port, i.e. 9 (TCP discard port).
Please note that port 9 is not registered as discard for SCTP. <br>
</blockquote>
Done<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"><br>
10. Section 3.2, Example 2: I could not find an example 1, prior to
example 2. <br>
</blockquote>
Done.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"><br>
11. Section 3.2, Example 2: In relation to a=cmid. Is the reason that
you decided to create this new attribute for binding them that the
grouping of media lines does not allow for multiple groups of the same
type to reference the same mid? Otherwise I would think that defining a
group method would have been a nicer solution, more in line with what
exists. <br>
</blockquote>
Yes.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
12. Section 3.2, example 2: The offer and answers contain a dynamic
payload type 96 for the audio description that is not mapped. This is
not correct. <br>
</blockquote>
If you are pointing to the missing mapping line for 96, I will get to
it in the next
draft.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
13. Section 3.2, example 3: Is it really obvious that two different
recognizers at a server really are capable of accessing the same media,
and in a consistent manner? In general I think the media handling
issues have been thought through rather loosely. One issue that I
thought of: do different resources need to use the same amount of
buffering? For example, if one resource needs to be quick and another
has more time, then they should have different buffering strategies. <br>
</blockquote>
A recognizer, verifier, and recorder are all feeding off the same
stream, and their response when sharing the stream is expected to be
the same. Anyway, the spec does not specify whether each resource needs
to buffer on its own or whether they should share a common buffer. That
is left to the implementation to decide.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
14. Section 3.3: The cmid syntax. This defines the syntax, but is not a
complete definition of the attribute. You will also have to do an IANA
registration of the SDP attribute. Also, the m= lines for control may</blockquote>
Will add an IANA Considerations section to the document.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite">also need to
have a "a=mid" line to allow for other types of grouping.
See RFC 3388. <br>
</blockquote>
I agree.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
15. Section 3.3, second last paragraph on page 13, last sentence: <br>
"The media stream in either <br>
&nbsp;&nbsp;&nbsp; direction may contain more than one Synchronized Source (SSRC) <br>
&nbsp;&nbsp;&nbsp; identifier due to multiple sources contributing to the media on the
  <br>
&nbsp;&nbsp;&nbsp; pipe and the clientserver SHOULD be able to deal with it. " <br>
  <br>
Please correct "the client server" is it missing a "or"? <br>
</blockquote>
Yes.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
Also, what do you mean by dealing with it? They are clearly different
sources, and unless RTP retransmission is used, performing speech
recognition on different SSRCs is like doing it on different phone
lines in some sense, while in other senses it is natural to do it on
the mixed signal from all sources. Further consideration is needed. <br>
</blockquote>
It is assumed to be one source. But there may be cases where RFC 2833
DTMF packets arrive on the same RTP pipe but with different SSRCs. I am
also not ruling out the case where the server may want to support
recognizing hotwords when listening to a conference. There could be
different speakers on different SSRCs in the multicast case, and we
might want to respond to anyone saying the "hotword". <br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
16. Section 3.3, last paragraph on page 13: How does the client know
that the Offer was refused due to the fact that the server can't handle
sharing the audio pipe? <br>
</blockquote>
By failing the SIP INVITE request with, say, the SIP 501 Not
Implemented server error.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
17. Section 3.2: <br>
"This m-line <br>
&nbsp;&nbsp;&nbsp; will have a media type field of "control" and a transport type
field <br>
&nbsp;&nbsp;&nbsp; of "TCP", "SCTP" or "TLS". " <br>
  <br>
I hope you understand that neither SCTP nor TLS is defined as a proto
identifier in SDP. They will need to be registered together with some
definition of how to use them. Also, TCP is underdefined and could
benefit from an update. <br>
</blockquote>
Made the changes to adopt the mechanism of the comedia draft, per your
suggestion on a different thread, which does define these proto
identifiers.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
18. Section 3.2 and 3.4: <br>
In 3.2 the following is written: <br>
"&nbsp;&nbsp;&nbsp; All servers MUST support TCP, SCTP and TLS and it is up to the <br>
&nbsp;&nbsp;&nbsp; client to choose which mode of transport it wants to use for an <br>
&nbsp;&nbsp;&nbsp; MRCPv2 session. " <br>
  <br>
In 3.4 the following is written: <br>
"All MRCPv2 based <br>
&nbsp;&nbsp;&nbsp; media servers MUST support TCP for transport and MAY support SCTP.
" <br>
  <br>
A nice little inconsistency. <br>
</blockquote>
Per discussions on the alias, this has been changed to "<span
 style="font-size: 12pt; font-family: &quot;Times New Roman&quot;;">MUST support
TLS, SHOULD support TCP
and MAY support SCTP"</span>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
19. Section 3.4: Are there any rules governing the reuse of MRCP
control connections? Can the server or client insist on doing it, and
how is it signalled? From a server perspective it must be of interest
of avoiding opening either a listening port, or at least one new TCP
connection. <br>
</blockquote>
The server and client MUST support sharing one or more transport pipes
across multiple MRCPv2 channels, which are distinguished by their
Channel-Identifiers only. But the server cannot mandate this sharing;
it is up to the client.<br>
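As a rough sketch of that demultiplexing (names and message shape here are assumptions, not the draft's wire format):

```python
# Illustrative demultiplexer: several MRCPv2 channels share one
# transport connection, and the receiver routes each message purely by
# its Channel-Identifier. The dict-based "message" is a stand-in for a
# parsed MRCPv2 message.

received = {}

def make_handler(name):
    def handler(msg):
        received.setdefault(name, []).append(msg["method"])
    return handler

handlers = {
    "43b9ae17@speechsynth": make_handler("synth"),
    "43b9ae17@speechrecog": make_handler("recog"),
}

def route(messages, handlers):
    # One shared pipe, many channels: the Channel-Identifier alone
    # decides which resource a message belongs to.
    for msg in messages:
        handlers[msg["Channel-Identifier"]](msg)

route([
    {"Channel-Identifier": "43b9ae17@speechsynth", "method": "SPEAK"},
    {"Channel-Identifier": "43b9ae17@speechrecog", "method": "RECOGNIZE"},
], handlers)
print(received)
```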
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
20. Section 5: Second paragraph: Is it not time to remove the statement
that receivers SHOULD interpret CR and LF by themselves. Can't people
implement this correctly? <br>
</blockquote>
Are you suggesting we should remove the option to have CR or LF only? I
am fine with this. <br>
Fixed in draft-05 <br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
21. Section 5.1, <br>
"message-length =&nbsp;&nbsp;&nbsp; 1*DIGIT" <br>
  <br>
Please consider that it might be more appropriate to actually provide
bounded fields. They allow for consistency checking and would simplify
implementation. Let's say that the message length should be limited to
1 gigabyte, thus allowing the value to be stored in a 32-bit integer
and nicely checking that the field does not overflow. Thus I would
propose that the syntax is changed to: <br>
"message-length =&nbsp;&nbsp;&nbsp; 1*10DIGIT" <br>
  <br>
If a gigabyte is too small, then give it a few more digits, but do not
make it of unlimited length. This applies to many of the syntax fields.
For example, request IDs could be restricted to a 64-bit number. <br>
  <br>
22. Section 5.1, wrong formatting in fourth paragraph. <br>
</blockquote>
fixed.<br>
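The bounded-field proposal in point 21 can be sketched like this (illustrative Python; the 1-gigabyte cap is the review's suggestion, not draft text):

```python
# Sketch of a bounded message-length parse: limiting the field to
# 1*10DIGIT guarantees the value fits a 32-bit-friendly range that the
# receiver can then check against a hard cap.
import re

MAX_MESSAGE_LENGTH = 1 << 30  # 1 gigabyte cap suggested in the review

def parse_message_length(token: str) -> int:
    if not re.fullmatch(r"\d{1,10}", token):   # 1*10DIGIT
        raise ValueError("malformed message-length: %r" % token)
    value = int(token)
    if value > MAX_MESSAGE_LENGTH:
        raise ValueError("message-length exceeds cap: %d" % value)
    return value

print(parse_message_length("4096"))  # 4096
```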
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
23. Section 5.2: <br>
"The mrcp-version field used here is similar to the one used in the <br>
&nbsp;&nbsp;&nbsp; Request Line and indicates the version of MRCPv2 protocol running
on <br>
&nbsp;&nbsp;&nbsp; the server. " <br>
  <br>
I think that "similar" is the wrong word here. <br>
</blockquote>
fixed.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
24. All ABNF syntax: Lack of a defined format for extensions. Almost
all of the syntax is missing a definition of which format an extension
method, header, parameter, etc. can have. This should really be
defined, so that one avoids backwards-compatibility issues in syntax
handling when deploying extensions. Example: Current syntax: <br>
"&nbsp;&nbsp;&nbsp;&nbsp; request-state&nbsp;&nbsp;&nbsp; =&nbsp; "COMPLETE" <br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |&nbsp; "IN-PROGRESS" <br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |&nbsp; "PENDING" <br>
" <br>
Proposed syntax: <br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; request-state&nbsp;&nbsp;&nbsp; =&nbsp; "COMPLETE" <br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |&nbsp; "IN-PROGRESS" <br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |&nbsp; "PENDING" <br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |&nbsp; req-state-ext <br>
  <br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; req-state-ext = 1*(UPPER_ALPHA / "-") <br>
</blockquote>
Fixed, methods and header fields. Request state is not extensible.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
25. ABNF syntax: Wrong format of ABNF. The syntax in this spec does not
follow RFC 2234. In RFC 2234 ABNF, the alternative operator is "/"
instead of the older "|". Please read RFC 2234 and update. Be aware
also that all the syntax will need to be complete and go through a
syntax checker. <br>
</blockquote>
Fixed.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
26. Section 5.2: Status codes: <br>
Don't you think it is appropriate to use more main status code classes?
I would think that 5xx (Server Errors) would be needed. <br>
</blockquote>
A few more status codes have been added including 5XX codes per
discussion in the mailing list.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
27. Section 6.1: There is something strange with the "field-value"
syntax definition. Following the text, it should allow any number of
LWS, followed by content. If broken over multiple lines, then it should
be something like (CRLF 1*LWS). I would guess that it should be
something like: <br>
  <br>
field-value = *LWS field-content *(CRLF 1*LWS field-content) <br>
  <br>
</blockquote>
Fixed.<br>
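For illustration, the proposed folding rule amounts to something like this (a sketch, not draft text):

```python
# Unfolding a header value per the proposed rule
#   field-value = *LWS field-content *(CRLF 1*LWS field-content)
# A CRLF followed by at least one space/tab marks a continuation line
# and collapses to a single space.
import re

def unfold(field_value: str) -> str:
    return re.sub(r"\r\n[ \t]+", " ", field_value).strip()

folded = "multipart/mixed;\r\n boundary=a4F9x"
print(unfold(folded))  # multipart/mixed; boundary=a4F9x
```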
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
  <br>
28. Section 6.1: <br>
&nbsp;&nbsp; "The order in which header fields with the <br>
&nbsp;&nbsp;&nbsp; same field-name are received is therefore significant to the <br>
&nbsp;&nbsp;&nbsp; interpretation of the combined field value, and thus a proxy MUST <br>
&nbsp;&nbsp;&nbsp; NOT change the order of these field values when a message is <br>
&nbsp;&nbsp;&nbsp; forwarded." <br>
  <br>
Too much copy-paste? If you don't have proxies then this is unnecessary
text. <br>
</blockquote>
Like I have mentioned earlier, it is possible to have an MRCPv2 proxy
between the client and the server that relays the MRCPv2 messages.
Plus, this restriction on the proxy may come in handy for a future
version of the protocol, where there is a possibility of multiple
instances of the same header field.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
29. Section 6.1, Active-Request-Id-List: What is "it" in the first
sentence? <br>
</blockquote>
The request/method. Fixed accordingly.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
30. Section 6.1, Content-Encoding: In the syntax you are using the rule
1#content-encoding. I haven't seen a definition in ABNF for it, so you
would need to define it. If you are unable to define it, then I would
recommend that you change it to: <br>
content-encoding *("," *LWS content-encoding) <br>
</blockquote>
fixed this and other places that use the 1# rule.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
31. Section 6.1, Content-Encoding: I think you will need to be a bit
more explicit on how multiple encodings and their order reflect how
things should be done. For example, it is not defined whether a second
encoding should be written before or after the previous one in the
header. <br>
</blockquote>
"<span style="font-size: 12pt; font-family: &quot;Times New Roman&quot;;">If
multiple encoding
have been applied to an entity, the content coding MUST be listed in
the order
in which they were applied. "</span><br>
I think this covers it well. This means that if, say, two codings have
been applied, say gzip and then deflate, they are listed in the order
in which they were applied to the original data to arrive at the
content attached, i.e. Content-Encoding: gzip, deflate. <br>
To retrieve the data, the corresponding decoding operations have to be
applied in reverse order. <br>
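A small sketch of that ordering rule (illustrative Python, using gzip and zlib's deflate as stand-ins for the listed codings):

```python
# Codings are applied left to right as listed in Content-Encoding, so
# the receiver must undo them right to left.
import gzip
import zlib

body = b"Hello, MRCP!"

# Sender applies gzip first, then deflate:
#   Content-Encoding: gzip, deflate
encoded = zlib.compress(gzip.compress(body))

# Receiver undoes the codings in reverse order of the header list.
decoded = gzip.decompress(zlib.decompress(encoded))
assert decoded == body
```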
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
32. Section 6.1, Content-Id: I would recommend that you put a real
reference in after RFC 2111. <br>
</blockquote>
fixed.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
33. Section 6.1, Cache-Control: So if I understand this correctly, this
header controls how the media server should cache any resources that
are referenced in a request to perform a resource's purpose, like
recognition? This is rather different from HTTP's use of the header. It
might be wise to consider renaming it. <br>
</blockquote>
Well, these headers have the same meaning as in HTTP, except that they
are not meant for the relationship between the MRCP client and the
server, but for the HTTP client on the MRCP server, its cache, and its
relationship with the HTTP server where it goes to fetch documents.
That is why these headers have been leveraged from HTTP and their
naming has been left the same.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
34. Section 6.1, Set-Cookie: <br>
"Since the type of cookie <br>
&nbsp;&nbsp;&nbsp; header is dictated by the HTTP origin server, MRCPv2 clients and <br>
&nbsp;&nbsp;&nbsp; servers SHOULD support both the set-cookie and set-cookie2 entity <br>
&nbsp;&nbsp;&nbsp; header fields. " <br>
  <br>
Sorry, I find it a bit hard to understand how the cookies flow between
the HTTP server, the MRCP server, and the client. It might be useful to
include a section about cookies in MRCPv2. <br>
</blockquote>
I have added some text to the beginning of the Set-Cookie and
Set-Cookie2 definition discussing the usage and intent of these
headers. Hope this clarifies.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
35. Section 7, S-&gt;C SIP message. Is this SIP message really correct.
I know to little SIP to be certain. For example should the "Accept"
header really be there? <br>
</blockquote>
Per section 6.7 of the SIP RFC, Accept headers are allowed in INVITE
and OPTIONS requests and their 2xx or 4xx responses.&nbsp; Is there
something else in this message that is cause for concern?<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
36. Section 8: I think that one should spend some paragraphs on writing
an introduction to each different type of resource describing its
purpose and the normal setup in which it is used. From the beginning I
had some problem understanding how everything worked. It will make
things easier also when some AD is going to read it later. <br>
</blockquote>
Are you referring to any particular resource? Because as I see it, each
resource does have an introduction at the beginning describing its
capabilities.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
37. section 8.4: Why is all headers called parameters. I think a
consistent terminology is important. <br>
  <br>
</blockquote>
Changed to Headers where appropriate.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite">38. Section
8.4, There is a table describing the different headers and
their requirement to be implemented and methods that can use them.
However as I found out when editing RTSP and which also SIP has
understood, it is not sufficient. It is also good to describe the
interesting relations between header and methods, by describing if the
header is mandatory, optional, conditional in request or responses. I
would recommend that each resource type make a table like table 2 and 3
in section 20 of RFC3261. It should list all the resource methods and
the generic methods possible to use in this resource type. <br>
</blockquote>
used the table for format you have recommended here.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
39. Section 8.4, Jump-Target. The section name is called Jump-Target
while the syntax defines "Jump-Size". <br>
</blockquote>
Fixed.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
40. Section 8.4, Kill-On-Barge-In: This issue is shared with all the
configuring methods, what scope does they have. Session, request, or
server? <br>
</blockquote>
Hopefully the new table format is more clear.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
41. Section 8.4, Kill-On-Barge-In: <br>
"&nbsp;&nbsp; If the recognizer or signal detector resource is on the same server
  <br>
&nbsp;&nbsp;&nbsp; as the synthesizer, the server should be intelligent enough to <br>
&nbsp;&nbsp;&nbsp; recognize their interactions by their common MRCPv2 channel <br>
&nbsp;&nbsp;&nbsp; identifier (ignoring the portion after "@" which is the resource <br>
&nbsp;&nbsp;&nbsp; type) and work with each other to provide kill-on-barge-in support.
" <br>
  <br>
Should be smart enough? Kind of strange formulation. Either SHALL be
it, or MAY be it. I read later on that the client is always requiring
to forward a BARGE-IN detection from a recognizer to the TTS. However
it seems that it should be clarified through signalling if the server
will do this, or if the lesser performance from the client forward
method will be required. <br>
</blockquote>
I have clarified that the server SHOULD implement the optimization.
Still, the client MUST relay barge-in events&nbsp; from the input resource,
even if they&nbsp; may be part of the same SIP session since we still want
this to work where the server doesn't do it, or if there is MRCP proxy
in between which makes it look like one MRCP session to the client,
though it may be dealing with separate MRCP sessions behind it. &nbsp; <br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
42. Section 8.4, Completion Cause: Syntax of completion cause. Might it
not be better to allow a cause description that allows for other
characters then ALPHA, like VCHAR? <br>
</blockquote>
I agree. Fixed.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
43. Section 8.4, Voice-Parameters and Prosody-Parameters: <br>
Is this really the best way of providing signalling of these parameter
values. Without having looked at the reference, I have no idea what
type of data they will need to indicate, neither how many they are.
Also what issues exist in extending this. Wouldn't parameters in a
header be better? In any case I think that one should at least list the
ones that exist. Also what will happen when W3C updates their
specification? <br>
  <br>
Sorry for the inconsistent comment. I am lacking information to
determine if defining header in the way you do really is a good way. <br>
</blockquote>
I think parameters in the headr would be a better idea. Will change
accordingly.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
44. section 8.4, Vocie-Parameters: <br>
"If the synthesizer resource does not support this operation, it should
respond back to the client with a status of unsupported." What status
code is this? I would recommend that any time on reference a status
code to include both number and explanatory string. <br>
</blockquote>
will do.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
45. Section 8.4, Failed URI: shouldn't the syntax element be "uri"
instead of "url"? <br>
</blockquote>
fixed.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
46. Section 8.4, Failed URI cause: Shouldn't the text be VCHAR instead
of ALPHA? <br>
</blockquote>
fixed.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
47. Section 8.4, Speak Start: <br>
"&nbsp;&nbsp; When a CONTROL jump backward request is issued to a currently <br>
&nbsp;&nbsp;&nbsp; speaking synthesizer resource and the jump goes beyond the start of the
  <br>
&nbsp;&nbsp;&nbsp; speech, the current SPEAK request re-starts from the beginning of <br>
&nbsp;&nbsp;&nbsp; its speech data and the response to the CONTROL request would <br>
&nbsp;&nbsp;&nbsp; contain this header indicating a restart." <br>
  <br>
I would replace "would" with normative language SHALL. <br>
</blockquote>
fixed.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
48. Section 8.4, Speak Length: This parameter MAY BE ... <br>
lower case be. <br>
  <br>
</blockquote>
fixed.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite">49. Section
8.5: I don't understand how to determine the rendering order of things
included in a multi-part body. <br>
  <br>
50. Section 8.6: Method issue. In several methods, one can get
termination or other asynchronous behaviour. However, I don't see any
mechanism other than speech markers to allow the client to determine
how far into the processing the request was when it was terminated. It
might be my lack of understanding of the applications that makes the
provided mechanism seem insufficient. If I were doing this, I would
provide an EVENT and RESPONSE header that tells where in the process a
SPEAK request was interrupted. <br>
</blockquote>
This information is already available through the failure headers,
which describe the cause and reason text along with additional
information such as which URI failed and with what cause.&nbsp; Apart
from that, the completion-cause codes also indicate why the request
failed, including at which stage. <br>
I have also added the marker header, which has timestamp information to
be sent in other asynchronous events as well. Hope this addresses your
concern.<br>
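The marker-plus-timestamp idea mentioned above can be sketched as follows. The <code>timestamp=&lt;value&gt;;&lt;name&gt;</code> header layout and the names used here are assumptions for illustration only, not the exact syntax from the draft:

```python
# Sketch of carrying a marker name plus a timestamp in an asynchronous
# event, as described in the reply above. The header layout
# "timestamp=<value>;<name>" is a hypothetical illustration.
from dataclasses import dataclass

@dataclass
class SpeechMarker:
    timestamp: int   # server clock value when the marker was reached
    name: str        # marker name taken from the speech markup

def parse_speech_marker(value: str) -> SpeechMarker:
    """Parse a header value of the assumed form 'timestamp=NNN;name'."""
    ts_part, name = value.split(";", 1)
    _, ts = ts_part.split("=", 1)
    return SpeechMarker(timestamp=int(ts), name=name)

marker = parse_speech_marker("timestamp=857206027059;para1")
```

With such a header in every asynchronous event, a client can tell how far rendering had progressed when a request was interrupted.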
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
51. Section 8.6: In a SPEAK response, I would also include a header
that gives the current queue of in progress and pending SPEAK requests.
  <br>
</blockquote>
Considering that there is a request processing order and that each
request does return its status, I don't see a need for this.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
52. Section 8.6: <br>
"This means that this <br>
&nbsp;&nbsp;&nbsp; SPEAK request is in queue and will be processed after the currently
  <br>
&nbsp;&nbsp;&nbsp; active SPEAK request is completed." <br>
  <br>
This sentence implies that the latest SPEAK request would put itself
first in the queue and be played after the completion of the
IN-PROGRESS SPEAK request. Please clarify the behaviour. <br>
</blockquote>
Correct. <span style="font-size: 12pt; font-family: &quot;Times New Roman&quot;;">This
means that this SPEAK request will be placed in the request queue and
will be processed in the order received, after the currently active
SPEAK request and previously queued SPEAK requests are completed.
Clarified the statement as mentioned here.<br>
</span>
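The first-in-first-out behaviour clarified above can be sketched as a simple queue. The class and method names here are hypothetical, for illustration only:

```python
# Minimal sketch of the SPEAK request queueing clarified above:
# new SPEAK requests join the tail of a FIFO and are processed in the
# order received, after earlier requests complete.
from collections import deque

class SynthesizerResource:
    def __init__(self):
        self.queue = deque()   # PENDING requests, in arrival order
        self.active = None     # the single IN-PROGRESS request

    def speak(self, request_id: str) -> str:
        """Accept a SPEAK request and return its request state."""
        if self.active is None:
            self.active = request_id
            return "IN-PROGRESS"
        self.queue.append(request_id)  # queued behind earlier requests
        return "PENDING"

    def complete_active(self):
        """Current SPEAK finished; promote the oldest queued request."""
        self.active = self.queue.popleft() if self.queue else None
```

So a third SPEAK issued while two are outstanding is played last, never ahead of previously queued requests.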
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
53. Section 8.8: <br>
"If there were no SPEAK requests terminated <br>
&nbsp;&nbsp;&nbsp; as a result of the BARGE-IN-OCCURRED method, the response would <br>
&nbsp;&nbsp;&nbsp; still be a 200 success but MUST not contain an active-request-id- <br>
&nbsp;&nbsp;&nbsp; list header field. " <br>
  <br>
the "MUST not" shall be "MUST NOT" <br>
</blockquote>
fixed.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
54. Section 8.9: I don't see a method for pausing at a specific point
in the provided SPEAK command. If one wants to pause at a specific
marker or word, that doesn't seem possible. I would think one might
provide a block of text to speak, and then something happens that means
one should now pause after a specific sentence or so. Without protocol
support this will be difficult to do nicely. This comment also
applies to CONTROL. <br>
</blockquote>
Does anyone see a need for this? So far we haven't seen a need for
this kind of functionality.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
55. Section 8.10: <br>
"If a RESUME method is <br>
&nbsp;&nbsp;&nbsp; issued on a session when a SPEAK is not active the server SHOULD
..." <br>
I don't think active clearly describes what state you are referring to.
  <br>
</blockquote>
It seems clear. I am not sure how it could be clarified even further.
Do you have a text suggestion to improve this? <br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
56. Section 8.11, Example: The message lengths are clearly wrong. <br>
  <br>
</blockquote>
I have not tried to exactly match the message lengths. I don't think
that is a requirement, is it?<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite">57. Whole
specification: Presence of non-US 7-bit ASCII characters.
Please run the ID checker or another suitable tool. <br>
</blockquote>
will do.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
58. IANA section. For your information, you will need to write a fairly
extensive IANA section setting up a number of different registries. You
will also need to register a number of SDP-related entries. I would
guess that you will need 5+ pages for this section. Look at
SIP or my RTSP draft (draft-ietf-mmusic-rfc2326bis-06.txt) for an
indication of what will be required. <br>
</blockquote>
being worked on.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
59. Extensibility. I think that, in addition to the IANA section, you
should explain what types of extensions you foresee, and how to handle
extensions within a resource type. <br>
</blockquote>
Will work on it.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
60. Section 14. The reference table needs to be split into a normative
and an informative part. <br>
  <br>
</blockquote>
Done.<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite">61. Copyright
and IPR statement needs to be updated. <br>
</blockquote>
This has been updated in the -04 draft. <br>
<br>
I appreciate your taking the time to provide a very detailed review of
the document. <br>
Thanks,<br>
Sarvi<br>
<blockquote cite="mid40F557BF.1@ericsson.com" type="cite"> <br>
  <br>
I would recommend that you get further WG external review. I will try
to read the other resource types, but will not promise any date. If
anything is unclear, simply ask for clarification. <br>
  <br>
  <br>
Cheers <br>
  <br>
Magnus Westerlund <br>
  <br>
Multimedia Technologies, Ericsson Research EAB/TVA/A <br>
---------------------------------------------------------------------- <br>
Ericsson AB&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Phone +46 8 4048287 <br>
Torshamsgatan 23&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Fax&nbsp;&nbsp; +46 8 7575550 <br>
S-164 80 Stockholm, Sweden | mailto: <a
 class="moz-txt-link-abbreviated"
 href="mailto:magnus.westerlund@ericsson.com">magnus.westerlund@ericsson.com</a>
  <br>
  <br>
  <br>
  <br>
_______________________________________________ <br>
Speechsc mailing list <br>
  <a class="moz-txt-link-abbreviated" href="mailto:Speechsc@ietf.org">Speechsc@ietf.org</a>
  <br>
  <a class="moz-txt-link-freetext"
 href="https://www1.ietf.org/mailman/listinfo/speechsc">https://www1.ietf.org/mailman/listinfo/speechsc</a>
  <br>
  <br>
</blockquote>
<br>
</body>
</html>

--------------030308040500010708080903--


--===============1099306632==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Speechsc mailing list
Speechsc@ietf.org
https://www1.ietf.org/mailman/listinfo/speechsc

--===============1099306632==--



From speechsc-bounces@ietf.org  Fri Oct 29 17:47:44 2004
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id RAA17374
	for <speechsc-web-archive@ietf.org>; Fri, 29 Oct 2004 17:47:43 -0400 (EDT)
Received: from megatron.ietf.org ([132.151.6.71])
	by ietf-mx.ietf.org with esmtp (Exim 4.33)
	id 1CNepQ-0003VP-UH
	for speechsc-web-archive@ietf.org; Fri, 29 Oct 2004 18:02:37 -0400
Received: from localhost.localdomain ([127.0.0.1] helo=megatron.ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32)
	id 1CNeDl-0000px-Vv; Fri, 29 Oct 2004 17:23:41 -0400
Received: from odin.ietf.org ([132.151.1.176] helo=ietf.org)
	by megatron.ietf.org with esmtp (Exim 4.32) id 1CNdmB-0005k8-0g
	for speechsc@megatron.ietf.org; Fri, 29 Oct 2004 16:55:11 -0400
Received: from ietf-mx.ietf.org (ietf-mx.ietf.org [132.151.6.1])
	by ietf.org (8.9.1a/8.9.1a) with ESMTP id QAA11061
	for <speechsc@ietf.org>; Fri, 29 Oct 2004 16:55:08 -0400 (EDT)
Received: from salvelinus.brooktrout.com ([204.176.205.6])
	by ietf-mx.ietf.org with esmtp (Exim 4.33) id 1CNe0W-0001p2-Iy
	for speechsc@ietf.org; Fri, 29 Oct 2004 17:10:00 -0400
Received: from nhmail2.needham.brooktrout.com (nhmail2.eng.brooktrout.com
	[204.176.205.242])
	by salvelinus.brooktrout.com (8.12.5/8.12.5) with ESMTP id
	i9TKmRLU004159
	for <speechsc@ietf.org>; Fri, 29 Oct 2004 16:48:29 -0400 (EDT)
Received: by nhmail2.eng.brooktrout.com with Internet Mail Service
	(5.5.2653.19) id <PQMPPGTD>; Fri, 29 Oct 2004 16:45:38 -0400
Message-ID: <EDD694D47377D7119C8400D0B77FD331C10BFF@nhmail2.eng.brooktrout.com>
From: Eric Burger <eburger@brooktrout.com>
To: "IETF SPEECHSC (E-mail)" <speechsc@ietf.org>
Date: Fri, 29 Oct 2004 16:44:30 -0400
MIME-Version: 1.0
X-Mailer: Internet Mail Service (5.5.2653.19)
Content-Type: text/plain
X-Spam-Score: 0.0 (/)
X-Scan-Signature: 34d35111647d654d033d58d318c0d21a
Subject: [Speechsc] FW: Protocol Action: 'RTP Payload Formats for European
	Telecommu
	nications Standards Institute (ETSI) European Standard ES  202 050,
	ES 20 2 211,
	and ES 202 212 Distributed Speech Recognition  Encoding' to Propos
	ed Standard 
X-BeenThere: speechsc@ietf.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: Speech Services Control Working Group <speechsc.ietf.org>
List-Unsubscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=unsubscribe>
List-Post: <mailto:speechsc@ietf.org>
List-Help: <mailto:speechsc-request@ietf.org?subject=help>
List-Subscribe: <https://www1.ietf.org/mailman/listinfo/speechsc>,
	<mailto:speechsc-request@ietf.org?subject=subscribe>
Sender: speechsc-bounces@ietf.org
Errors-To: speechsc-bounces@ietf.org
X-Spam-Score: 0.0 (/)
X-Scan-Signature: e1b0e72ff1bbd457ceef31828f216a86

Not directly in our area, but of interest to many of us:


The IESG has approved the following document:

- 'RTP Payload Formats for European Telecommunications Standards Institute 
   (ETSI) European Standard ES 202 050, ES 202 211, and ES 202 212
Distributed 
   Speech Recognition Encoding '
   <draft-ietf-avt-rtp-dsr-codecs-03.txt> as a Proposed Standard

This document is the product of the Audio/Video Transport Working Group. 

The IESG contact persons are Allison Mankin and Jon Peterson.

Technical Summary
 
    Distributed speech recognition (DSR) technology in this architecture
    uses a remote device acting as a thin client, also known as the
    front-end, to communicate with a speech recognition server, also
    called a speech engine, over a network connection, to obtain speech
    recognition services.  More details on DSR over the Internet can be
    found in RFC 3557.

    To achieve interoperability with different client devices and speech
    engines, the first ETSI standard DSR front-end ES 201 108 was
    published in early 2000 and an RTP packetization for ES 201 108
    frames is defined in RFC 3557 by IETF.

    In ES 202 050, ETSI issues another standard for an Advanced DSR
    front-end that provides substantially improved recognition
    performance when background noise is present.  The codecs in ES 202
    050 use a slightly different frame format from those of ES 201 108
    and thus the two do not interoperate with each other.

    The RTP packetization for ES 202 050 front-end defined in this
    document uses the same RTP packet format layout as that defined in
    RFC 3557.  The differences are in the DSR codec frame bit
    definition and the payload type MIME registration.

    The two further standards, ES 202 211 and ES 202 212, for which this
    document offers payloads, provide extensions to each of the DSR
    front-end standards.  These respective extensions allow the
    speech waveform to be reconstructed for human audition and they
    can also be used to improve recognition performance for tonal
    languages.  This is done by sending additional pitch and voicing
    information for each frame along with the recognition features.
 
Working Group Summary
 
   The document was sent to the ietf-types list for MIME type review and did
   not surface any concerns.   The DSR issues were reviewed by the SPEECHSC
   WG at the time of RFC 3557, and the Area Director viewed this document as
   having no new issues.   The working group supported advancing this 
   document. 
 
Protocol Quality
 
  This document was reviewed for the IESG by Magnus Westerlund and 
  Allison Mankin.

RFC Editor Notes


Section 4
OLD:

Author/Change controller:

       *  Qiaobing.Xie@motorola.com

       *  IETF Audio/Video transport working group

NEW:

Author:

*  Qiaobing.Xie@motorola.com

Change controller:

*  IETF Audio/Video transport working group delegated by the IESG

Section 5

The following paragraph should be moved out of Section 5, and become
Section 4.3 Congestion Control: 

    Congestion control for RTP MUST be used in accordance with RFC 3550
    [9], and any applicable RTP profile, e.g.  RFC 3551 [10].


_______________________________________________
IETF-Announce mailing list
IETF-Announce@ietf.org
https://www1.ietf.org/mailman/listinfo/ietf-announce

_______________________________________________
Speechsc mailing list
Speechsc@ietf.org
https://www1.ietf.org/mailman/listinfo/speechsc


